I have followed the tutorial for the Razor IMU and it worked perfectly when the IMU was directly connected to the PC. Currently, I am trying to interface the 9 DOF Razor IMU with the Arduino Uno by simply connecting RX to TX and TX to RX. Sadly, it doesn't work! So, I am just wondering, has anyone done this before? Or can anybody give me some hints? Much appreciated!
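For reference, this is the kind of minimal pass-through sketch I have in mind, assuming the Razor is running the stock tutorial firmware at 57600 baud and is wired to pins 10/11 (the baud rate, the pin choice, and the need for 3.3V-to-5V level shifting are all assumptions to check):

#include <SoftwareSerial.h>

// Uno pin 10 <- Razor TX, Uno pin 11 -> Razor RX (through a level shifter,
// since the Razor is a 3.3V board and the Uno drives 5V).
SoftwareSerial imu(10, 11); // RX, TX

void setup() {
  Serial.begin(57600);  // USB link to the PC, for inspection
  imu.begin(57600);     // assumed Razor firmware baud rate
}

void loop() {
  if (imu.available()) {
    Serial.write(imu.read()); // echo every IMU byte to the PC
  }
}

This keeps the Uno's hardware UART (pins 0/1) free; that UART is shared with the USB connection, which is a common source of exactly this kind of silence.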
|
Are there any commercially available compound vision sensors?
Not a simple 8-sensor system using photodiodes, but a genuine sensor that can provide a >32x32 compound matrix. Would some form of reduction in the granularity of a megapixel camera be a better option? The real purpose is to reduce processing time to a minimum while extracting the maximum basic information.
|
I'm looking for an Arduino-compatible depth sensor, NOT for water. What I need is a sensor very similar to an Xbox Kinect (but much smaller) that will tell me what is in front of the sensor and also the shape of the object. For example, if I place a cylindrical water bottle in front of the sensor, I would like to be able to figure out how far away the bottle is and also the shape of the object (in 2D; I don't need to know whether it is actually a cylinder, only the general shape). The sensor only needs to be accurate up to 1 meter away. Does this exist, and if so, where can I purchase one? If it does not exist wholly, what pieces do I need to buy to put it together? Thanks.
|
When I use a standard manual vacuum, I often notice that I have to pass over a spot several times because a single pass does not necessarily catch all the dirt. My eyes/brain can easily perceive this information visually, but I don't know how an autonomous robot vacuum can detect whether a pass over a patch of dirt was successful or not. What kind of sensor/action can I use to determine if the robot vacuum successfully picked up the dirt from a particular patch?
I would prefer to avoid a visual camera if at all possible because it would necessarily have to be mounted above the robot and thereby limit the range of reachable locations. Is there some other low-cost sensor that can accomplish the same task that can be placed low to the ground?
|
I know that Occupancy Grid Mapping requires the assumption that the robot's pose is always known when generating a map. However, I also know that in reality, position and orientation are usually treated as uncertain in practical applications. Assuming that my target mapping environment is the inside of a home, is there a way I can overcome the inherent robot pose uncertainty in a real-world application of Occupancy Grid Mapping without resorting to implementing SLAM? That is, what is a low-cost way to increase the certainty about my pose?
Or is Occupancy Grid Mapping only useful in theory and not in practice?
Update:
It is clear to me, from the responses given, that occupancy grid mapping is just one possible way to represent a map, not a method in and of itself. The heart of what I really want to know is: Can mapping be done without also solving the localization problem at the same time (i.e. SLAM) in real life applications?
|
I want to use a brushless motor for my line follower.
The problem is that most ESCs don't accept more than 400-500 updates/s due to the characteristics of the servo-style steering signal.
Is there a way to overcome this with a custom firmware flash, or am I out of luck?
|
I am struggling to find good links to the use of goal babbling in SLAM applications. Has this technique been used as a method for optimizing movement in a SLAM environment?
|
How can I power a wheel but let it spin freely when not under power?
I saw the question How can I modify a low cost hobby servo to run 'freely'? but I'm more interested in knowing if there is some sort of gearbox that disengages (moves to 'neutral') when no torque is being applied to it.
Two ideas that come to mind are:
A drive gear on a spring-loaded arm with a nominal amount of resistance before engaging. Perhaps when under power it would first use that power to move in one direction and engage with another gear; without power, the spring would return it to a non-engaged position.
A centrifugal clutch, although I'd like something that works at low RPMs as well.
The idea is to create a small bot that can move up and down a track, but if someone interacts with it when not under power it will just roll and won't damage the gearbox.
|
I have the quadcopter in the photo below. It has rotated theta degrees about the -y axis. I want to get the x and z components, in the local frame, of the weight W, which always points vertically downward.
We simply have:
$$W_x = W\sin(\theta), \qquad W_z = W\cos(\theta)$$
Suppose that $W = 4\,\mathrm{N}$ and $\theta = 30^{\circ}$; then:
$$W_x = -4\sin(-30^{\circ}) = 2\,\mathrm{N}, \qquad W_z = -4\cos(-30^{\circ}) = -3.464\,\mathrm{N}$$
The negative sign in the angle was used because the rotation is about the -y axis (counterclockwise).
$W_z$ seems correct, as it points along the negative local z axis, but $W_x$ is 2, which seems wrong because according to the diagram it is supposed to be -2, indicating that it points along the negative local x axis.
What's wrong with my simple calculation?
EDIT:
Using rotation matrices, we have the following rotation matrix when pitching (rotating about the y axis):
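(The matrix in the original post is an image; presumably it is the standard pitch rotation matrix,
$$
R_{y}(\theta) = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}
$$
or its transpose, depending on the sign convention.)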
This matrix is used to transform vectors from the inertial frame $X_n, Y_n, Z_n$ to the local frame $X_b, Y_b, Z_b$. To find the components of the weight W, we can multiply this matrix by W. Doing so, we get the same result:
$$W_x = W\sin(\theta), \qquad W_z = W\cos(\theta)$$
|
I have found a continuous controller in the following form:
$$
u(s) = \left( K_{p} + \frac{K_{i}}{s} + K_{d} \frac{N}{1 + \frac{N}{s}} \right)e(s)
$$
but since I need to convert it to a digital controller, I need something like:
$$
y_{k} = y_{k-1} + q_{0}e_{k} + q_{1}e_{k-1} + q_{2}e_{k-2}
$$
or anything else that I can use in a digital way.
Is there an algorithm to achieve such a transformation?
Actually, the problem is the term $N$ in the equation. At first I thought that it was a simple PID controller, but the $N$ term is beyond my understanding.
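To make the question concrete, here is a sketch of the kind of routine I am after, assuming a backward-Euler discretization ($s \approx (1 - z^{-1})/T_s$) of the controller above; the gains, $N$ and the sample time are placeholders:

#include <iostream>

// Discrete PID with filtered derivative, from backward-Euler substitution:
//   integral:   I_k = I_{k-1} + Ki*Ts*e_k
//   derivative: D_k = (D_{k-1} + Kd*N*(e_k - e_{k-1})) / (1 + N*Ts)
struct PID {
  double Kp, Ki, Kd, N, Ts;
  double i = 0.0, d = 0.0, e_prev = 0.0;

  double update(double e) {
    i += Ki * Ts * e;                                  // integral term
    d = (d + Kd * N * (e - e_prev)) / (1.0 + N * Ts);  // filtered derivative
    e_prev = e;
    return Kp * e + i + d;
  }
};

int main() {
  PID pid{1.0, 0.5, 0.1, 20.0, 0.01};    // example gains, Ts = 10 ms
  std::cout << pid.update(1.0) << "\n";  // one step on a unit error
}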
Thank you very much and happy Christmas!!
|
I want to make a robot arm where a joint is powerful enough to lift 8 kg at a distance of 1 meter.
This requires a torque of $\tau = rF = rmg \approx 80\ \mathrm{N \cdot m}$.
So now I am trying to find the requisite parts to put this together, i.e. motor + gears and then controller.
It seems that I will require a very high gear ratio to make this happen. For example, this motor: http://www.robotshop.com/ca/en/banebots-rs-550-motor-12v-19300rpm.html has these stats:
stall torque = 70.55 oz-in = 0.498 N·m
no-load rotation speed = 19300 rpm
To get my required torque, I need a gear ratio of 80/0.498 ≈ 161:1 (and the max speed drops to about 120 rpm).
My questions:
1) Do my calculations so far seem correct? It seems a bit suspect to me that an $8 motor with some gears can cause a 17.5 lb dumbbell to rotate around a circle of radius 1 m twice a second (I'm barely that strong). This kind of torque would be awesome for my application, but perhaps I'm missing something in the calculations and am being too optimistic (e.g. about efficiency).
2) Is it safe to operate a motor at such a high gear ratio? The gears are small, and I'm worried they'll easily crack/break/wear down over time. Does anyone know of example gears that I should consider for this?
Thank you very much for any help, cheers.
|
First of all please see this video : http://www.youtube.com/watch?v=n0dkn4ZIQVg
I think there is only one stepper motor (or servo) working in the mechanism. But as you can see, each flip counter works alone and separately.
It is not like a classical counter mechanism like this one: http://www.youtube.com/watch?v=rjWfIiaOFR4
How does it work?
|
I'm a beginner and need some help. I want to gain knowledge about robotics, so I need basic theoretical knowledge. What is the best way to start?
|
Help! I recently installed leJOS NXJ on my NXT brick, and soon after, my batteries died. I inserted new ones, and now I can't start my brick up. When I press the startup (orange) button, it makes a clicking sound, and when I let go, it stops. I have tried reflashing the brick with both leJOS NXJ and the NXT software, and both programs say something along the lines of "unable to locate brick." Any suggestions?
|
As far as I can tell, the Markov assumption is quite ubiquitous in probabilistic methods for robotics, and I can see why. The notion that you can summarize all of your robot's previous poses with its current pose makes many methods computationally tractable.
I'm just wondering if there are any classic examples of problems in robotics where the Markov assumption cannot be used at all. Under what circumstances is the future state of the robot necessarily dependent on the current and at least some past states? In such non-Markovian cases, what can be done to alleviate the computational expense? Is there a way to limit the dependence on previous states to the previous $k$ states, where $k$ can be chosen as small as desired?
|
What is a good website for buying 1.5V continuous motors? I'm looking to build a clockwork robot but I cannot find a motor small enough to fit inside my power box and I don't want to mount it outside of the box. I have a 1" by .75" space that the motor needs to fit into. I have found a few websites but they look sketchy and none of them have good reviews.
|
I know that this isn't a programming question, but it is robotics, so I thought you could all be flexible, since it's my first question.
Anyway. I love making robots from kits that come with instructions. It's always fun to use them afterwards because of the controllers I build with them.
The problem is that I can't find any more kits; they are all either too expensive, not what I'm looking for, or both.
Can anybody give me links to some good robot kits?
My price limit is £30 - £40.
Here are the three robot kits I have built. I need kits that are like these:
Robot Arm: http://www.amazon.co.uk/gp/aw/d/B002HXTONC/ref=mp_s_a_1_9?qid=1419721030&sr=8-9&pi=AC_SX110_SY165
Remote Control Robot Beetle: I can't post more than two links. Go to maplins and type in the name of the robot and you'll find it. It's a rover version of the robot arm.
3-in-1 All Terrain Robot Kit: http://www.maplin.co.uk/p/3-in-1-atr-all-terrain-robot-n12dp
I don't want to program this robot. I want it to be like the examples above: buy the kit, read the instructions, then build it.
Thank you all in advance!
PS. Any further information will be given if asked for.
|
I am reading some theories related to rigid body motion from the book "A Mathematical Introduction to Robotic Manipulation" by Prof. Richard Murray.
I am focusing on Chapter 2, Sec. 4 to derive some formulations. According to his introduction to the chapter, "we present a modern treatment of the theory of screws based on linear algebra and matrix groups". I find this approach rather understandable and comprehensive.
However, his scope in this chapter is limited to treating the inertial coordinate frame as what he calls the spatial frame, and the moving frame as the body frame. Are there any other references that treat the topic in the reversed order, with the spatial frame as the moving/non-inertial frame and the body frame as the inertial one?
Thank you!
|
In the dynamic model of the robot, it is obvious that we find the torques as functions of the angular accelerations of the joints as well as the linear acceleration of each link's center of mass along the three axes.
My question is regarding the values of these accelerations. In general, stepper motor specifications do not give the acceleration.
Thank you
|
I would like to implement the joint compatibility branch and bound (JCBB) technique in this link as a method to carry out data association. I've read the paper but am still confused about this function $f_{H_{i}}(x,y)$; I don't know exactly what they are trying to do. They compared their approach with individual compatibility nearest neighbor (ICNN). In the aforementioned method we have this function $f_{ij_{i}}(x,y)$. This function is simply the inverse measurement function, or what they call in their paper the implicit measurement function. With a laser sensor, given the observations in polar coordinates, we seek via the inverse measurement function to acquire their Cartesian coordinates. In ICNN, everything is clear because we have this function $f_{ij_{i}}(x,y)$, so it is easy to acquire the Jacobian $H_{ij_{i}}$, which is
$$
H_{ij_{i}} = \frac{\partial f_{ij_{i}}}{\partial \textbf{x}}
$$
For example, in the 2D case with a 2D laser sensor, $\textbf{x} = [x \ y \ \theta]$ and the inverse measurement function is
$$
m_{x} = x + r\cos(\phi + \theta) \\
m_{y} = y + r\sin(\phi + \theta)
$$
where $m_{x}$ and $m_{y}$ are the location of a landmark and
$$
r = \sqrt{ (m_{x}-x)^{2} + (m_{y}-y)^{2}} \\
\phi = \operatorname{atan2}\left( m_{y}-y,\; m_{x}-x \right) - \theta
$$
Using jacobian() in MATLAB, we can get $H_{ij_{i}}$. Any suggestions?
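For reference, differentiating the inverse measurement function above by hand, with respect to $\textbf{x} = [x \ y \ \theta]$ and treating $(r, \phi)$ as the fixed measurement, gives
$$
H = \begin{bmatrix} 1 & 0 & -r\sin(\phi + \theta) \\ 0 & 1 & r\cos(\phi + \theta) \end{bmatrix}
$$
which should match what jacobian() returns in MATLAB.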
|
I am cautiously moving forward with my new iRobot Create 2, planning on using a Raspberry Pi with ROSberry installed to control the Create 2. Discovered a problem with the pin out specs between the iRobot Roomba Open Interface (OI) Specification and the Create 2 Serial to 3.3V Logic document. Here is the discrepancy (marked by DISCREPANCY):
Pin | OI spec | Serial-to-3.3V spec
1   | Vpwr    | Roomba battery voltage
2   | Vpwr    | Roomba battery voltage
3   | RXD     | Roomba TX    <- DISCREPANCY
4   | TXD     | Roomba RX    <- DISCREPANCY
5   | BRC     | Ground       <- DISCREPANCY
6   | GND     | Ground
7   | GND     | Roomba BRC   <- DISCREPANCY
The discrepancy is with pins 3,4,5 & 7.
Don't want to fry my Raspberry Pi, any clarification and/or help appreciated.
|
The hole is slightly too big for the motor's shaft. I thought about hot-gluing them together.
Link to picture here
|
I have an application that needs an XBee and another module to be turned on and off digitally via a microcontroller.
The setup is: 2 XBees and an application board connected to the microcontroller. On power-on, I need one XBee and the microcontroller to come on and do their routines. After the uC gets the signal from the XBee (wirelessly, from a base station), the board has to turn on the other XBee and the application board. And when the operation is over, that XBee and board are to be powered back down. I don't want to put them in a sleep or low-power state; I want to power both of those devices off.
I was thinking of using a relay, but I cannot find a 3.3V, 1A SMD equivalent. I am looking for an SMD type of footprint to go on a very compact board.
What options do I have?
The XBee needs around 1A of power and the application board 500mA.
|
I've developed a quadrotor (only a simulation on my PC using ROS) and the feedback controller follows more or less the following structure:
where you can think of the process as the dynamic movement of the quadrotor (motion equations), the inner loop as the attitude controller (it just sets the orientation about all 3 axes), and the outer loop as the position controller that takes care of where the quadrotor actually is. Why are they separated? Because in many papers I found that the attitude controller (pitch, roll, yaw) needs to run at a higher frequency than any other controller in the system, while the position controller needs to run at a lower frequency.
The following picture is a better explanation of my description. Don't be scared... it is simpler than one might think:
Now I did it as in the paper. BUT I discovered that my quadrotor was really unstable, and I spent days and days trying to correct the gains of the controller without getting a stable system. My intuition told me that maybe they were running at the wrong frequencies, so I tried different frequency values for the position controller, making sure it was not a multiple of the main frequency (something like 1000 Hz and 355 Hz, for example).
Lately I removed the timer in my program (C++) and let the position controller run at the same frequency as the attitude controller, just because I had run out of ideas, and suddenly everything worked nicely.
So here is my question: what should I consider when my system has nested outer/inner controllers? How should I choose their rates?
Regards and happy new year!!!
|
I'm trying to understand the source code of ArduPlane. The MAVLink message is decoded using a set of _MAV_RETURN_???? functions, e.g. _MAV_RETURN_float
When I grep recursively for _MAV_RETURN_float, I cannot find where it is defined. I wonder if I'm missing something.
UPDATE
Here is the source code of Ardupilot, including ArduPlane.
https://github.com/diydrones/ardupilot
|
I've been working on a quadcopter for a while now. Recently I finished the interface for PID tuning, and it's leading me to question several design decisions.
The quad uses a Raspberry Pi as its pilot; the entire loop takes less than 20 ms. IMU data is gathered, the throttle speeds are calculated, and they are finally sent to an Arduino (Micro) over an SPI interface, where they are written with analogWrite(...) to each ESC.
Can a quadcopter fly with a loop that slow? 20 ms = 50 Hz?
|
First of all, happy new 2015!!!
I'm looking at my next simulator development: a tanker is flying at a constant speed of 350 knots (no acceleration, no change of altitude or direction). The tanker is approached from behind by a UAV which needs to refuel or transfer data through a wire. The UAV knows the direction, speed, and relative position of the tanker in order to approach it smoothly. It knows that at about 5 m from the tanker the contact is successful.
Here a picture I found on internet but it is clear more than thousand words:
To achieve the task I thought to implement a "simple" PID which controls the position and the velocity, but I have in mind two different design approaches:
Solution one: the motion equations of my system provide the position $x,y,z$ and velocity $V_x, V_y, V_z$ of the UAV (to simplify things I will consider just $x$, but of course $y,z$ must eventually be considered too). These are fed back and compared with the desired position (5 m) and velocity (350 knots) relative to the tanker. The feedback line is separate for each state, and the PIDs work quite independently, as in the following picture:
please note that to simplify things I never considered the acceleration.
Solution two: this is the trickier one, and I was thinking about it all day yesterday. In this case only one state is fed back to the desired setpoint: I would feed back only the velocity, then integrate it and feed the result into the second PID. Maybe the following picture is clearer:
But here I'm not really sure whether the second idea is conceptually wrong or could be workable. I'm pretty sure that the first one works and leads to good results, but I was wondering whether the second one is feasible or not recommended for a control design.
Regards
|
So I'm in the process of building my robot; it has encoders on every wheel measuring speed and position, and a compass sensor measuring heading.
I have 3 separate PID loops at the moment: I can either control the robot's speed, control its position, or make it follow a heading using a line-following-type algorithm.
Now I am completely stuck on how to merge these control loops: how do I control the robot's heading AND its speed, so that I can say to it, "go at 20 degrees at 3 m/s" or "go at 45 degrees for 5 metres then stop"?
Really I would like to be able to control all 3 (heading, speed and position), so I could say "go at 20 degrees for 10 metres at a speed of 5 m/s"; however, I have no idea how to merge the loops.
At the end of the day there are 3 inputs (heading, speed and position) and 1 output (PWM to the motors), so do I need to use some kind of MISO control scheme? If so, which one?
I've looked at cascaded control, but that only accepts 1 setpoint, where I want to set 3 different setpoints. Maybe some kind of state machine? I'm so stuck!!
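To illustrate the kind of merging I am imagining, here is a rough sketch of the classic differential-drive mixing (the pid() helper, its gains, and the signs are placeholders, not tested code):

// Heading and speed loops run in parallel; their outputs are mixed into
// the left/right wheel commands. Position could then be an outer loop
// that generates the speed setpoint (a cascade).
double pid(double error, double& integral, double kp, double ki) {
  integral += error;
  return kp * error + ki * integral;  // PI only, for brevity
}

void controlStep(double headingError, double speedError,
                 double& leftPwm, double& rightPwm) {
  static double iHeading = 0.0, iSpeed = 0.0;
  double turn = pid(headingError, iHeading, 2.0, 0.01);  // steering correction
  double base = pid(speedError, iSpeed, 1.0, 0.05);      // common forward drive
  leftPwm  = base + turn;  // differential mixing
  rightPwm = base - turn;
}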
|
I have some questions regarding building a gearbox for a motor to create a robot actuator. Given my desired motor and loads, I've determined that a gear ratio in the 400-700 range is adequate for my application.
Here is my motor choice for reference: http://www.robotshop.com/ca/en/banebots-rs-550-motor-12v-19300rpm.html
Here are my questions:
1) Mounting gears to a motor: if I have a motor with a shaft diameter of 0.12 in (3.2 mm), what size gear bore should I use, and how do I attach a gear to the shaft in practice? I'm not that mechanically inclined (yet).
2) Say I build a gearbox with a ratio of 625:1, as in: https://www.youtube.com/watch?v=lF-4qVBWy88 I have no idea how "durable" such a setup would be. For my application, I am looking at moving an 8 kg mass from 0.6 meters away, coming out to a total torque of 47 newton-meters. How can I tell whether the gears will break or not?
For reference, these are the gears I'm looking at (and I'm pretty sure they're the same ones in the video): http://www.vexrobotics.com/276-2169.html
3) Assuming those gears above were to fail, what type of gear material would be recommended for my load application of max 47 N·m?
4) What efficiencies can one expect from gears of different types? I've been conservatively assuming 50%, as another answer mentioned.
Thank you for any help, and please let me know if anything was unclear.
|
A friend and colleague of mine who studies robotics says that bipedal robots present much greater challenges for stability and locomotion than those with more legs.
So why is there so much effort to develop bipedal robots? Are there any practical advantages?
Of course, I see the advantage of having arm-like free appendages, but it seems like 4 legs and 2 arms would generally be a better design.
|
I would like to control the position and velocity of a DC motor like this one (ZY MY 1016, 24V, 10A, 200W). The motor comes with a 13-tooth sprocket, and the diameter of the shaft is 6 mm. It's not possible to attach an encoder to the back of the motor (see picture).
The angular velocity in the description is 2750 RPM. Which encoder do you recommend?
|
I'm attempting to customise some code for my DIY pentacopter frame.
To that end, I've modified some existing code and saved it under AP_MotorPenta.cpp and AP_MotorsPenta.h. I'm currently trying to upload the code onto my flight controller, but am unable to do so due to the following problems.
Problems
Unable to upload to my APM 2.6 ( #1)
Unable to select my pentacopter frame. (#2)
Problem (#1)
I've saved my customised files in the AP_Motors library and have compiled the ArduCopter 3.2 code in ArduPilot-Arduino-1.0.3-gcc-4.8.2-windows, after which I upload it using Mission Planner. However, when I am uploading the hex file, I get the following error:
"Uploaded Succeeded, but verify failed : exp E2 got 60 at 245760"
However, when I try uploading it directly from the modified Arduino IDE, I get a series of warnings, followed by the messages
avrdude:verification error, first mismatch at byte 0x3c000 0x60 !=
0xe2 avrdude: verification error; content mismatch
followed by the message
" avrdude done.Thank you. "
Does this mean that the upload of the firmware to my flight controller was successful? Also, is there any difference between uploading via Mission Planner and the modified Arduino IDE?
Problem #2
In Mission Planner there is originally the option to choose one of several frames (i.e. Quad/Hexa/Octo, etc.). After uploading my firmware, how would I go about selecting my penta frame for use? Also, is there anything further I would have to do?
Apologies in advance if the questions are rather inane, as I have little programming experience to speak of.
I would really appreciate any help I can get.
Thanks in advance!
|
I am planning to develop a monocular visual odometry system. Is there any indoor dataset available to the public, along with the ground truth, on which I can test my approach?
Note: I am already aware of the KITTI Vision Benchmark Suite which provides stereo data for outdoor vehicles.
If anyone has access to the datasets used in the following paper [SVO14], that would be even better: http://rpg.ifi.uzh.ch/docs/ICRA14_Forster.pdf
|
I am trying to design a robot to lift tote crates and transport them around in a localized area. I want to be able to carry 3 tote crates at a time, and the robot needs to be able to pick up the crates. I only want the robot to carry three at a time, to keep it small and mobile. I was thinking of a design with a central lift that could carry the crates. What would you suggest as a simple, ingenious way to create this robot?
|
I'm absolutely fascinated by the notion of a driverless car. I know there is a lot involved and there are many different approaches to the problem.
To narrow the scope of this question to something reasonable for the SE network, I'm curious to know if there is a common sequence of subproblems that every driverless car needs to solve at each timestep to make real-life, point-to-point autonomous transportation possible. I imagine that once the starting point and target destination on a given map are set, a self-driving car follows an algorithm that loops through certain operations to solve certain problems along the way. I'm more interested in knowing what those problems are at a high level, rather than detailed algorithms to solve them. Do all self-driving cars solve the same subproblems along the way?
|
Power block designing noob here.
I have a BeagleBone, 2x XBee Pros and another 500 mA device connected to the board I am building a PCB around.
I need some advice on whether to use linear voltage regulation or switching-mode regulation.
Secondly, if I am using a linear voltage regulation setup, do I need multiple regulators for the different devices?
My plan is to use a 2S 1000 mAh battery -> fuse -> 2x 1.5A LM1084s in parallel with the output feeding the BeagleBone, and an LM3940 for both XBees. Or is it better to have each XBee on its own LM3940 drawing power from a separate LM1084?
Linear regulators tend to get hot at full load; how is the performance of switching-mode regulators?
|
I've seen lots of examples of how to communicate from the Arduino to the computer, but the few that talked about computer-to-Arduino communication were very hard to understand.
My question is: what code can I use to control my Arduino Uno with my keyboard?
If it helps, I'm trying to set up WASD steering.
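For illustration, a minimal receiving sketch of the kind I mean (sending single characters from the Serial Monitor or any program that writes to the COM port; the drive functions are placeholders):

void forward()  { /* drive both motors forward */ }
void left()     { /* steer left */ }
void backward() { /* drive both motors backward */ }
void right()    { /* steer right */ }

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();   // one keypress = one character
    switch (c) {
      case 'w': forward();  break;
      case 'a': left();     break;
      case 's': backward(); break;
      case 'd': right();    break;
    }
  }
}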
|
I'm working on a project requiring HD (stereo) video processing. Most high-resolution (5MP+) sensors use the MIPI-CSI interface.
I managed to get a board with an Exynos 5 SoC. The SoC itself has 2 MIPI-CSI2 interfaces; the problem is that the pins to those interfaces are not exposed and it's (almost) impossible to reach them. So I decided to use the USB 3.0 buses.
The problem is that at significant bandwidth (~5.36 Gibibits/s per sensor), I don't think USB 3.0 will work out. Bandwidth = ColorDepth * ColorChannels * PixelCount * FPS, but this could be solved with a compressed stream (via a coprocessor).
I was thinking that Cypress' CYUSB306X chip was a good candidate for the job, but one of the problems is that I can't do BGA soldering by hand, nor have I been able to find a BGA soldering service in Switzerland.
Any ideas on other interfaces I could implement, or other coprocessors with a MIPI-CSI2 interface?
Just a final remark: space and weight are important, as this is supposed to be mounted on a drone.
|
I am planning to use 2.4 GHz XBee Pro 63 mW devices for a project that requires a coverage area of around 1.5-2 km.
When I go to select an antenna, there are various options: circularly polarized, vertically polarized, horizontally polarized, etc.
Which antenna would give coverage of a field? I can't have it directional (one point to another point); my devices will be moving around the field.
What type of polarization is recommended for this kind of setup? My base XBee will be at an elevation of around 40 m from the ground, so I have a clear line of sight to all the moving modules.
There are going to be around 20-30 moving modules streaming data at around 2-5 readings per second.
Should a +12 dBi antenna suffice for the application? And what about polarization?
|
I'm building a quadcopter for my final-year project. I have a set of equations describing attitude and altitude, but they involve $I_{xx}$, $I_{yy}$ and $I_{zz}$. None of the papers I have read describe how these are calculated; they simply choose values before their simulation. Can anyone help?
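For reference, one common first approximation treats each motor as a point mass $m$ at arm length $l$ from the center, plus a central hub of mass $M$ and radius $r$ modelled as a solid sphere:
$$
I_{xx} = I_{yy} \approx 2\,m\,l^{2} + \tfrac{2}{5}M r^{2}, \qquad I_{zz} \approx 4\,m\,l^{2} + \tfrac{2}{5}M r^{2}
$$
(measured values, e.g. from a pendulum test, are of course better than this approximation).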
|
I'm trying to get an Arduino to talk with a BeagleBone Black.
I have followed this tutorial for getting ttyO4 open on the BBB, and used the following command to set the serial line correctly (wiring is set up according to this tutorial):
stty -F /dev/ttyO4 cs8 9600 ignbrk -brkint -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke noflsh -ixon -crtscts
Next, data is sent using the following method:
echo s123 > /dev/ttyO4
The Arduino uses the following code to check for serial communication:
#include <SPI.h>

void setup(){ //////////////SETUP///////////////////////
  Serial.begin(9600);
  pinMode(12, OUTPUT); // LED used as a "message received" indicator
}

void loop(){
  if(Serial.available()>=4){
    digitalWrite(12, HIGH);  // turn the LED on (HIGH is the voltage level)
    delay(1000);             // wait for a second
    digitalWrite(12, LOW);   // turn the LED off
    delay(1000);             // wait for a second
    digitalWrite(12, HIGH);
    byte b1,b2,b3,b4;
    b1=Serial.read();        // read the four bytes of the "s123" message
    b2=Serial.read();
    b3=Serial.read();
    b4=Serial.read();
  }
}
However it seems no message is received. It does not give any error either.
As an alternative, I have also tried a variant of the code suggested in the wiring tutorial, resulting in the following code:
import sys
from bbio import *

Serial2.begin(9600)
for arg in sys.argv:
    print arg
    Serial2.write(arg)
    delay(5)
Called with python test s123, this printed s123 but the Arduino remained silent.
Edit: I have now also tried to follow the wiring tutorial exactly, which gave me the following sketch:
char inData[20]; // Allocate some space for the string
char inChar=-1; // Where to store the character read
byte index = 0; // Index into array; where to store the character
void setup() {
Serial.begin(9600);
pinMode(13, OUTPUT); // status LED on pin 13
digitalWrite(13, HIGH);
delay(2000);
digitalWrite(13, LOW);
delay(500);
}
void loop()
{
Serial.write("A");
digitalWrite(13, HIGH);
delay(100);
digitalWrite(13, LOW);
delay(100);
if (Comp("A")==0) {
digitalWrite(13, HIGH);
delay(1000);
digitalWrite(13, LOW);
delay(500);
}
}
char Comp(char* This) {
while (Serial.available() > 0) // Don't read unless you know there is data
{
if(index < 19) // One less than the size of the array
{
inChar = Serial.read(); // Read a character
inData[index] = inChar; // Store it
index++; // Increment where to write next
inData[index] = '\0'; // Null terminate the string
}
}
if (strcmp(inData,This) == 0) {
for (int i=0;i<19;i++) {
inData[i]=0;
}
index=0;
return(0);
}
else {
return(1);
}
}
and on the BBB I turn on the echo script with
/PyBBIO/examples$ sudo python serial_echo.py
The effect remains that there is no error, but also no data delivery.
|
I am building an estimator that solves for the camera pose relative to a reference frame which contains a known set of features and edges. Currently, the system works with an unscented Kalman filter and four known points (red LEDs) in the reference frame. I am now hoping to improve robustness by adding edges to the model as well as robust features. I would like to add additional points that are discovered by some OpenCV feature-finding function (FAST, cornerHarris, ...).
So far I have found the papers "Fusing Points and Lines for High Performance Tracking" and "Robust Extended Kalman Filtering For Camera Pose Tracking Using 2D to 3D Lines Correspondences", which seem to detail how to fuse edge and feature matching for pose estimation.
Is there a strategy to populate the known set of edges and features when it is impractical to measure them with a ruler/tape measure? My first thought is to start with a small known set of features (my red LEDs), then run some SLAM algorithm and keep all features/edges that have some minimum certainty.
Thanks a bunch!
Edit: I had misunderstood the RANSAC algorithm; it is not appropriate for my application.
For those interested, I am hoping to use an approach similar to the one presented in the following paper:
Youngrock Yoon, Akio Kosaka, Jae Byung Park and Avinash C. Kak. "A New Approach to the Use of Edge Extremities for Model-based Object Tracking." International Conference on Robotics and Automation, 2005.
|
After working for a long time with my Arduino Due, I needed a better and more powerful prototyping platform for my future projects, so I have placed an order for the NVIDIA Jetson Tegra K1 board, which runs Linux and supports CUDA-based development. Being a newbie to Linux, I have no idea where to start or what to do to get started with code execution on the Jetson board. Please suggest the initial steps required and where I can get familiar with the Linux environment...
Thank you
|
I'm currently thinking of extending the battery life of my quad by powering each motor and ESC individually. I will be using 1 dedicated battery for each motor and 1 dedicated battery for the flight controller itself, bringing the total to 5 batteries for the entire quad.
My thinking is that by powering each motor with a dedicated battery, for a given power draw/consumption, the flight time of my quad will be increased 4x, as each motor will have 4x the capacity to draw from. Putting the problem of weight aside, would this be a feasible idea?
Also, I am currently using just 1 battery to power all motors, and as such, I only have to plug in the single battery and I can calibrate my ESCs. How would I calibrate my ESCs if I am using dedicated batteries for my APM 2.6 and each motor? Would I be able to get away with powering my APM using the BEC on my ESCs?
|
I am confused by what precisely the term "Indirect Kalman Filter" or "Error-State Kalman Filter" means.
The most plausible definition I found is in Maybeck's book [1]:
As the name indicates, in the total state space (direct) formulation, total states such as vehicle position and velocity are among the state variables in the filter, and the measurements are INS accelerometer outputs and external source signals. In the error state space (indirect) formulation, the errors in the INS-indicated position and velocity are among the estimated variables, and each measurement presented to the filter is the difference between INS and external source data.
20 years later Roumeliotis et al. in [2] write:
The cumbersome modeling of the specific vehicle and its interaction with a dynamic environment is avoided by selecting gyro modeling instead. The gyro signal appears in the system (instead of the measurement) equations and thus the formulation of the problem requires an Indirect (error-state) Kalman filter approach.
I cannot understand the bold part, since Lefferts et al. in [3] write much earlier:
For autonomous spacecraft the use of inertial reference units as a model
replacement permits the circumvention of these problems.
And then proceed to show different variants of EKFs using gyro modeling that are clearly direct Kalman filters according to Maybeck's definition: the state consists only of the attitude quaternion and gyro bias, not error states. In fact, there is no separate INS whose error could be estimated with an error-state Kalman filter.
So my questions are:
Is there a different, maybe newer definition of indirect (error-state) Kalman Filters I am not aware of?
How are gyro modeling as opposed to using a proper dynamic model on the one hand and the decision whether to use a direct or indirect Kalman filter on the other hand related? I was under the impression that both are independent decisions.
[1] Maybeck, Peter S. Stochastic models, estimation, and control. Vol. 1. Academic press, 1979.
[2] Roumeliotis, Stergios I., Gaurav S. Sukhatme, and George A. Bekey. "Circumventing dynamic modeling: Evaluation of the error-state kalman filter applied to mobile robot localization." Robotics and Automation, 1999. Proceedings. 1999 IEEE International Conference on. Vol. 2. IEEE, 1999.
[3] Lefferts, Ern J., F. Landis Markley, and Malcolm D. Shuster. "Kalman filtering for spacecraft attitude estimation." Journal of Guidance, Control, and Dynamics 5.5 (1982): 417-429.
|
I am trying to build a hexapod with camera interfacing using a BeagleBone Black for a college project. I'm not sure what power supply to use to power the bot, keeping in mind that it should be portable (mobile) and that it should power about 18 servo motors along with the camera, WiFi and the processor. Your help is needed very badly, as I'm nearing the deadline for the project.
|
I'm trying to build a hexapod with a BeagleBone in a Linux environment (I'm thinking of using Ubuntu). What is the best language to use for coding the robot controls, camera and WiFi integration, etc.?
|
How would one go about passing power through a motor?
Let's say we have some basic robot which has a motor that slowly spins a limb, and on each end of that limb there is a motor which again spins a limb. Because the first motor is always going to be spinning, any wires would twist and eventually break, so a wired approach wouldn't work. The same goes for the subsequent motors.
I know that DC motors use brushes to get past this, but how is this generally solved in engineering/robotics? This must be a problem that has come up before, and there must be a solution to it.
Any ideas? :)
|
Reading some papers about visual odometry, I see many use inverse depth. Is it only the mathematical inverse of the depth (meaning 1/d), or does it represent something else? And what are the advantages of using it?
|
I'm not sure if this is the correct forum for this question about Automatic Control, but I'll try to post it anyway.
So, I've just started to learn about control systems and I have some trouble understanding Bode plots. My textbook is really unstructured, and the information I find on the Internet always seems a little too advanced for a beginner like me to grasp, so I hope you can help me understand this.
I would like to know what the information in the Bode plot can tell us about how the corresponding step response behaves. I know that a low phase margin will give an oscillatory step response and that the crossover frequency decides the rise time. But how can we see in the Bode plot whether the step response has a static (steady-state) error, and what does the phase plot of the Bode diagram actually mean?
|
I plan to use the LT1157 in my application PCB as a switch, controlled from a microcontroller, to set the on/off state of 2 module boards which will be connected to the PCB.
1st Load is 5V 1A.
2nd Load is 3.3V 500mA.
The LT1157 will get a 5V input at the Vs terminal.
Does anyone know how much voltage is required at the IN1 and IN2 pins? The datasheet doesn't say how much voltage can be used here. I am guessing it will be 5V, but can it do logic level with 3.3V? My microcontroller board gives an output of 3.3V and not 5V, so I'll have to make a logic-level converter before feeding the pins IN1 and IN2 if they're not 3.3V tolerant.
Please confirm if anyone has used this IC before.
|
I'm building a robot which is actually a rotating ball. All my circuitry will be inside this ball. I'm using a Raspberry Pi as the brains. Apart from the Raspberry Pi, I have an H-bridge IC (L298N), a 6-axis accelerometer + gyroscope (MPU6050), and probably some more digital components. These will work with a 5V or 3.3V supply. Another set of components are electromechanical devices: a 9 kg torque servo and two 1000 RPM DC motors.
Here are my questions:
Everything will run on battery power. I can get 3.3V and 5V supplies from a 9V battery using L1117-3.3V and 7805 regulators respectively. I know that it's not at all reliable to share the power source of the control circuitry with high-load devices like motors and servos. Should I have dedicated separate supplies for the electromechanical components and the control circuitry?
The servo will run on a 6V supply and the motors will run on a 12V supply. How should I go about this one? Again, separate batteries for the servo and the motors?
Can all of this work on a single high-capacity battery, something like 10000 mAh?
Here are some of my calculations:
Servo current (6V): at no load: ~450mA, at around 6kg load: ~800mA
Motor current (12V): at no load: ~500mA, at around 6kg load: ~950mA
RaspberryPi and other digital circuitry (5V + 3.3V): ~600mA (that includes an Xbee)
Thus, the overall current at a 6 kg load (with two motors) comes to around ~3.3A.
It would be really awesome if this thing gets done with a maximum of 2 batteries. Else, it may get messy while placing the batteries inside the ball. Space is limited!
|
I'm studying for a test in Automatic Control and I have some troubles understanding sensitivity functions and complementary sensitivity functions.
There's one assignment from an old exam that says:
"Someone suggests that you should reduce perturbations and measurement noise simultaneously. Explain why this is not possible."
The correct answer says:
"Since the sensitivity and complementary sensitivity transfer functions add up to 1, i.e. $S+T=1$, one cannot improve both the output disturbance and measurement error suppression at the same time."
I don't really understand this answer, and my textbook is not much help either, so I would really appreciate it if someone could explain how they arrived at this answer. Also, does the sensitivity function always represent the perturbations in the system, and the complementary sensitivity function the measurement noise? My textbook seems to imply this, but I'm really not sure if this is always true.
|
Our goal is to drive an autonomous robot with a differential locomotion system using two identical electric motors (one for each wheel) and an Arduino Uno. From our understanding, over time the motors can lose accuracy, which may result in one motor rotating more than the other.
Is there a way to account for possible error in the speeds of the motors so that the robot can end up in a very precise location?
Our thoughts were to have an indicator which would allow us to count the number of rotations of each motor and compensate if a noticeable difference began to appear.
|
Background:
I am new to PID. For my first PID project, I am using a simple P loop and 300-degree linear potentiometers for position feedback. I am using the RoboClaw 2x60A motor controller, which has 64 speed steps. Sometimes the potentiometers can vary as much as ±4 degrees when not in motion. I am using an Arduino Mega with a 10-bit ADC to control the motors.
My Question:
How can I filter or reduce the variance in the potentiometers? In addition, it takes a certain amount of time for the motors to react to a command, and this seems to throw off the P loop. How do I account for the latency in my program?
Example:
For this example the P loop was run every 33-36 milliseconds.
I will tell the motor to go to 250 deg/sec, and it will go to 275 deg/sec; the P loop then reacts by lowering the value sent to the motor, but the speed then increases to 400 deg/sec; the P loop lowers the value again, and the speed drops to 34 deg/sec.
Thanks so much for any help,
Joel
|
I am completely new to this site and robotics, but I have experience in programming, and programming microcontrollers.
I would like to create a grid of "pixels", where each "pixel" is a metal or wooden dowel that is programmed to push in and out, like a piston.
I'm imagining a lot of pixels, maybe 40x40, where each could be quite small in diameter (1/4"). The Arduino would have control over the linear movement - up and down - of each pixel.
Could anyone point me in the right direction for accomplishing this?
|
Has anyone used the XBee WiFi modules and done a range check on them?
With my laptop I get a range of around 400 m on industrial-level access points on a football field. How good are these devices? If I get the SMA-connector version and use a higher-gain antenna, am I looking at ranges of 250-500 m? (Talking 18-22 dBi gains here.)
|
I have data from an accelerometer that measures X, Y, Z acceleration and data from a gyroscope that measures pitch, roll and yaw. How would I combine this data to find the robot's location and orientation in 2D or 3D space?
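For context, the usual planar dead-reckoning scheme is to integrate the yaw rate for orientation and double-integrate the rotated accelerations for position:
$$
\theta_{k+1} = \theta_{k} + \omega_{z}\,\Delta t, \qquad
v_{k+1} = v_{k} + R(\theta_{k})\,a_{k}\,\Delta t, \qquad
p_{k+1} = p_{k} + v_{k}\,\Delta t
$$
though position obtained by double-integrating accelerometer data is known to drift very quickly without an external reference.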
|
I am starting a project using ArduCopter. I am familiar with Arduino, but am seeing ArduCopter for the first time. The command codes and everything are completely different compared to normal Arduino programming. I am not finding any help or command list for specific purposes in ArduCopter. Can anybody help by pointing me to any links which can help me out?
|
I have the following code:
void NewCore::spin(){
ros::Rate rate(10);
int first =1;
while(ros::ok()){
if (first){
askMath();first=0;}
ros::spinOnce();
rate.sleep();
}
}
int main(int argc, char **argv){
ros::init(argc, argv, "newCore");
NewCore nc;
nc.init();
nc.spin();
}
void NewCore::init(){
mngrSub = handle.subscribe<std_msgs::String>("/tawi/core/launch", 10, &NewCore::launchCallback, this);
mngrPub = handle.advertise<std_msgs::String>("/tawi/core/launch", 100);
mathSub = handle.subscribe<std_msgs::String>("/display", 10, &NewCore::launchCallback, this);
serSub = handle.subscribe<std_msgs::String>("/tawi/arduino/serial", 100,&NewCore::serialCallback,this);
mathPub = handle.advertise<std_msgs::String>("/questions", 100);
ballPub = handle.advertise<std_msgs::Int16>("/tawi/core/ballcount", 100);
nmbrPub = handle.advertise<std_msgs::Int16>("/tawi/core/number", 100);
}
void NewCore::askMath(){
ROS_INFO("addition1d<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<");
std_msgs::String question;
question.data = "1digitAddition";
mathPub.publish(question);
}
(code that isn't interesting has been removed)
Running this causes the following output to appear:
$ rosrun glados newCore
[ INFO] [1421236273.617723131]: addition1d<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
However, if I have the following command running during launch:
$ rostopic echo /questions
then it does not show me an initial message being sent.
Changing
if (first){
askMath();first=0;}
into
askMath();first=0;
does appear to work, but it then sends a message every cycle rather than just the one at the start.
Does anybody know what is wrong here?
|
I'm in the early stages of working with a simple robot arm, and learning about the Jacobian and inverse kinematics.
From my understanding, the Jacobian can be used to determine the linear and angular velocity of the end effector, given the angular velocities of all joints in the arm. Can it also be used to determine the Cartesian position of the end effector, given the angles and/or positions of the joints?
Furthermore, suppose that I want to determine the required angular velocities of the joints, in order to bring about a desired linear velocity of the end effector. Can this be done by simply inverting the Jacobian and plugging in the desired parameters?
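For reference, my understanding of the standard relations: the Cartesian pose comes from the forward kinematics $x = f(q)$ (not from the Jacobian itself), while the Jacobian links the velocities,
$$
\dot{x} = J(q)\,\dot{q} \qquad\Longrightarrow\qquad \dot{q} = J^{-1}(q)\,\dot{x},
$$
with the pseudoinverse $J^{+}$ used when $J$ is non-square or near-singular.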
|
I've made many PCBs at home, but there are still some mistakes. I tried the ironing (toner transfer) and drawing methods, but they don't work very well. I use Eagle CAD to design my PCBs. Please help me.
|
For a high school project I will be building a robot that draws an image on a whiteboard based on the instructions you give it. To accomplish this, a motor will move the pen on each axis, similar to how a 3D printer moves but without the Z axis. As far as code goes I'm fine, but I was wondering if anyone could give me insight on how to go about building the robot (i.e. what motors, the best system for moving on each axis, etc.). All help is appreciated, thanks.
|
I've been searching the internet for an answer to this question, but I haven't come across anything that will help me. Basically, we have a Meka humanoid robot in our lab, which has a shell head in which a Point Grey USB 3.0 camera is embedded. I use the pointgrey_camera_driver for obtaining images from the camera. The head has 2 degrees of freedom (up/down, left/right). I intend to use this camera with the ar_pose package to detect and track AR tags on objects. I understand that cameras must be calibrated for effective use (forgive me, I don't know much about cameras), which I do with the camera_calibration package.
My question is: since this camera is "mobile", meaning that when the head moves so does the camera, how would I go about calibrating it? Currently, I have the head fixed at one position; I've calibrated the camera in that position and got the parameters in the yaml file, which I can load for rectification. In particular, if the head moves, does the calibration file that I obtained in the previous position become invalid? If so, as asked before, how would I calibrate this camera for all of its possible fields of view (which can be reached by moving the head)?
This camera has different video modes, and in the mode I'm using I get a frame rate of 21 Hz (i.e. after the driver is launched I get 21 Hz for rostopic hz /camera/image_raw). However, when I rectify the image using image_proc, I get a frame rate of only about 3 Hz on rostopic hz /camera/image_rect_color. Is this normal? Is there a way for me to increase this frame rate?
Please let me know if any more information is required.
Thanks for your help!
|
Background:
I am using an Arduino Mega connected to a RoboClaw 2x60A motor driver. I asked this question about the system, but I have since narrowed the scope of the problem. I tried adding a bunch of different-sized capacitors between 5V and GND: when the RoboClaw is switched off, a 470 microfarad capacitor seems to eliminate all noise, but when I turn on the RoboClaw, no capacitance value I tried (4.7, 10, 100, 220, 320, 470, 540, 690, 1000, 1100 microfarads) seems to eliminate the noise. I even tried hooking up a 12V battery with a 5V regulator to the logic battery input on the RoboClaw and connecting it to the ground on the Arduino. Then I tried using a separate battery for the pots and connecting AREF to the +5V on that battery.
No matter what I try, when the RoboClaw is on, the potentiometer value will vary as much as ±6 degrees. I found the degrees using:
map(analogRead(A0),0,1023,0,300)
In addition, I took a bunch of data and graphed it, and found that if I took 25 instantaneous data points and averaged them together, it would significantly reduce the variance. I chose 25 because it takes 2.9 ms; 100 worked really well but took 11 ms. To help explain the averaging of analogRead, here is my code:
unsigned int num = 0;
for (int i = 0; i<25; i++){
num+=analogRead(A0);
}
potReading = num/25;
My Question:
What is my next step in eliminating this noise? Is there a formula I can use to find a better capacitance value? Should I try putting capacitors on each potentiometer between 5V and GND? Is there any other IC I should try to help with this? On my previous question someone mentioned optocouplers: what size would work best, and where in the circuit do they go? Is there code I can write to help reduce the variance beyond what I have written?
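One more software option I am considering is an exponential moving average on top of the 25-sample burst average (the alpha value is an arbitrary starting point):

const float alpha = 0.2;  // smaller = smoother but laggier
float filtered = 0.0;

int readPotSmoothed() {
  unsigned long sum = 0;
  for (int i = 0; i < 25; i++) {
    sum += analogRead(A0);       // burst average, as above
  }
  float burst = sum / 25.0;
  filtered += alpha * (burst - filtered);  // EMA low-pass
  return (int)filtered;
}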
Thanks so much for any help,
Joel
|
I would like to know the simple difference between kinematic, dynamic and differential constraints in robotic motion planning.
|
I would like to know the difference between state space and control space in relation to motion planning. I would like a simpler explanation.
|
I'm fairly new to 3D printing. I'm considering motherboards I might use to control my printer. I'm hoping to find a board that I can easily control using either:
ReplicatorG
MatterControl
I like these programs because they seem reasonably current, mature and straight-forward for beginners.
My question is can I control a Rambo V1.2 board from either of these programs? These programs don't include explicit support for the RAMBo as far as I can see, but maybe I'm missing how printing software works at this point?
What is a RAMBo?
The RAMBo V1.2 board is a creative-commons/open source design. It integrates an Arduino, Stepper-Motor drivers, Heater & Fan controls, Power Management and more.
An example implementation looks like this:
For more background info on what a RAMBo board is, you may read about it on the RepRap community wiki.
|
ATLAS Gets an Upgrade - the new video of the Atlas robot is out, so I'm curious about the IDE with which they are coding this thing.
|
I have a BeagleBone Black and would like to use it to control a servo for my robot. I'm mostly programming in ROS, and as such am looking preferably for a C++ solution. Is there an easy way of controlling a servo on a BBB running Ubuntu 14.04 on kernel 3.8? Most tutorials I have tried refer to files I do not have, so I'm unsure.
|
As far as I can tell, both LQR and PID controllers can be applied to the cart-pole (inverted pendulum) problem. What are the pros/cons of using one controller over the other for this particular problem? Are there any reasons/situations where I should prefer one over the other?
|
I am hoping someone might be able to nudge me in the right direction (apologies for the long post, but I wanted to get down all the information I have gained so far).
Basically I am after a solution to record the path my vessel took under water, for later analysis... like a breadcrumb trail.
Requirements:
Ideally it would have a range of at least 30 meters; however, if there were no other options I would accept down to 10 meters.
Working in fresh and salt water.
The vessel is small (25 cm x 8 cm), so size and power consumption are factors.
It would be traveling roughly parallel to the sea bed, at variable distances from the sea bed (range of 0-30 meters).
Does not need to be super accurate; anything less than 5 meters would be fine.
Measurement speed range of 0-4 mph.
Measure the direction the object is moving in (i.e. forwards, sideways, backwards)... I am planning to use a compass to ascertain the N, S, E, W heading.
Options I have discounted:
Accelerometers:
This was my initial thinking, but from doing some reading it seems they are not suited to my needs (unless you spend loads of money, and then the solution would end up being too heavy anyway).
Optical Flow:
Looks too new (from a consumer perspective) / complicated. I don't know what its range would be like. It also requires an additional sonar sensor.
Current favorites:
Sonar:
http://www.alibaba.com/product-detail/1mhz-waterproof-transducer-underwater-ultrasonic-sensor_1911722468.html
Its simplest use is distance from an object; however, one can use the Doppler effect to analyse the speed of a moving object.
40 m range, nice!
Presumptions:
If I fired this at an angle towards the seabed, could I deduce the speed at which the floor was 'moving' below, which would give me the speed of my vessel?
I am also presuming that I could interpret the direction of movement from the data?
I presume that the sensor would need to be aimed at an angle of around 45 degrees down towards the seabed?
Laser Rangefinder:
Although it works differently from the sonar, the premise of use looks the same, and thus I have the same queries as with the sonar above.
I presume that if I mounted the sensor behind high-quality glass (to waterproof it), performance would not be impacted that much.
This is a lot more costly, so if it does not give me any advantage over sonar I guess there is no point.
Water Flow Meter:
http://www.robotshop.com/en/adafruit-water-flow-sensor.html
Super low cost and simple compared with the other options; I would potentially use a funnel to increase the water pressure if it needs more sensitivity at low speed.
I would then just need to calibrate the pulses from the sensor to a speed reading.
A significant limitation of this is that it would register a speed of 0 if the vessel were simply drifting with the current... it's the speed over the seabed that I am interested in.
My current favorite option is sonar (with the option of using the water flow meter as a second data source)... however, are my sonar presumptions correct? Have I missed anything?
Any better ideas?
|
I am trying to implement a particle filter in MATLAB to track a robot's movement in 2D, but I'm stuck at the weight function. My robot is detected by a camera via two points, so a single measurement is a quadruple $(x_{p1}, y_{p1}, x_{p2}, y_{p2})$, and the state is the usual $(x, y, \alpha)$ describing its pose in 2D. As far as my understanding goes, I should assign a weight to each particle based on the likelihood of the current measurement given that particle's position.
I also have the measurement function to calculate an expected measurement for a particle, so basically I have, for each instant, the actual measurement and the measurement that a single particle would have generated if it were at the actual state.
Assuming all noises are Gaussian, how can I implement the weight function? I noticed the mvnpdf function in MATLAB, but I can't find a way to apply it to my problem.
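To make the question concrete, my understanding is that the weight should be the Gaussian likelihood of the actual measurement given the particle's predicted one,
$$
w^{[i]} \;\propto\; \exp\!\Big( -\tfrac{1}{2}\,\big(z - \hat{z}^{[i]}\big)^{T}\,\Sigma^{-1}\,\big(z - \hat{z}^{[i]}\big) \Big),
$$
where $z$ is the measured quadruple, $\hat{z}^{[i]}$ is the measurement predicted for particle $i$, and $\Sigma$ is a 4x4 measurement covariance, with the weights normalized to sum to 1 afterwards (mvnpdf(z, zhat, Sigma) evaluates exactly this density).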
|
Is there a sensor that will produce a sinusoidal signal phase-locked to a high-RPM (7000 RPM) shaft? I am attempting to build a coaxial helicopter based on the architecture described in this paper, which requires increasing and decreasing drive torque once per revolution, and I would like to do this modulation in hardware.
|
I have stumbled upon an equation (https://i.stack.imgur.com/hv64E.png) in which the probability of an occupancy grid map cell is calculated. My teacher insists that it's possible to approximate the algorithmic complexity of this long equation, but I'm not so sure.
The description of the factors used in this equation is given here on page 11 (item 26). Keeping in mind that this calculates occupancy probabilities of a 2-dimensional array from sensor measurements, is it really possible to approximate the algorithmic complexity (in Big O) of evaluating this equation just by looking at it, without delving much deeper into the details?
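To make concrete what "approximating the complexity by looking at it" would mean, here is my own toy example (not the linked equation): each sum or product contributes a multiplicative factor equal to its index range, so an expression like

$$\sum_{t=1}^{M}\sum_{i=1}^{N} f(z_t, m_i)$$

costs $O(MN)$ evaluations of $f$, assuming each evaluation of $f$ is $O(1)$. Is this the kind of reasoning my teacher has in mind, or does the structure of the occupancy equation make such a reading misleading?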
|
This is not a robotics question, but this Stack Exchange is the closest I could find to mechanical engineering. Please refer me to a better place to ask this, if one exists; hopefully someone might just know the answer.
I got a pull-back car for my boy at McDonald's, and it has two gears. It starts slow, then speeds up after about two seconds. It's impressive to me, especially given the inherent cheapness of toys sold by McDonald's. It feels solidly built as well.
I couldn't find anything related to this concept. The wiki on pullback motors does not include any information on multiple gears.
Any ideas on how this works?
|
Hello, I am building a differential drive robot which is equipped with quadrature encoders on both motors. My aim is to be able to estimate the heading (yaw angle) of the robot using a Kalman filter. I am using an MPU-9150 IMU. As of now I am just interested in the yaw angle, not roll/pitch. As I understand it, I will need to fuse the z-axis of the gyro with the magnetometer data in order to properly estimate the heading angle.
My problem is how to implement the bias and the covariance required for the Kalman filter to work. The gyroscope would be my process and the magnetometer data would be my update step, yes? From the datasheet I have found the angle random walk of my gyroscope to be 0.3 degrees/second at 10 Hz motion bandwidth, and a constant bias of 20 degrees/second at room temperature. If I am not mistaken, I should include the bias in my state prediction equation? Also, how do I get the covariance matrix Q?
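Here is a minimal two-state sketch (yaw and gyro bias) of how I currently imagine the filter; the numerical noise values are placeholders I made up rather than datasheet-derived:

import numpy as np

GYRO_NOISE_STD = 0.3    # deg/s, from the angle-random-walk figure above
BIAS_WALK_STD = 0.01    # deg/s/sqrt(s), placeholder for how fast the bias drifts
MAG_YAW_STD = 2.0       # deg, placeholder noise of the magnetometer heading

def predict(x, P, gyro_z, dt):
    # State x = [yaw, gyro_bias]; integrate the bias-corrected gyro rate.
    F = np.array([[1.0, -dt],
                  [0.0, 1.0]])
    x = np.array([x[0] + (gyro_z - x[1]) * dt,
                  x[1]])                          # bias modeled as near-constant
    Q = np.diag([GYRO_NOISE_STD ** 2 * dt,        # yaw variance from gyro noise
                 BIAS_WALK_STD ** 2 * dt])        # slow random walk of the bias
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, yaw_mag):
    # Correct the state with the magnetometer-derived heading.
    H = np.array([[1.0, 0.0]])
    R = np.array([[MAG_YAW_STD ** 2]])
    y = yaw_mag - x[0]                  # innovation (should be wrapped to +/-180)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain, shape (2, 1)
    x = x + K.flatten() * y
    P = (np.eye(2) - K @ H) @ P
    return x, P

Is this roughly the right structure, i.e. the bias lives in the state vector and Q encodes both the gyro noise and how fast the bias can wander?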
|
The time has come for my robot to get a more permanent computer than my laptop balanced on top of it.
I have selected a mini-ITX board that can be powered directly from a battery, along with some components to go with it, including a WiFi card, and now I'm thinking about the antenna I will need.
Constraints I have identified so far:
The robot's body is a closed aluminium box, so I think this rules out keeping the antenna inside.
The robot is intended to work outdoors, so it needs to be waterproof.
Vibrations might be an issue.
And the questions:
What parameters should I watch when selecting an antenna?
Is it OK to use an indoor stick antenna and seal the mounting point with hot glue?
Does it change anything if the antenna will be sticking out of a largish sheet of aluminium?
The robot will also have GPS, is it possible that the two will interact badly under some circumstances?
|
I'm building a robot and powering it with a Raspberry Pi. I am looking at this battery pack, but I am flexible about which one to use.
My problem is that I need to be able to charge the robot while it is still on, and apparently it is not good for a single battery pack to be charged while it is being used (they seemed to say so in the video). Am I wrong? Otherwise, how could I go about charging the robot while keeping the Raspberry Pi running?
EDIT: This is my first robot (other than my lego NXT kit), so I don't have any prior experience with robot batteries.
|
I'm developing my flight controller board on a Tiva LaunchPad for a quadcopter, and while calibrating the PID I discovered unexpected behaviour: sometimes the quadcopter seems to experience random angle errors. While trying to investigate, I've figured out that my code is fairly trying to compensate for them as soon as they appear, but does not cause them. Even more, I've discovered that this behaviour appears only when two (or more) motors are driven, while a one-motor system shows pretty good stabilisation.
Here is the code for the PWM output for the different motors:
// Mix the pitch/roll corrections into the four motor commands on top of the
// base throttle (torque_set); the sign pattern assumes a standard X layout.
torque[0] = (int16_t)(+ angles_correction.pitch - angles_correction.roll) + torque_set;
torque[1] = (int16_t)(+ angles_correction.pitch + angles_correction.roll) + torque_set;
torque[2] = (int16_t)(- angles_correction.pitch + angles_correction.roll) + torque_set;
torque[3] = (int16_t)(- angles_correction.pitch - angles_correction.roll) + torque_set;
and here are the recorded angles for the system with one motor and with two motors:
To be sure that it's not an algorithm problem, while recording these angles only the integral part of the PID was non-zero, so the angles were not even being stabilised.
My question is: could the ESCs be interfering with each other (in my quad they are quite close together, just a few centimeters apart) and cause such behaviour?
Thanks a lot!
|
My setup consists of a brushed motor (ex-cordless-drill type) connected to a motor controller, which is in turn connected to a LiPo battery and an R/C receiver. All my cables are fitted with XT60 connectors except for the cable that goes to the receiver, which is a 3-wire pin connector (the usual white, red and black).
The above setup is one of a pair which I am using in my battle robot. The motors are connected to the left and right drive wheels respectively. The problem is that the motors are turning in opposite directions.
For some reason I neglected to switch the polarity of the wires of one motor at the time I attached the XT60 connectors, and I really am not looking forward to re-soldering.
So my question is whether there is any fast way of reversing the direction of rotation without soldering. For instance, can the R/C transmitter (a Turnigy 9X without any modding) be programmed to swap up for down (and hence forward for reverse)?
Or can I maybe switch the pin connector going into the receiver? (I don't think so, because the ground is probably common, but worth asking just in case.)
Any ideas or should I just get soldering?
|
Just to give a bit of context, here are the equations I'm using for the angular accelerations:
$$\ddot{\phi} = \frac{1}{J_x}\,\tau_\phi \qquad \text{and} \qquad \ddot{\theta} = \frac{1}{J_y}\,\tau_\theta$$
So my plant gains would be
$$\frac{\ddot{\phi}}{\tau_\phi} = \frac{1}{J_x} \ \text{(about the x axis)} \qquad \text{and} \qquad \frac{\ddot{\theta}}{\tau_\theta} = \frac{1}{J_y} \ \text{(about the y axis)}$$
The basic PID structure is
$$u = K_p\,(\text{desired}-\text{measured}) + K_i \int (\text{desired}-\text{measured})\,dt + K_d\,\frac{d}{dt}(\text{desired}-\text{measured})$$
Let's just say my plant gain for angular acceleration around the x axis is $\phi_g$ and my PID controller is $P_g$. To obtain a controller, do I form the open-loop gain $L = \phi_g P_g$ and the closed-loop transfer function $L/(1+L)$?
My question is: am I right about what I'm doing, and do I implement the algorithm in time-domain or frequency-domain form? (A silly question, as the frequency domain is for analysis, but my only control experience is purely theoretical and entirely focused on analysis using root locus and Nyquist plots.)
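For reference, here is the time-domain (discrete) form I would expect to run on the board, with placeholder gains; the frequency-domain expressions above would then only be used offline for the root locus/Nyquist analysis:

class PID:
    # Discrete PID: u = Kp*e + Ki*sum(e*dt) + Kd*(e - e_prev)/dt
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired, measured):
        error = desired - measured
        self.integral += error * self.dt                 # rectangular integration
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. the torque about the x axis from the roll-angle error (placeholder gains):
roll_pid = PID(kp=1.2, ki=0.05, kd=0.4, dt=0.005)
# tau_phi = roll_pid.step(desired_roll, measured_roll)

Is this the right way to go from the closed-loop design to the implementation?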
|
Almost all SLAM back-end implementations compute chi-squared ($\chi^2$) estimates. The $\chi^2$ computation is usually used to compute the best-fit score of a model to the data.
How is it related to the optimization framework for SLAM?
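For reference, my understanding (please correct it if wrong) is that the $\chi^2$ value reported by graph-based back-ends is the very cost function being minimized,

$$\chi^2 = \sum_{ij} \mathbf{e}_{ij}^{\top}\,\Omega_{ij}\,\mathbf{e}_{ij},$$

where $\mathbf{e}_{ij}$ is the error of constraint $(i,j)$ and $\Omega_{ij}$ is its information matrix, so a lower $\chi^2$ means the estimated poses fit the measurements better.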
Rgds
Nitin
|
** If there's a better Stack Exchange site for this, let me know; the mechanical engineering one is closed, so I'm not sure where else is appropriate. **
Do mechanical systems have fuses?
In an electrical system, like a charging circuit for example, we prevent large loads from damaging the system with fuses and circuit breakers. These disconnect the inputs from the outputs if the load gets too high.
Do mechanical systems have a similar mechanism? Something that would prevent gears or linkages from breaking when the load gets too high by disconnecting the input from the output?
|
What's the quickest way to calculate a relative coordinate of a frame, as shown in the code below? In the KUKA Robot Language, ":" is referred to as the geometric operator.
I would like to perform this calculation in MATLAB, Scilab, SMath Studio or Java; could you please advise which library to use and/or how to proceed?
Frame TableTop=[x1 y1 z1 a1 b1 c1]
Frame RelCoord=[x2 y2 z2 a2 b2 c2]
Frame AbsCoord= TableTop:RelCoord
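My working assumption (please correct me) is that the geometric operator is just composition of homogeneous transforms, with KUKA's A/B/C being intrinsic Z-Y-X Euler angles; a NumPy sketch with placeholder frame values:

import numpy as np

def frame_to_matrix(x, y, z, a, b, c):
    # KUKA [X Y Z A B C] -> 4x4 transform; A about Z, B about Y, C about X (degrees).
    a, b, c = np.radians([a, b, c])
    Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(c), -np.sin(c)], [0, np.sin(c), np.cos(c)]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def matrix_to_frame(T):
    # Inverse conversion: 4x4 transform -> KUKA [X Y Z A B C] in degrees.
    R = T[:3, :3]
    a = np.arctan2(R[1, 0], R[0, 0])
    b = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    c = np.arctan2(R[2, 1], R[2, 2])
    return [*T[:3, 3], *np.degrees([a, b, c])]

T_table = frame_to_matrix(100, 50, 0, 90, 0, 0)   # placeholder TableTop values
T_rel = frame_to_matrix(10, 0, 0, 0, 0, 0)        # placeholder RelCoord values
abs_coord = matrix_to_frame(T_table @ T_rel)      # i.e. TableTop : RelCoord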
|
I have a BBB and it works quite well; however, when I power it over the barrel connector or over the VDD pins rather than over the USB connection, it doesn't automatically boot. When I plug in the barrel connector and the USB at the same time and then remove the USB, it continues running. This is running Ubuntu ARM. I have tested that the power supply stays between 4.95 and 5.04 V and is capable of sustaining this up to just over 1 ampere.
Edit: it appears the BBB does boot when supplied with 4.7 V from a lab power supply at ~0.4 A power consumption. That suggests something is wrong with the power supply. But how do I test it, seeing as I was able to verify that it can supply 1 A at 5 V?
This power supply works by feeding a 12 V battery into a step-down converter that brings it down to 5 V, if that matters.
|
A co-worker said he remembered a 2011(ish) ICRA(ish) paper which used just a touch/bump sensor and odometry to do SLAM. I can't find it; I've paged through the ICRA and IROS agendas and paper abstracts for 2010–2012, no joy. I found a couple of interesting papers, but my co-worker says these aren't the ones he remembered.
Fox 2012: http://staffwww.dcs.shef.ac.uk/people/C.Fox/fox_icra12.pdf
Tribelhorn & Dodds 2007: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.507.5398&rep=rep1&type=pdf
Background: I'm trying to make a Lego Mindstorms bot navigate and map our house. I also have an IR sensor, but my experience with these is that they are really noisy and at best can be considered an extended touch sensor.
Regards,
Winton
|
I am studying electronics engineering and have been fond of robots since my childhood. I have a small doubt: if I want to get placed in a robotics and automation based company, what should I study (reference books, software, etc.), particularly for cracking an interview? In simple words, as an electronics engineer, what other specific skills (like embedded C programming) should I go through?
|
I read up on the wheels of Curiosity, and also about the suspension. But is there a name for the steering mechanism? It looks similar in nature to the front landing gear on an airplane, but searching those terms didn't turn up an answer. I've attached a picture with the area of interest highlighted.
(Image: Gene Blevins/Reuters)
|
I am working on my final project, an autonomous quadcopter. My tasks are to make a quadcopter which does obstacle avoidance and lands automatically using ultrasonic sensors.
Any possible answers: how should I connect the HC-SR04 ultrasonic sensor to my APM 2.6 board? The APM 2.6 even has an I2C port.
|
I want to know how much the Pioneer P3DX costs. Nothing is mentioned on the website, and I don't want to fill out the form regarding this matter.
|
I am looking to build some custom hardware (nothing too fancy, just some motors, cameras and the like), and I need it to be controlled by my laptop (it's going to have to do a non-trivial amount of image processing).
Is there a way to attach $n$ motors to a laptop where $n<10$ via USB/e-SATA? It seems like something that should be very easy to solve, but I can't seem to find it anywhere.
I am not looking to get an Arduino/Raspberry Pi, really just connect the motors, and be able to control them individually. I am comfortable adding more power from a second source to supplement the USB power.
Ideas?
|
What does this sentence mean:
"The chassis maintains the average pitch angle of both rockers."
Put in other words, "the pitch angle of the chassis is the average of the pitch angles of the two rocker arms."
What is a pitch angle in this context?
Please explain both pitch angles.
|
Hello, I'm a new RC enthusiast.
Is anyone interested in RCs controlled through Xbox remotes? The project is to use an Xbox One or Xbox 360 remote to either hijack a DX3E or DX3C remote, or to create a transmitter compatible with the Spektrum receiver out of the Xbox remotes. I've seen applications that use WiFi, but I'm not sure that's the route I'm looking for. From what I've read, there is limited range and signal loss through the WiFi network, plus it may create more lag than would be desirable in racing. The RC is a Losi SCTE short course race truck. I'm not too savvy with electronic jargon, but I will study and learn what I can. Thanks for your thoughts.
|
I decided to work on a 2-wheeled robot position mapping problem. For that I have 2 DC motors with encoders from Pololu.
I have the following two questions:
Do I need to know the model of each motor in order to tune a controller?
What are the steps/approach used to get good mapping (let's say 1 mm maximum error)? (See the sketch after these questions for my starting point.)
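As context for question 2, here is the standard differential-drive dead-reckoning update I assume the mapping would start from (Python; the wheel parameters are placeholders, not my actual hardware values):

import math

TICKS_PER_REV = 2249.0   # encoder counts per output-shaft revolution (placeholder)
WHEEL_RADIUS = 0.04      # m (placeholder)
WHEEL_BASE = 0.15        # m, distance between the two wheels (placeholder)

def odometry_step(x, y, theta, d_ticks_left, d_ticks_right):
    # Integrate one pair of encoder deltas into the pose (x, y, theta).
    m_per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * m_per_tick
    d_right = d_ticks_right * m_per_tick
    d_center = (d_left + d_right) / 2.0         # distance traveled by the midpoint
    d_theta = (d_right - d_left) / WHEEL_BASE   # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

My worry is that wheel slip and encoder quantization make pure dead reckoning drift, which is why I am asking what else is needed to stay within 1 mm.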
|
How can I send a jpeg image to a microcontroller via USART?
|
Given a table of DH parameters for a set of joints, how would you convert the data into homogeneous transformation matrices for each joint? I've looked online but can't find a good tutorial.
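My current understanding (pieced together from scattered sources, so please verify) is that in the classic DH convention each parameter row (theta, d, a, alpha) maps to one joint transform T = Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha), e.g.:

import numpy as np

def dh_transform(theta, d, a, alpha):
    # Classic DH: Rot_z(theta) @ Trans_z(d) @ Trans_x(a) @ Rot_x(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chaining the per-joint transforms gives the end-effector pose:
# T_0n = dh_transform(*row_1) @ dh_transform(*row_2) @ ... @ dh_transform(*row_n)

Is this correct, and is there a good reference explaining it step by step?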
|
I would like to build a motorized robot with stereo vision that sends visual input to a PC over WiFi for processing. A Raspberry Pi would have been perfect for the task if it were possible to connect 2 cameras. The Pi 2 would be powerful enough for two USB webcams (at a minimum of 8 fps), I guess, but the shared USB bus seems to be a bottleneck (2 cameras + WiFi dongle).
What other options are there to send input from 2 cameras and control a motor (or an Arduino)?
|
I am wondering what the use is of two PID loops to control a quadcopter: one PID for stability (angle) control and another PID for rate control.
Why can't you just use one PID per axis to control a quadcopter, taking the current and desired angle as input and motor power as the output?
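For context, here is the cascaded structure as I understand it (a sketch with made-up gains, not a real flight controller):

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Outer (stability) loop: angle error -> desired rotation rate.
# Inner (rate) loop: rate error, measured directly by the gyro -> motor command.
angle_pid = PID(kp=4.0, ki=0.0, kd=0.0, dt=0.002)   # placeholder gains
rate_pid = PID(kp=0.7, ki=0.1, kd=0.01, dt=0.002)

def cascaded_step(desired_angle, measured_angle, gyro_rate):
    desired_rate = angle_pid.step(desired_angle, measured_angle)
    return rate_pid.step(desired_rate, gyro_rate)

The single-PID version would collapse these into one loop, so what exactly does the extra rate loop buy?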
|
I've noticed that industrial robot arms have very smooth, fast, and strong movement. Does anyone know what type of servos they use? I'm trying to build my own and don't want the jerky movement that is seen in a lot of DIY robot arms. Thanks.
|