I have two quaternions that indicate the initial orientation of a four-wheeled robot, each one relative to a different reference system.
The robot's orientation given by a quaternion q is not the same in the two reference systems: in one reference system the quaternion q1 might correspond to the robot facing positive x, while the same quaternion components q1 in the second reference system might correspond to the robot facing negative x.
I have two matrices which indicate the position of the robot over time, each in its corresponding reference system.
I want to find the corresponding points of the first matrix in the second reference system. Each matrix is built with a different sensor, so the results will be similar but not exactly the same.
I think I should find the transformation from the first reference system to the second and then apply it to each point of the first matrix. How can I find this transformation? I think the translation part is clear, but the rotation is not at all.
Edit:
@WildCrustacean
The solution proposed does not solve the problem; I think the reason is that each system represents the robot in a different way.
In the initial one (A) the robot moving forward with no rotation would increase along the X axis. In the goal reference system (B) the robot moving forward with no rotation would increase along the Z axis. (See figure for more details.)
First system (A)
______
|\ T \
| \_____\ z
|B | | : y ^
\ | R | \|
\|____| +--> x
Second system (B)
______
|\ T \
| \_____\ x
|B | | : ^
\ | R | |
\|____| +--> z
\
y
R: Right side
B: Back side
T: Top
I think I have to apply an extra rotation to change the initial quaternion that belongs to the first system to be in accordance with the second system before applying the transformation of your post.
A rotation of 180 degrees around x followed by one of 90 degrees around y would rotate from A to B.
This is how I tried to solve it:
# Quaternion to adjust reference system
first_quat = make_quaternion(unitary_x, pi) # Generates the quaternion that rotates pi around X
second_quat = make_quaternion(unitary_y, pi/2.0) # Generates the quaternion that rotates pi/2 around Y
composed_fs_q = first_quat*second_quat
# Quaternion to rotate from one reference system to the other
quaternion_ini_A = quaternion_ini_A*composed_fs_q
A2B_quaternion = quaternion_ini_B*(quaternion_ini_A.inverse())
A2B_quaternion is the quaternion that I use for the rotation, but it still doesn't perform the right rotation. Any ideas?
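For reference, here is a minimal, self-contained Python sketch of the pipeline described above: build the frame-alignment quaternion, compose it with the initial orientations, and apply the resulting A-to-B rotation (plus a translation) to every trajectory point. The helper functions, placeholder orientations and trajectory are illustrative assumptions, not the actual code or data from the question, and the multiplication order may need swapping depending on the quaternion convention used.

import numpy as np

def quat_from_axis_angle(axis, angle):
    # Unit quaternion [w, x, y, z] for a rotation of `angle` radians about `axis`.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def quat_mul(q, r):
    # Hamilton product q*r, both in [w, x, y, z] order.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    # Conjugate equals the inverse for unit quaternions.
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate(q, v):
    # Rotate 3-vector v by unit quaternion q.
    qv = np.concatenate(([0.0], v))
    return quat_mul(quat_mul(q, qv), quat_conj(q))[1:]

# Frame-alignment quaternion (180 deg about x, then 90 deg about y, as proposed above);
# whether this product means "x first, then y" depends on the multiplication convention,
# so the order may need to be swapped for a given library.
q_align = quat_mul(quat_from_axis_angle([1, 0, 0], np.pi),
                   quat_from_axis_angle([0, 1, 0], np.pi / 2.0))

# Placeholder initial orientations in systems A and B (not the real values).
q_ini_A = quat_from_axis_angle([0, 0, 1], 0.3)
q_ini_B = quat_from_axis_angle([0, 1, 0], 0.3)

# Same structure as the code above: align A's quaternion, then take B relative to it.
q_A2B = quat_mul(q_ini_B, quat_conj(quat_mul(q_ini_A, q_align)))

# Apply the rotation (plus a translation t) to every point of the first matrix.
traj_A = np.zeros((10, 3))             # placeholder Nx3 trajectory in system A
t = np.zeros(3)                        # placeholder translation
traj_B = np.array([rotate(q_A2B, p) + t for p in traj_A])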
|
I'm building a motion detection and object recognition camera with feedback control for a hexy robot. Fortunately most of the servo control is handled by the analog servo controls and the high-level logic can be implemented in python on a raspberry pi. What's the right combination of python modules to implement:
a daemon/service to trigger and execute image capture and processing
a daemon/service to regularly update the hexy with the latest motion plan and servo setpoints
the image processing for recognition and tracking of objects from the webcam
I'm currently using python-daemon for the services and comparing the various pypi opencv libraries to see if any of them look promising. Anyone have experience with these on a raspberry pi or ARM processor in a robotics application?
remotecv - remotecv is an OpenCV server for face recognition
ctypes-opencv - ctypes-opencv - A Python wrapper for OpenCV using ctypes
pyopencv - PyOpenCV - Boost.Python and NumPy
opencv-cython - An alternative OpenCV wrapper
CVtypes - Python OpenCV wrapper using ctypes
Tippy - another Toolbox for Image Processing, based on OpenCV
These each depend on a deep list of low-level libraries and/or compilers like Boost->numpy->gfortran or cython->gcc or ctypes. I'm concerned about compatibility and performance of these low-level libraries on Raspbian and an ARM processor.
Anyone with a known working architecture for image processing and real-time control in python on an ARM processor will get their answer upvoted and/or accepted.
|
Are there any decent numerical package libraries for Python besides numpy? Numpy relies on gfortran, which itself must be compiled correctly for your platform to avoid hidden/insidious numerical errors in numpy.
I need a matrix algebra package to do kinematics, path planning, and machine learning in python that isn't sensitive to gfortran version and compiler options.
|
Shall I filter (Kalman/lowpass) after getting the raw values from a sensor or after converting the raw values to usable data? Does it matter? If so, why?
Example:
Filter after getting raw values from IMU
or
filter after converting raw values to usable data, e.g. flight dynamics parameters.
|
The definition of a robot is as follows: "A robotic paradigm can be described by the relationship between the three primitives of robotics: Sense, Plan, and Act."
An example could be the famous "Kuka Robots". The Kuka robot is preprogrammed and mainly runs one loop over and over again. Some of them could have measurement sensors, but that is all. They do not think or plan, nor do they make decisions.
An automatic door opener used in a building is not a robot either, but according to the robotic paradigm definition it is more of a robot than a Kuka machine: it actually gets some data from a sensor, followed by planning and acting.
So why are Kuka machines called robots?
|
I am trying to implement a mechanism to make robots avoid being too close (say, at a distance less than d). I am not familiar with those systems and I have to implement a strategy to keep robots from getting too close to each other. Could anyone recommend some readings for such a problem or a set of keywords to search for? I don't know yet how to start.
|
I was looking up the motor parameters for some stepper motor where they listed the torque of the motor at different current/voltage but the torque they listed was in kg/cm.
How is kg/cm even a remotely acceptable unit for torque?
How do I calculate the torque in Nm from kg/cm?
Clarity note: it's not kg·cm (kilogram-force centimetres, where 1 kg·cm ≈ 0.098 N·m); the listing really says kg/cm.
Website where this happens.
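For what it's worth, if the listed figures are really kilogram-force centimetres, the conversion is just multiplication by standard gravity: 1 kgf·cm = 9.80665 N x 0.01 m ≈ 0.098 N·m. A small Python sketch (the 5 kg-cm example value is made up):

def kgfcm_to_nm(torque_kgfcm):
    # 1 kgf = 9.80665 N and 1 cm = 0.01 m, so 1 kgf.cm = 0.0980665 N.m
    return torque_kgfcm * 9.80665 * 0.01

print(kgfcm_to_nm(5.0))  # a "5 kg-cm" rating is about 0.49 N.m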
|
I am currently reading into the topic of stereo vision, using the book by Hartley & Zisserman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.
I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:
If I have two images, I somehow have to find the fundamental matrix, F, in order to find the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward; if not, it is slightly more complex (plenty of methods for this can be found in H&Z).
It is necessary to know F in order to obtain the epipolar lines. These are the lines used to find an image point x from the first image back in the second image.
Now comes the part where it gets a bit confusing for me: I would start by taking an image point x_i in the first picture and trying to find the corresponding point x_i’ in the second picture, using some matching algorithm. Using triangulation it is now possible to compute the real-world point X and from that its elevation. This process will be repeated for every pixel in the right image.
In the perfect world (no noise etc) triangulation will be done based on
x1=P1X
x2=P2X
In the real world it is necessary to find a best fit instead.
Doing this for all pixels will lead to the complete elevation map as desired, some pixels will however be impossible to match and therefore can't be triangulated.
What confuses me most is that I have the feeling that Hartley & Zisserman skip the entire discussion on how to obtain your point correspondences (matching?), and that the papers I read in addition to the book talk a lot about disparity maps, which aren’t mentioned in H&Z at all. However, I think I understood correctly that the disparity is simply the difference x1_i - x2_i?
Is this approach correct, and if not where did I make mistakes?
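As a rough illustration of the matching step (which is what the disparity-map papers are doing), here is a hedged OpenCV sketch: given a rectified stereo pair, block matching produces the per-pixel disparity x1_i - x2_i, which can then be reprojected to 3D points whose Z gives the elevation. The file names, matcher parameters and the Q matrix are placeholders, not values from an actual calibration.

import cv2
import numpy as np

# Rectified grayscale stereo pair (placeholder file names).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

# Block matching: numDisparities must be a multiple of 16, blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

# Q is the 4x4 reprojection matrix normally produced by stereo rectification
# (cv2.stereoRectify); an identity matrix is used here only as a placeholder.
Q = np.eye(4, dtype=np.float32)
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (X, Y, Z); Z is the elevation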
|
And if so, what was the highest score so far?
Some news articles suggest only parts of tests were aced.
Update, since people censored this question and closed it: there was an AI that took an IQ test and scored similarly to a 4-year-old.
http://phys.org/news/2015-10-ai-machine-iq-score-young.html
The AI system which they used is ConceptNet, an open-source project run by the MIT Common Sense Computing Initiative.
Results: It scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds
Abstract
We administered the Verbal IQ (VIQ) part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 AI system. The test questions (e.g., "Why do we shake hands?") were translated into ConceptNet 4 inputs using a combination of the simple natural language processing tools that come with ConceptNet together with short Python programs that we wrote. The question answering used a version of ConceptNet based on spectral methods. The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds. Large variations among subtests indicate potential areas of improvement. In particular, results were strongest for the Vocabulary and Similarities subtests, intermediate for the Information subtest, and lowest for the Comprehension and Word Reasoning subtests. Comprehension is the subtest most strongly associated with common sense. The large variations among subtests and ordinary common sense strongly suggest that the WPPSI-III VIQ results do not show that "ConceptNet has the verbal abilities a four-year-old." Rather, children's IQ tests offer one objective metric for the evaluation and comparison of AI systems. Also, this work continues previous research on Psychometric AI.
Update. A robot has passed the Japanese college entrance test and has an 80% chance of being accepted. Since it scored more than the average, that would make the IQ > 100, especially since college applicants have an IQ greater than average, and especially since Japanese are smarter than average humans. http://gizmodo.com/an-ai-program-in-japan-just-passed-a-college-entrance-e-1742758286
The Wall Street Journal reports that the program, developed by Japan’s National Institute of Informatics, took a multi-subject college entrance exam and passed with an above-average score of 511 points out of a possible 950. (The national average is 416.) With scores like that, it has an 8 out of 10 chance of being admitted to 441 private institutions in Japan, and 33 national ones.
|
From an old dust buster I've got this electric motor, the included battery pack and the charger:
I ripped everything apart (the dust buster was broken) and the motor still works. After playing around with it for a while and letting it lie around for about two weeks, it suddenly revved a lot slower. I assumed the battery pack was drained, so I hooked up the battery pack to the charger and let it charge for a night. Unfortunately the motor still turns very slowly.
Since I want to use this motor for my first home robotics project (making a kite fly with my computer), off I went to the local electronics store where they measured the charger to give 16V (even though it says 21V) and the battery pack to give about 5V. I then hooked up the motor directly to the charger, but unfortunately it doesn't even move an inch then.
So now I wonder:
Why doesn't the motor spin at all when hooking it up to the charger? (Could that be because the 250mA is too low?)
Why doesn't the battery pack charge at all? (this bothers me the most!)
All tips are welcome!
|
I am trying to find a joint like these for a robot I'm building. It is often called a swivel joint or a universal joint, but with a modified spider. I can't find one anywhere and would prefer not to make it. Searching for 'universal joint' returns the standard automotive type. Any help would be appreciated
|
This is a follow-up to this question: Prototyping a device with 25-100 small DC 3.0V motors, is Arduino a good fit?
I've decided based on the answer that sending the control signals through multiple TLC5947 chips, then sending the PWM signal to the motors is the best way to go. What I need to know is how to turn the PWM signals into something of the required power, since the TLC5947's won't be able to drive the motors by themselves.
I'm guessing an amplifier is what I'll need to make, but what's the best way to boost that many signals?
|
I'm trying to handle food grains like rice and wheat in an automated way (to cook simple dishes). For this I have to transfer grain from a larger container to a weighing scale. I know I can use solenoid valves for liquids, but all solid-handling valves seem to be too big (gate valves etc.) and meant for larger applications. Is there any better way to do this?
|
The dynamic programming algorithm refers to the Bellman equation. An open-loop control decides movement at the initial point, while a closed-loop control decides control during the movement. Now most robotic applications look like closed-loop control: at every point, the robot checks how it is doing with respect to some reward function; at least that is my thinking. Most participants in threads such as How mature is real-time programming in robotics? do not differentiate their scope, perhaps because they haven't thought about it. Anyway, I am interested to know:
How is dynamic programming used in robotics? Is there any research about DP usage in robotics?
|
FPGAs have good points, such as a lot of I/O pins, but then again you need to think about things at a very low level with flip-flops and pioneer in areas where things are not yet mature -- for example, see this question here about development tools for FPGAs -- this is my understanding currently! Now FPGAs have been used to create excellent dexterity in robotic hands like here. Some people market FPGAs for fast prototyping and "forward looking" designs like here; I don't fully understand them: if you don't need a lot of I/O pins for things such as sensors, why choose an FPGA for a robot? So
When should FPGA be chosen for a project in robotics?
|
Some vector math is involved here so prepare yourself.
I am developing a robotic arm that moves in two dimensions. It is a rotary-rotary design which looks roughly like the picture in this post:
Building Robotic arm joint
I am now trying to limit the speed of the end-effector. I am using Simulink and believe that the best way to limit the speed is to limit the rate of change of the X and Y coordinates that I tell it to move to.
Now, I also want the end-effector to be able to move in a straight line and believe that I can accomplish this by defining functions that calculate the maximum rate for movement in the X or Y direction based on the distance the arm is trying to travel. The equation I came up with is this:
xRate = abs(currentX - nextX) / max(abs(currentX - nextX), abs(currentY - nextY))
yRate = abs(currentY - nextY) / max(abs(currentX - nextX), abs(currentY - nextY))
So basically, xRate = distance in X / max of (distance in X, distance in Y).
Now, for the actual problem. Because this limits the speed in both X and Y, the end-effector can travel (for instance) 1 in./sec in both directions at the same time. Meaning that it is travelling at OVER 1 in./sec overall. If, however, it is only moving in ONE direction then it will only move at that 1 in./sec speed because there is no second component. It boils down to the fact that the max speed the arm can move is 'sqrt(2)' and the minimum is '1'.
My main question is: Given that I need to calculate a max xRate and a max yRate, how can I limit the overall speed of the end-effector?
Secondly, is there a way for me to implement a rate control that will limit the overall rate instead of limiting X and Y independently in Simulink?
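To make the sqrt(2) issue concrete, here is a small Python sketch that implements the xRate/yRate formulas above and prints the combined magnitude, plus one possible way to cap the resultant speed by scaling both components; the names and the max_speed value are illustrative, and this is not the Simulink implementation.

import math

def axis_rates(current_x, current_y, next_x, next_y):
    # The per-axis rates from above: each axis distance divided by the larger axis distance.
    dx = abs(current_x - next_x)
    dy = abs(current_y - next_y)
    largest = max(dx, dy)
    if largest == 0.0:
        return 0.0, 0.0
    return dx / largest, dy / largest

x_rate, y_rate = axis_rates(0.0, 0.0, 3.0, 3.0)
combined = math.hypot(x_rate, y_rate)   # sqrt(2) for a 45-degree move, 1.0 for a pure-axis move
print(x_rate, y_rate, combined)

# One way to cap the overall speed: scale both components by the vector magnitude,
# so the resultant never exceeds max_speed.
max_speed = 1.0  # in/sec, illustrative
scale = max_speed / combined if combined > 0.0 else 0.0
print(math.hypot(x_rate * scale, y_rate * scale))  # always <= max_speed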
|
I have an old GameCube that doesn't work and I want to gut it and fill it with Arduino boards and/or a Raspberry Pi if necessary. I want the project to eventually have some kind of AI aspect, but I'm also toying with the idea of using a wireless GameCube WaveBird controller to issue commands at the push of a button.
I guess this would be mostly good for testing purposes, but I'm mostly curious if and how I would go about making my Raspberry Pi understand GameCube controller input. Furthermore, would this kind of idea be feasible?
|
I'm building a quadcopter and I've seen that a LiPo battery must not be entirely discharged, otherwise it could be damaged.
How do you know when you have to stop your quadcopter or robot in order to prevent damage, since the voltage doesn't drop noticeably? Which part should monitor the battery charge? The ESCs? The BEC? The flight controller?
|
I'm building an Arduino-controlled pump system to be able to move fluids around. This needs to be fairly accurate, but extreme precision isn't required. Since a variety of liquids will be moved through the pump, I've determined a peristaltic pump to be the best fit. But I don't think I fully understand them, and have a few questions.
Since I'll need to purge the system... Can a peristaltic pump push air? Let's assume you have a 2m of tubing, and you pump a bunch of water through it. Can you remove the tube from the water reservoir so it is open to the air, and effectively purge the system of any remaining water?
Since I want to fairly accurately measure flow, could I simply count milliseconds instead of using a flowmeter? ... Will a peristaltic pump ALWAYS pump at a constant rate, regardless of the viscosity of the fluid? That is, will maple syrup come out at the same rate as water?
Shopping question, ignore I suppose ... Anyone know where I may find a fast/high-flow peristaltic pump? I'm looking to be able to pump, at a minimum, 0.5 oz/sec.
This would depend on #3 ... What sort of relay would I want for toggling this on/off with an Arduino?
|
I'm starting out with Gazebo (1.5) at the moment and am following a tutorial off the internet. In order to get Gazebo to find the model, the author advocates manually exporting the GAZEBO_MODEL_PATH environment variable via
export GAZEBO_MODEL_PATH=[...]/models:$GAZEBO_MODEL_PATH
But that will only work for the current terminal. So I wanted to change the environment variable permanently.
The Gazebo User Guide claims that GAZEBO_MODEL_PATH, along with all the other environment variables, is set by /usr/share/gazebo-1.5/setup.sh but my (virgin) Gazebo install doesn't list it:
export GAZEBO_MASTER_URI=http://localhost:11345
export GAZEBO_MODEL_DATABASE_URI=http://gazebosim.org/models
export GAZEBO_RESOURCE_PATH=/usr/share/gazebo-1.5:/usr/share/gazebo_models
export GAZEBO_PLUGIN_PATH=/usr/lib/gazebo-1.5/plugins
export LD_LIBRARY_PATH=/usr/lib/gazebo-1.5/plugins:${LD_LIBRARY_PATH}
export OGRE_RESOURCE_PATH=/usr/lib/i386-linux-gnu/OGRE-1.7.4
# This line is needed while we're relying on ROS's urdfdom library
export LD_LIBRARY_PATH=/opt/ros/fuerte/lib:${LD_LIBRARY_PATH}
But when I start Gazebo, GAZEBO_MODEL_PATH is already set to $HOME/.gazebo/models, so it must be set somewhere. I guess I could probably simply add GAZEBO_MODEL_PATH to the setup.sh script, but since it is set somewhere, I'd still like to know where and whether it is better practice to set it in there.
|
See the video below of my balancing robot.
Balancing robot
I was having trouble getting it to balance on hard surfaces but finally got it after playing with the PID gains a lot. Previously it was balancing just fine on carpet.
I set the PID gains by just picking a Kp, then increasing Ki until the robot oscillated very badly and tried to smash itself into the ground, then increasing Kd until it was finally stable. Here are the gains I'm using in the video.
Kp=20, Ki=4.5, Kd=45;
It will sit in one spot balancing without any problem. You can see in the video that it can even stop from falling after I give it a pretty good kick. The problem is that it stops from falling over but then greatly overshoots the other direction. In the video you can see when I give it just a small tap it runs the other way for a while before finally becoming stable again.
Any suggestions on what to try next?
|
In a quadrotor we need to change each motor's speed depending on its position in space. A higher update frequency results in more stability (I mean, if we can change a motor's speed 400 times per second instead of 100 times per second, we may stabilize our UAV quadrotor far better).
Now my question targets people who have made a UAV quadrotor before or have any information about ESCs. I want to know the minimum refresh rate for ESCs in a quadrotor to keep it stable. For example, might an ESC with a 50 Hz refresh rate be enough for stabilizing a quadrotor or not? I'm asking this question because high-speed ESCs are more expensive than slower ones.
I have this one. Might it work?
|
I need a microcontroller that can process a minimum of 2 MB of data per second.
How do I determine which processors will be able to do this?
Also, how can I calculate the per-second processing speed of any microcontroller?
I am very worried about my college project and I need help.
|
I have read many sources about the Kalman filter, yet none about the other approach to filtering, where the canonical parametrization is used instead of the moments parametrization.
What is the difference?
Other questions:
Using the IF, can I forget the KF but have to remember that prediction is more complicated? (link)
How can I imagine the uncertainty matrix turning into an ellipse? (Generally I see that the area is the uncertainty, but I mean the boundaries.)
Is simple addition of information in the IF possible only under the assumption that each sensor reads a different object? (Hence no association problem, which I posted here.)
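For context, a minimal numpy sketch of the two parametrizations being compared: the canonical (information) form stores the inverse covariance and the information vector, so a Gaussian with moments (mu, Sigma) maps to (Omega, xi) = (inv(Sigma), inv(Sigma)*mu) and back. The numbers are arbitrary placeholders.

import numpy as np

mu = np.array([1.0, 2.0])            # moments parametrization: mean...
sigma = np.array([[0.5, 0.1],
                  [0.1, 0.3]])       # ...and covariance

omega = np.linalg.inv(sigma)         # canonical parametrization: information matrix
xi = omega @ mu                      # information vector

sigma_back = np.linalg.inv(omega)    # converting back recovers the moments form
mu_back = sigma_back @ xi
print(np.allclose(mu, mu_back), np.allclose(sigma, sigma_back))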
|
I want to fuse objects coming from several sensors with different (sometimes overlapping!) fields of view. Having object lists, how can I determine whether some objects observed by different sensors are in fact the same object? Only then can I truly write an algorithm to predict the future state of such an object.
From literature I read those 4 steps:
Plot to track association (first update tracks estimates and then associate by "acceptance gate" or by statistical approach PDAF or JPDAF)
Track smoothing (lots of algorithms for generating new improved estimate, e.g.: EKF, UKF, PF)
Track initiation (create new tracks from unassociated plots)
Track maintenance (delete a track if it was not associated for the last M turns; also predict the new location of tracks that were associated, based on previous heading and speed)
So basically I am questioning point 1, the acceptance gate. For a single sensor I can imagine it can be just a comparison of the object's xy position with the sensor measurement, and eventually velocity and heading. My case is, however, that I already have ready object lists from each sensor in every cycle. There are algorithms for merging information about an object collected by different sensors (a great source is e.g. here: http://www.mathworks.de/matlabcentral/fileexchange/37807-measurement-fusion-state-vector-fusion), but the question is how to decide which objects should be fused and which left as they were. Fields of view may overlap partly, not totally.
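For reference on point 1, the usual statistical acceptance gate is a Mahalanobis-distance test between a track's predicted measurement and a candidate object, compared against a chi-square threshold; objects from different sensors that both gate to the same track are then candidates for fusion. A hedged numpy sketch (all numbers are placeholders):

import numpy as np

def in_gate(z, z_pred, S, threshold=9.21):
    # Accept candidate z for a track if the squared Mahalanobis distance between z and the
    # predicted measurement z_pred (with innovation covariance S) is below the chi-square
    # threshold; 9.21 is roughly the 99% point for 2 degrees of freedom.
    innovation = z - z_pred
    d2 = innovation @ np.linalg.inv(S) @ innovation
    return d2 < threshold

z_pred = np.array([10.0, 5.0])       # predicted x, y of an existing track
S = np.diag([0.5, 0.5])              # innovation covariance
candidate = np.array([10.4, 4.8])    # object reported by another sensor
print(in_gate(candidate, z_pred, S))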
|
These days, one often hears of cyber-physical systems. Reading on the subject, though, it is very unclear how those systems differ from distributed and/or embedded systems. Examples from Wikipedia itself only make them look more like traditional distributed systems. For example:
A real-world example of such a system is the Distributed Robot Garden at MIT in which a team of robots tend a garden of tomato plants. This system combines distributed sensing (each plant is equipped with a sensor node monitoring its status), navigation, manipulation and wireless networking.
Obviously, any distributed system consists of sensing, actuation (which can easily include navigation) and networking.
My question is, how exactly do cyber-physical systems differ from traditional distributed systems? Is it just a fancy name, or is there something considerably different about them?
|
I'm planning on programming a prebuilt robot to solve a maze as fast as possible. The robot has forward obstacle sensors (no side sensors) and 3-axis accelerometer. I'm planning on using the wall following algorithm. Is this the fastest possible algorithm? Also, since there are no side sensors, the robot needs to continuously turn to check if there is a wall on its side, so is there a clever way to use the accelerometer and sensors?
|
I am looking for an A3-size poster of the Mars Exploration Rover Spirit/Opportunity for robotics education.
www.sunstartoys.com gives a little postcard-sized picture of the MER along with its on-board components when you buy the toy. But this is not large enough for classroom purposes.
Does anyone know where to buy an A3-size poster of these MERs for robotics education?
|
I'm a newbie in robotics, and I'm doing a project on a dynamic Braille interface. Basically it's an 8x8 array of pins, which can be either fully up or fully down. How can I use as few motors as possible?
I'm thinking of using Arduino for easy interface with computer.
|
I am looking for the specific name of the wire used for robotic arm movement control and where I can find some of it online. I want to control it using a microcontroller, so please suggest a good development kit.
|
We have an air bearing for a planar xy motion. Today it consists of four pockets according to the picture.
In the current design there are no seals around the peripheries of the pockets, and we suspect that is the reason we get vibrations.
In the current design we control the pressure, the same for all four recesses. The flow is adjustable individually for each recess. In practice it is very hard to tune.
For the non-recess surfaces we have used Slydway, as we need to be able to operate it without pressure occasionally.
To try to solve the problem we plan to develop a prototype where we can try out the effect of using seals around the periphery of the pockets. The idea is something like this:
Questions
Is the idea of adding seals good? (sanity check)
Suggestions for seal materials? (I'm thinking of a porous material like felt or cigarette filter.)
Of course all suggestions are welcome.
Edit
I'm going to try to add grooves around the recesses to evacuate the air that leaks. My thinking is that this will give us a more defined area under pressure.
|
I would like to prevent a shaft from being pulled through its bearings - that is, press a plastic ring around it on either side. What are these rings called? They're not bearings or hubs. And where can I find them?
|
I have a 9-channel RF RX/TX and want to connect 3 motors to it. I am able to connect channel 1 with motor 1, but unable to connect channel 2 with motor 2 simultaneously with the Arduino.
Here is the code I am currently using:
int motor1Left = 7;// defines pin 7 as connected to the motor
int motor1Right= 9;// defines pin 9 as connected to the motor
int motor2Left = 22;// defines pin 22 as connected to the motor
int motor2Right = 26;// defines pin 26 as connected to the motor
int enable = 5;
int enable2 = 10;
int channel1 = 2; // defines the channels that are connected
int channel2 = 3;// to pins 2 and 3 of the arduino respectively
int Channel1 ; // Used later to
int Channel2 ; // store values
void setup ()
{
pinMode (motor1Left, OUTPUT);// initialises the motor pins
pinMode (motor1Right, OUTPUT);
pinMode (motor2Left, OUTPUT);
pinMode (motor2Right, OUTPUT);// as outputs
pinMode (channel1, INPUT);// initialises the channels
pinMode (channel2, INPUT);// as inputs
Serial.begin (9600); // Sets the baud rate to 9600 bps
}
void loop ()
{
Channel1 = (pulseIn (channel1, HIGH)); // Checks the value of channel1
Serial.println (Channel1); //Prints the channels value on the serial monitor
delay(1000);
Channel2 = (pulseIn (channel2, HIGH)); // Checks the value of channel1
Serial.println (Channel2); //Prints the channels value value on the serial monitor
delay(1000);
if (Channel1 > 1470 && Channel1 < 1500) /*These are the values that I got from my transmitter, which you may customize according to your transmitter values */
{
digitalWrite (motor1Left, LOW); // Sets both the
digitalWrite (motor1Right, LOW);// motors to low
analogWrite(enable, 100);
}
if (Channel1 < 1460) // Checks if Channel1 is lesser than 1300
{
digitalWrite (motor1Left, HIGH);// Turns the left
digitalWrite (motor1Right, LOW); // motor forward
analogWrite(enable, 100);
}
if (Channel1 > 1510) // Checks if Channel1 is greater than 1500
{
digitalWrite (motor1Left, LOW);// Turns the right
digitalWrite (motor1Right, HIGH);// motor forward
analogWrite(enable, 70);
}
if (Channel2 > 1480 && Channel1 < 1500 )
{
digitalWrite (motor2Left, LOW);// Sets both the
digitalWrite (motor2Right, LOW);// motors to low
analogWrite (enable2, 100);
}
if (Channel2 < 1300) // Checks if Channel2 is lesser than 1300
{
digitalWrite (motor2Left, LOW);// Turns the left
digitalWrite (motor2Right, HIGH);// motor backward
analogWrite (enable2, 100);
}
if (Channel2 > 1500) // Checks if Channel2 is greater than 1500
{
digitalWrite (motor2Left, HIGH);// Turns the right
digitalWrite (motor2Right, LOW);// motor backward
analogWrite (enable2, 100);
}
}
|
Today was my quadcopter's first "flight". I'm running megapirate on a Crius AIOP v2 with a Turnigy Talon v2 frame.
I only touched the throttle stick on my remote, nothing else. When I felt the quadcopter was about to take off, I pushed the throttle just a little bit more, and the quadcopter oscillated 2 or 3 times and the just flipped over, landing on the propellers.
So, I broke 2 props and my frame feels a bit loose; I'll probably have to tighten the screws (I hope...). How can I tune the software so it will stabilize nicely after takeoff?
Edit :
I don't know if it was true oscillation or just random air flows making it unstable. I did some more tests yesterday and it was quite OK (even if I crashed a few times). This time it was really oscillating, but it was quite windy outside and the quadcopter managed to stabilize after all. So I'll probably have to tune my PIDs and find a way to do it without crashing.
Edit 2 : After some PID tuning, I managed to stabilize my quadcopter pretty well but it's still oscillating just a little bit. I guess I'll have to slightly change the values to get a perfect stabilization.
|
I have a small device that's picking up small rocks from a pile and moving them to another place. It's a crude approach: push the whole pile onto a bigger gear and hope that one rock is pushed into one of the spaces between the gear teeth, carried around, and dropped off on the other side of the spinning gear. Here I want to know whether the machine successfully got a rock to that spot; if not, it should spin the gear until it turns up a single rock on the other side. If a rock is present at the spot, the gear should stop spinning until the rock is taken care of by the rest of the machine.
What kind of device can I use to sense whether I successfully got a rock onto the other side of the gear?
This is just part of a bigger system. To sum up, I need the sensor to signal when a rock has been singled out and separated from the rest, so the machine can continue work on that single rock.
I am building this using an Arduino to move the gear around, so the sensor needs to be something that can be controlled by an Arduino.
|
In order to build and operate a space elevator moving craft and people into space, there are two big challenges that have not been solved yet:
Finding a cable with enough tensile strength,
Moving stuff along the cable at a reasonable speed.
Apart from those two ones, what are the other technical challenges to solve, especially things that do not exist yet in robotics, and need to be invented?
|
This is part two of my larger robot; it follows up on what happens with the small rocks here: What kind of sensor do I need for knowing that something is placed at a position?
Now I am taking the rocks down a tube for placement. In this case they need to be oriented so they always stand up before they enter the tube; obviously a rectangular rock won't fit if it comes in sideways. The dimensions here are pretty small. The rocks are about 15 mm x 10 mm. The tube I use is actually a plastic drinking straw, and the material I use for the rest of the robot is Lego, powered by stepper motors which drive the conveyor belts that move the rocks. The controller is an Arduino.
(Sorry for the lousy illustration; if you know a good paint program for Mac like the one used to draw the picture in my other post, please tell me :-))
The rocks will always enter one at a time and have as much time as they need to be adjusted to fit and enter the tube so they fall down. The question is how to ensure all rocks are turned the right way when they get to the straw. I'm not sure if using Lego to build the robot is off topic here, but a solution involving Lego is preferable. And it has to be controlled by an Arduino.
General tips on how to split a complex task into subtasks robots can do are also welcome; is there any theory behind the most common subtasks a job requires when designing multiple robots to do it?
|
Imagine a "drone" and a target point on a 2d plane. Assuming the target is stationary, there are eight parameters:
P = my position
Q = target's position
V = my velocity
I = my moment of inertia
w = my angular velocity
s = my angular position
T = max thrust
U = max torque
The drone's job is to get to the target as fast as possible, obeying max torque and max thrust. There are only two ways to apply the torque, since this is only in a 2d plane. Thrust is restricted to only go in one direction relative to the orientation of the craft, and cannot be aimed without rotating the drone. Neglect any resistance, you can just pretend it is floating around in 2d outer space. Let's say the drone checks an equation at time interval t (maybe something like every .01 seconds), plugs in the parameters, and adjusts its torque and thrust accordingly.
What should the equations for thrust and torque be?
What have we tried?
We know that the time it takes for the drone to reach the target in the x-direction has to be the same as the time in the y-direction. There is going to have to be some integral over time in each dimension to account for the changing thrust in each direction, based on the total thrust and the changing angular position. I have no idea how to tie the torque and thrust together in a practical way where a function can just be called to give the thrust and torque that should be applied over the interval t, unless there is some other technique.
|
We are using the ArduIMU (V3) as our quadrotor's inertial measurement unit (we have a separate board to control all the motors, not the ArduIMU itself).
As mentioned here, the output rate of this module is only about 8 Hz.
Isn't that super slow for controlling a quadrotor? I'm asking because, as mentioned in this answer, a quadrotor needs at least 200 Hz of control frequency to easily stay in one spot, and our ESCs are configured to work at a 450 Hz refresh rate. Any working PID controller I have seen for quadrotors used at least 200-400 Hz of control frequency.
I asked similar question before from Ahmad Byagowi (one of the developers of ArduIMU ) and he answered:
The arduimu calculates the dcm matrices and that makes it so slow. If you disable the dcm output, you can get up to 100 hz gyro, acc and so on.
So, what will happen if I disable DCM in the firmware? Is it really important? We did a simulation before and our PID controller works pretty well without DCM.
|
I'm in the planning stages for a project using the Arduino Uno to control 8 distance sensors, and have run into a little roadblock: the Uno only has six analog input pins. So I'm wondering, is there any way for this to work? If so, how?
|
I have a robot with two wheels/motors and each has a quadrature encoder for odometry. Using the wheel/motor/encoder combo from Pololu, I get 48 transitions per rotation and my motors give me a max of 400 RPM. I've found that it seems to miss some of the encoder state changes with the Pololu wheel encoder library.
Would I run into issues or limitations on my Arduino Uno using interrupts to track the quadrature encoders while using PWM to drive my motors through an H-bridge chip?
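A quick back-of-the-envelope check using the numbers above (48 transitions per rotation, 400 RPM, two encoders); only the arithmetic is asserted here, not whether a particular library or H-bridge setup keeps up in practice.

transitions_per_rev = 48
max_rpm = 400
wheels = 2

per_wheel = transitions_per_rev * max_rpm / 60.0   # 320 interrupts per second per wheel
total = per_wheel * wheels                         # 640 interrupts per second in total
print(per_wheel, total)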
|
I would like to create an Arduino-based robot with 2 wheels, quadrature encoders on each wheel, an H-bridge driver chip (or motor controller) and a caster. I want to use the PID library to ensure the speed is proportional to the distance to travel.
At a conceptual level, (assuming the motors do not respond identically to PWM levels) how can I implement the PID control so that it travels in a straight line and at a speed proportional to the distance left to travel?
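At that conceptual level, one common structure is a cascaded loop: a base speed proportional to the remaining distance, a per-wheel speed PID on encoder feedback, and a cross-coupling term that trims the two setpoints by the encoder-count difference to keep the robot straight. Below is a rough Python-style sketch of that structure only; it is not Arduino PID-library code, the gains are placeholders, and pid_left/pid_right are assumed to be PID objects with an update(error) method.

def control_step(dist_remaining, counts_left, counts_right,
                 speed_left_meas, speed_right_meas, pid_left, pid_right,
                 k_dist=0.8, k_straight=0.02, max_speed=1.0):
    # One iteration of a cascaded straight-line drive controller (illustrative sketch).
    base_speed = min(max_speed, k_dist * dist_remaining)   # speed proportional to distance left

    # Cross-coupling: if the left wheel has travelled further, slow it down and speed up the right.
    heading_error = counts_left - counts_right
    trim = k_straight * heading_error

    setpoint_left = base_speed - trim
    setpoint_right = base_speed + trim

    # Each wheel's own PID turns its speed error into a motor command (e.g. a PWM duty cycle).
    cmd_left = pid_left.update(setpoint_left - speed_left_meas)
    cmd_right = pid_right.update(setpoint_right - speed_right_meas)
    return cmd_left, cmd_right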
|
How does the rocker-bogie mechanism keep the body flat / keep the solar panel almost flat all the time? I know there is a differential system that connects the left and right rocker-bogies together. But how does it actually work?
Edited: Please provide relevant references.
|
So I have a quadrocopter; it does come with a remote, but I intend to make certain modifications to the copter, like installing a camera, a mechanical manipulator, and other random modifications. The remote that comes with the copter isn't flexible enough to help with such functions, and it also lacks spare buttons.
I was wondering if I could somehow program the quadrocopter to respond to my Xbox controller. I was planning on using my laptop's Bluetooth connection to talk to the copter. The Xbox controller connected to the computer would then be used to control the quadrocopter. So my question is, how exactly do I program the controller? How do I go about making all of this possible?
I understand this question is really vague and that there are too many options out there, but I do need help figuring this out.
|
Expanding upon the title, I am querying the use of robotic exoskeletons to augment human strength and speed. If such a robot had the capacity, for example, to bear weight 5 times heavier than the wearer and move its robotic limbs twice as fast as the wearer, is there not a danger that such powerful and sharp movements could break the wearer's bones and seriously injure them, because the robot moves beyond their human capabilities?
The robot's means of producing movement is, I would think, important here, but I am unsure how. The nature of passive or actively powered movement and when each mode is used will also determine the performance of the exoskeleton. I am not well versed in this area, so I will appreciate any feedback.
|
While doing a literature review of mobile robots in general and mobile hexapods in particular I came across a control system defined as "Task level open loop" and "Joint level closed loop" system.
The present prototype robot has no external sensors by which its body state may be estimated. Thus, in our simulations and experiments, we have used joint space closed loop (“proprioceptive”) but task space open loop control strategies.
The relevant paper is A simple and highly mobile hexapod
What is the meaning of the terms "joint-level" and "task-level" in the context of the Rhex hexapod?
|
I have tried following a number of guides on the internet but most of them fall down as libfreenect does not exist in opkg, which is the apt-get of Angstrom. Has anyone got it working and if so what is the method?
|
I've been watching too much How It's Made, and I've been wondering how they build devices that spray/inject/dispense a finite amount of liquid (to within some amount of error). I wanted to try this for a hobby project: I'm working on something that dispenses dry goods in the amount I specify.
Do I use some kind of special nozzle/valve which can open and close at high speeds? How can I dispense a known quantity from a reservoir of a fluid substance onto each individual unit passing along an assembly line, or an amount specified by the user into another container?
|
The latest OSX documentation I found on the website is from 2011, and the latest build is from over a year ago. I'm a complete n00b to all things ROS and wanted to start playing with it. What is the easiest way?
Edit: this version of the installation instructions is more recent (April 2013), but it says that
OSX is not officially supported by ROS and the installation might fail for several reasons. This page does not (yet) contain instructions for most higher level ROS packages, only for the base system. This includes the middleware and command line tools but not much more.
"Does not contain instructions" also means it doesn't work? What do OSX users who work on ROS usually do? Run it on an Ubuntu VM? Install it just fine on their own on OSX, even though there aren't detailed instructions on the website?
|
Let's think of the following situations:
You are teaching a robot to play ping pong
You are teaching a program to calculate square root
You are teaching math to a kid in school
These situations (i.e. supervised learning), and many others have one thing (among others) in common: the learner gets a reward based on its performance.
My question is, what should the reward function look like? Is there a "best" answer, or does it depend on the situation? If it depends on the situation, how does one determine which reward function to pick?
For example, take the following three reward functions:
Function A says:
below a certain point, bad or worse are the same: you get nothing
there is a clear difference between almost good and perfect
Function B says:
you get reward linearly proportional to your performance
Function C says:
if your performance is bad, it's ok, you did your best: you still get some reward
there is not much difference between perfect and almost good
Intuitively, I'd think A would make the robot very focused and learn the exact pattern, but become stupid when dealing with similar patterns, while C would make it more adaptable to change at the cost of losing perfection.
One might also think of more complex functions, just to show a few:
So, how does one know which function to pick? Is it known which behavior would emerge from (at least) the basic A, B and C functions?
A side question is would this be fundamentally different for robots and human kids?
|
How can I detect when a stepper motor has stalled?
A google search led me to some people who say that when the stepper motor stalls, the current spikes up, which is easily detectable with a Hall sensor (or, I suppose, by any of the other current sensors mentioned at "How can I sense the motor's current?").
However, I measured the current through (one of the 4 wires of) my stepper motor, and it's always within a few percent of 0.5 A, whether my stepper driver is holding one position, moving it normally (which in my application is very slowly), or the stepper driver thinks it is telling the stepper to move normally, but the motor has pegged out against the hard limit. Measuring the current in the +12V power supply going to the stepper motor driver also seemed to give a fairly constant current. This may be because I turned down the current limit to that amount on my "chopper" stepper motor driver. Am I missing some key detail in the "measure the current" approach?
A google search led me to some other people that measure the back-EMF (BEMF) in one coil of the stepper during the time the stepper driver is only driving the other coil. But that only seems to distinguish between "a motor moving quickly" vs "a motor stopped", and doesn't seem to distinguish between my case of "a motor moving slowly" vs "a motor stopped". Is there some way to apply the BEMF approach even in a system where I always drive the stepper slowly, and never spin it quickly?
I'm currently using a stepper driver board with the TI DRV8825 chip on it, and I hoped the "fault" pin would tell me when the stepper motor has stalled against my hard stop. But it doesn't seem to be doing anything -- is it supposed to tell me about a stall, but I just have it wired up wrong? Is there some other chip or drive technique that detects when the stepper has stalled out against the hard stop? Is there some other technique for detecting a hard stall that I can "add on" to a system using an off-the-shelf stepper motor driver?
(Is there some other StackExchange site that is more appropriate for questions about motors and motor drivers?)
|
I bought my kid a robotics kit with several motors and an infrared remote control (you can steer the robot using IR remote control).
Now I want to take it to the next level and control the robots from a PC or a Raspberry Pi.
What is the simplest approach to do this?
I am thinking about 2 possible ways:
Find out the protocol the existing remote control uses and then emulate the IR signals using Arduino (Arduino is sending the IR signals).
Find a piece of hardware that presses the buttons on the remote control and control it via the Arduino (the Arduino sends signals to the button pushers; the remote control sends the IR signals to the robot).
|
I have a three-wheeled vehicle in a tricycle configuration attached to a fixed frame. Each wheel is powered by an AC electric motor. The AC motors are fed by motor controllers that take a speed demand. The single main wheel (which is also steerable) has a lower gear ratio than the rear wheels, so it has a theoretically higher top speed.
When the vehicle drives in a straight line each of the motor controllers are given identical speed requests. Unfortunately feedback from the controller indicates that some motors are pushing while some are pulling. In particular we have a common scenario where one rear wheel is pushing while the front wheel is trying to slow down. The third wheel will often have almost no current.
What can be done to make all three motors work together and avoid situations where they fight? Is there a way to change the request to the motor controller to encourage the drives to work together? Do we have to switch from a speed request setup to a current control setup? If so what is the appropriate way to control the motors then?
Let me know if I haven't included any important details and I will update my question.
|
I am developing a robot which paints using an airbrush (3D painting). I intend to use several colors like a CMYK printer, but I do not know how to convert the RGB colors in the computer into the dosage of CMYK colors.
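For reference, the simplest device-naive conversion (ignoring ICC color profiles and the way real inks or paints mix, which an airbrush will certainly care about) is the standard formula sketched below; treat it only as a starting point.

def rgb_to_cmyk(r, g, b):
    # Naive RGB (0-255) to CMYK (0-1) conversion with no color management.
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0        # pure black
    r_, g_, b_ = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r_, g_, b_)
    c = (1.0 - r_ - k) / (1.0 - k)
    m = (1.0 - g_ - k) / (1.0 - k)
    y = (1.0 - b_ - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))  # pure red -> (0.0, 1.0, 1.0, 0.0)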
|
I am interested in building a robot like the EZ-B, sold by ez-robot.com. It comes with an SDK for Visual Studio and has direct scripting in runtime through a USB, Bluetooth, Wi-Fi, IRC or HTTPS connection.
If I get a regular Arduino board, will I be able to control it remotely in the same way? From what I've read, an Arduino needs to hold the instructions in its own memory, but I would rather have the brain in the computer, feeding signals back and forth to the microcontroller.
Also, is an Arduino alone a step down, as the website nicely puts it?
|
I noticed that some IMU units are tuned to be sensitive to small changes, others to large changes, and some can be adjusted between different sensitivities. I am familiar with the use of a Kalman filter to normalize readings, but I was wondering if my UAV could benefit from a second IMU, with the two set at high and low sensitivities, to get even more accurate and timely information.
|
I have Arduino code for operating 2 servos, but we are using 4 servos and I am having trouble getting the other 2 to respond.
As far as I can make out, the program works like this: the angles for the servos calculated by the Processing side are sent out one after the other (shoulder, elbow, wrist, wrist2), then repeated. The Arduino program receives this data, stores it in an array, and then writes each value to the pin for the appropriate array element. So 0 is shoulder, 1 is elbow, 2 is wrist and 3 is wrist2.
I can easily get 2 servos to run with no problem. But when I try to add 1 or 2 more we get no response. Can anyone please help me get the other 2 servos to work? My knowledge of this code is rather limited, so any help is appreciated.
Processing data being sent to the Arduino:
byte out[] = new byte[4];
out[0] = byte(shoulderAngle);
out[1] = byte(elbowAngle);
out[2] = byte(wristAngle);
out[3] = byte(wrist2Angle);
port.write(out);
Arduino Code:
#include <Servo.h>
//Declares the servos.
Servo shoulder;
Servo elbow;
Servo wrist;
Servo wrist2;
//Setup servo positions.
int nextServo = 0;
int servoAngles[] = {0, 0};
//Define pins for each servo.
void setup()
{
shoulder.attach(50);
elbow.attach(51);
wrist.attach(52);
wrist2.attach(53);
Serial.begin(9600);
}
void loop()
{
if(Serial.available())
{
int servoAngle = Serial.read();
servoAngles[nextServo] = servoAngle;
nextServo++;
if(nextServo > 3)
{
nextServo = 0;
}
shoulder.write(servoAngles[0]);
elbow.write(servoAngles[1]);
wrist.write(servoAngles[2]);
wrist2.write(servoAngles[3]);
}
}
Sorry for the lengthy post but have been stuck for a while.
|
Concerning robots which rotate at high speed by spinning the drive motors in opposite directions, while still being able to simultaneously move in a direction (translate):
As far as I know this originated with competitive fighting robots, where it is known as "melty brain" or "tornado drive," according to wikipedia, and is based on alternately slowing down the motors on either side as they revolve around the centre of mass.
However, with the whole body spinning so fast, how is the current "heading" of the robot established and maintained?
|
I'm trying to send Arduino sensor data to a server using a GPRS shield (SIM900 shield from Geeetech). I have this particular setup because the data will be uploaded to a website and the device will be roaming. I can't use http://www.cosm.org because, to the best of my knowledge, that only updates every 15 minutes; I need to update about every 5-10 seconds.
In order to connect, I tried the code below to form a UDP connection, but the data does not get sent through to the receiving IP and port, and I don't know why. No errors occur on the Arduino side, and the server side has been shown to work with an iPhone app that sends a UDP message.
///connect
void connectUDP()
{
mySerial.println("AT+CSTT=\"APN\"");
delay(3000);
ShowSerialData();
mySerial.println("AT+CIICR");
delay(3000);
ShowSerialData();
mySerial.println("AT+CIFSR");
delay(3000);
ShowSerialData();
mySerial.println("AT+CIPSTART=\"UDP\",\"SERVER IP\",\"SERVER PORT\"");
delay(3000);
ShowSerialData();
mySerial.println();
}
///send udp packet to server
void sendUDP()
{
for(int x = 0; x < 30; x++){
mySerial.println("AT+CIPSEND");
delay(100);
ShowSerialData();
mySerial.println("\"hello world\"");
delay(100);
ShowSerialData();
mySerial.println((char)26);
delay(1000);
ShowSerialData();
}
mySerial.println();
//ShowSerialData();
}
The server side is as follows (written in python):
import SocketServer
PORTNO = 14
class handler(SocketServer.DatagramRequestHandler):
def handle(self):
newmsg = self.rfile.readline().rstrip()
print (newmsg)
self.wfile.write(self.server.oldmsg)
self.server.oldmsg = newmsg
s = SocketServer.UDPServer(('',PORTNO), handler)
print "Awaiting UDP messages on port %d" % PORTNO
s.oldmsg = "This is the starting message."
s.serve_forever()
I can see a possible solution might be to change it to a TCP connection, but I don't know how to do that...
|
I just got a kit and I'm not sure if it's me or not, but it appears one of the continuous servos might be broken. When I first plugged it into the microcontroller, it made a humming sound when I sent it commands. The second continuous servo didn't work at all.
I played around with different ports on the Arduino-based board, and to no avail, just a hum.
Then I removed the humming servo altogether and placed the second servo alone. The second continuous servo started to move in whatever direction I asked it to.
When I plugged the first one in, only the second moved.
Then I tried spinning them by hand: the second has a lot of resistance, while the first one has dramatically less, maybe 60% easier to spin by hand.
Is this something I can fix? Has anyone experienced these problems before?
Thanks in advance, you guys are great!
|
Following on from the previous question, I am trying to calculate how much one rocker would rotate when the other is being rotated. I have attached my calculation here.
I am trying to calculate the rotation of gear B, which connects to the right rocker. Given that gear A rotates 0.05 rad, what is the rotation of gear B in rad? The gear ratio A:D is 4:1, and D:B is 1:4.
In the end, I found that the rotation of gear A equals that of gear B. This somewhat puzzles me. Is my calculation correct?
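A quick numeric check of the chain as stated (which reading of the ratios is the speed-up and which the reduction is an assumption here, but either way the two factors cancel):

rot_A = 0.05               # rad, given
ratio_A_to_D = 4.0         # A:D = 4:1, read as D turning 4x per unit rotation of A (assumed)
ratio_D_to_B = 1.0 / 4.0   # D:B = 1:4, read as B turning 1/4 as much as D (assumed)

rot_B = rot_A * ratio_A_to_D * ratio_D_to_B
print(rot_B)               # 0.05 rad: the ratios cancel, so gears A and B rotate by the same amount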
|
I have recently been working on a little project. Unfortunately, I've run into a bit of a roadblock with controlling servos using serial commands. The servos do appear to move when I put any character into serial, but only a little. When I type in, say, 90 characters of random gibberish, both servos connected to my Arduino move several degrees. Here's my code:
#include <Servo.h>
Servo ULF; // Upper left front servo
Servo LLF; // Lower left front servo
byte index = 0;
int commandnum=1;
int steps = 0; // position of LLF servo
int partnum = 0; // unused for now
String command = ""; // the command we're building
void setup()
{
LLF.attach(0);
ULF.attach(1);
Serial.begin(9600);
}
void loop()
{
while(Serial.available() > 0) { // while there are more than zero bytes to read
char in = Serial.read();
if(in=='!') {
//! is escape character
commandnum++;
partnum = 0;
Serial.println("New Command. Command #: "+commandnum);
break;
}
command+=in;
if(in == ' ') {
partnum++;
//if we have a space, there's a new section to the command
}
if(command == "LLF") {
Serial.read(); //skip a space
Serial.println("Lower Left Foot Selected.");
int angle = Serial.parseInt(); // find the angle we want
Serial.println("ANGLE: "+String(angle));
for(int pos = 0; pos < angle; pos++) // for loop through positions to reach goal
{
LLF.write(pos); // write servo position
delay(15);
}
for(int pos = angle; pos > 0; pos--) // for loop through positions to reach goal
{
LLF.write(pos); // write servo position
delay(15);
}
}
}
}
Any help would be much appreciated.
EDIT: Another note, nothing is printed in the serial monitor.
Also, these are micro towerpro rc servos.
|
I was jogging the ABB IRB1410 and I noticed that the servo motors are humming even when the joints are not moving. The motors cut off only when the guard switch on the FlexPendant is released.
What kind of mechanism requires the drive motors to keep running even when the joints are not moving? I went through the manual but had no luck. I suppose the holding torque is provided by some braking mechanism, so I think I can rule that out.
|
I am currently debugging and tuning an EKF (Extended Kalman Filter). The task is classical mobile robot pose tracking where the landmarks are AR markers.
Sometimes I am surprised by how a measurement affects the estimate. When I look at and calculate the numbers and matrices involved, I can work out how the update step was executed and what exactly happened and why, but this is very tedious.
So I wonder if anyone is using some technique, trick or clever visualization to get a better feel of what is happening in the EKF update step?
UPDATE #1 (will be more specific and show first approximation of what I have in mind)
What I am looking for, is some way to visualize one update step in a way that gives me a feel of how each component of the measurement affects each component of the state.
My very first idea is to plot the measurement and its prediction together with some vectors taken from the K matrix. The vectors from K represent how the innovation vector (measurement - measurement prediction, not plotted) will affect each component of the state.
Currently I am working with an EKF where the state is 2D pose (x,y,angle) and the measurements are also 2D poses.
In the attached image (open it in a new page/tab to see it in full resolution), the (scaled) vector K(1,1:2) (MATLAB syntax to take a submatrix from the 3x3 matrix) should give an idea of how the first component of the EKF state will change with the current innovation vector, K(2,1:2) how the second component will change, etc. In this example, the innovation vector has a relatively large x component and it is aligned with vector K(2,1:2), so the second component of the state (the y coordinate) will change the most.
One problem with this plot is that it does not give a feel for how the third component (angle) of the innovation vector affects the state. The first component of the state increases a bit, contrary to what K(1,1:2) indicates; the third component of the innovation causes this, but currently I cannot visualize it.
A first improvement would be to visualize how the third component of the innovation affects the state. Then it would be nice to add covariance data to get a feel for how the K matrix is created.
UPDATE #2 Now the plot has vectors in state space that show how each component of the measurement changes the state. From this plot, I can see that the third component of the measurement changes the state the most.
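For reference, a minimal matplotlib sketch of the kind of plot described above. The gain K and the innovation below are made-up numbers; in a real run they come straight out of the EKF update step.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative values only -- take K and the innovation from the actual EKF update.
K = np.array([[0.40, 0.10, 0.02],
              [0.10, 0.50, 0.05],
              [0.05, 0.02, 0.30]])          # Kalman gain, state (x, y, angle) x measurement
innovation = np.array([0.20, 0.05, 0.10])   # z - h(x)
dx = K @ innovation                          # full state correction applied by this update

fig, ax = plt.subplots()
ax.quiver(0, 0, innovation[0], innovation[1], angles='xy', scale_units='xy', scale=1,
          color='k', label='innovation (x, y part)')
# Each column j of K says how innovation component j moves the state; plotting the
# (x, y) rows of each column, scaled by that component, shows its contribution.
for j, c in enumerate(['r', 'g', 'b']):
    contrib = K[0:2, j] * innovation[j]
    ax.quiver(0, 0, contrib[0], contrib[1], angles='xy', scale_units='xy', scale=1,
              color=c, label='contribution of innovation[%d]' % j)
ax.set_aspect('equal')
ax.legend()
plt.show()

Plotting the contribution of the angle component of the innovation this way makes visible the effect that an (x, y)-only plot misses.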
|
I am building a 4-wheeled, knee-high robot with music and speakers on top that will follow a target person as the target moves around. I would like some help with the setup for tracking the target. The most obvious solutions are ultrasound or infrared sensors or some kind of vision tracking, but for this application, I don't want to use them.
Imagine that the robot is placed into a crowded area and asked to move towards a particular person in the area (for the sake of simplicity, assume the person is less than 5 meters away, but could be obscured by an object). Ideally, if someone walked between the target and the robot, the robot would not lose its path (as would happen with vision-based sensing).
Thanks!
|
I am building a robot that will follow a target as the target moves around. I would like some help with the setup for tracking the target. The most obvious solutions are Ultrasound or Infrared sensors, but for this application, they won't work. Imagine that the robot is placed into a crowded area and asked to move towards a particular person in the area (for the sake of simplicity, assume the person is less than 5 meters away). Is there some kind of radar or radio solution to this, or anything?
|
I bought a new Roboduino atmega 328 board. Basically Roboduino is a modded version of Arduino UNO made by robokits.co.in. The problem is
On Windows Platform:
When I tried to upload the simple Blink program listed in the examples of Arduino IDE 1.0.4, I got the error
avrdude: stk500_getsync(): not in sync: resp=0x00
I chose the correct COM port after verifying it with the Device manager. I installed the Prolific Drivers for the board. I selected the board as Arduino UNO in Arduino IDE.
The complete verbose for the upload is as follow:
D:\Softwares\Installed Files\arduino-1.0.4\hardware/tools/avr/bin/avrdude -CD:\Softwares\Installed Files\arduino-1.0.4\hardware/tools/avr/etc/avrdude.conf -v -v -v -v -patmega328p -carduino -P\\.\COM10 -b115200 -D -Uflash:w:C:\Users\ANKITS~1\AppData\Local\Temp\build5865304215250534760.tmp\Blink.cpp.hex:i
avrdude: Version 5.11, compiled on Sep 2 2011 at 19:38:36
Copyright (c) 2000-2005 Brian Dean, http://www.bdmicro.com/
Copyright (c) 2007-2009 Joerg Wunsch
System wide configuration file is "D:\Softwares\Installed Files\arduino-1.0.4\hardware/tools/avr/etc/avrdude.conf"
Using Port : \\.\COM10
Using Programmer : arduino
Overriding Baud Rate : 115200
avrdude: Send: 0 [30] [20]
avrdude: Send: 0 [30] [20]
avrdude: Send: 0 [30] [20]
avrdude: Recv:
avrdude: stk500_getsync(): not in sync: resp=0x00
avrdude done. Thank you.
When I plug in the board, the power LED turns on and the pin 13 LED blinks once. While the IDE shows "uploading", the pin 13 LED blinks 3-4 times and then the error appears on the screen; sometimes in between it also blinks randomly 5-6 times. I also tried other example programs, with the same result.
I'm using 32 bit Windows 7 Ultimate and the baud rate is set to 9600.
On Ubuntu 13.04:
I downloaded the IDE from the Software Center and was added to the dialout group on the first run. After connecting the board to my PC I ran two commands. lsusb returned the following output:
Bus 004 Device 003: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
and then dmesg. After this, when I tried to upload the same Blink program, it gave me the following error: avrdude: stk500_recv(): programmer is not responding. I'm using 64-bit Ubuntu 13.04 and selected Arduino UNO as the board.
Thank you for reading this long. Please provide me suggestions for the problem.
|
How can I provide more power to a DC motor that is in series behind a receiver circuit hacked out of a cheap RC car without burning up the receiver board? The board runs off two AAs at about 3V. I'm replacing the stock motor with a slightly larger one (12V, taken from a printer) and remounting it on a chassis for a homebrew robotics project... just messing around to learn more. I imagine I could go safely to 4.5V or even 6V with the receiver but I don't want to go much higher since half the stuff is epoxied and I can't really tell what's in there.
What I'd like to be able to do is add an additional two AA batteries behind the receiver to run the receiver system at 6V but add another two 3V 123A batteries to have the motor at 12V with the ability to run with the higher current draw due to the heavier load the motor will handle on its fancy new chassis... but without pulling that current through the receiver circuit.
My first thought is to simply connect the negative of my 123As to the motor and the positive to a common ground... but I'm really not sure, and I want to be careful not to damage the circuit or the batteries. My next thought is to simply build a single power supply out of my 123As and use a current divider, but I've only read about them and never actually built one.
I've worked through some of those kiddie "electronic playground" kits and a few books, and have probably cost Google an extra few bucks in energy costs, and I'm still kind of at a loss.
|
I ran into confusion while reading about motors.
Consider a motor with these specs:
Maximum motor voltage - 6VDC
No load current - 250mA max.
Stall current - around 1A
I am considering using the Texas Instruments L293D, with these specs:
Output Current - 600 mA Per Channel
Peak Output Current - 1.2 A Per Channel
If I use the L293D to run one motor (back and forth), is this safe? What would happen if my motor requires more than 600 mA? Does this simply mean I need a different driver IC?
Also, the specs say that if I want to drive two motors then I'll need to compensate for the current. Is that current drawn from my power supply or from the motor driver?
|
I used a Turnigy 2200mAh 3S 25C LiPo battery pack with a Turnigy 2S-3S balancer & charger for about a month. Yesterday I left the battery plugged into the four ESCs of my quadrocopter, and today I found it totally discharged. When I tried to charge it, the charger reported it as faulty; after replugging it into the charger, it showed as fully charged.
How can I charge it now?
P.S. I've got a multimeter, but I do not know what and how to measure... The battery pack has two plugs: one is connected to the charger and the other to the ESCs...
|
I would like to ask: is it better to design a multicopter with an odd or an even number of propellers, and why?
|
I bought an RC car about a year ago. A few days later I integrated an arduino nano into the vehicle. The only thing the arduino does is to receive the RC signal and pass it on to the esc/servo. So, basically it just does a big amount of NOTHING :)
Right now the wiring looks like this:
[Remote] -> [rc receiver] -> [arduino] -> [servo/esc/lights]
I added lights and I did some experiments with distance sensors and I will try to integrate car control via xbee + processing. This works via serial already.
What else could be possible with a setup like that? Here are some of my ideas:
perhaps some sort of autonomous driving? The car is built for offroad and the suspension is not too bad, but it is pretty fast (40 km/h), so a crash would be fatal.
FPV (first person view) driving? I could add another servo to move a small camera.
"swarm intelligence"? I have built two of those vehicles. Both feature the arduino nano, a zigbee and LED front lights.
steering correction? I could integrate a gyro sensor to check if the car is not driving straight when it should.
telemetry to another arduino? I could build some sort of arduino-zigbee-handheld that shows me some information for both cars like motor temperature, current speed, uptime, battery voltage, sensor values etc.
Any ideas, anyone? Right now it is just driving like it normally would. I integrated an arduino into an RC toy that does an awesome amount of NOTHING. Makes me feel pretty stupid.
|
I have a matrix of M measurements and N objects. Each cell contains the cost of assigning a particular measurement to an object. I want to assign them optimally, under the constraint that each measurement can be assigned to at most one object and each object can receive at most one measurement. I also want to set some cost threshold, so that some measurements or objects may end up not assigned at all.
How can I do it?
I was recently considering the auction algorithm, which however (as I understand it) never leaves any measurement or object unassigned. If that is wrong, please correct me, or help with an alternative solution. Thanks for your time!
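For what it's worth, a minimal sketch of gated optimal assignment using SciPy's Hungarian solver; the cost values and gate below are made up. Discarding over-threshold pairs after solving is only an approximation -- an exact way to allow non-assignment is to pad the cost matrix with dummy rows/columns whose cost equals the threshold.

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[1.0, 9.0, 4.0],      # rows = measurements, columns = objects (made-up values)
                 [8.0, 2.0, 7.0],
                 [6.0, 5.0, 9.5],
                 [3.0, 8.5, 8.0]])
GATE = 5.0                              # assignments costing more than this are rejected

rows, cols = linear_sum_assignment(cost)          # optimal one-to-one assignment
pairs = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= GATE]
unassigned_measurements = set(range(cost.shape[0])) - {r for r, _ in pairs}
unassigned_objects = set(range(cost.shape[1])) - {c for _, c in pairs}
print(pairs, unassigned_measurements, unassigned_objects)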
|
Okay, this might sound like a stupid question but, is there some sort of a permission in the US I might require to fly a quadcopter or a UAV for that matter? I couldn't find much help anywhere else.
|
I'm looking for a robot that is capable of moving around and has arms that can get objects in one place and drop in another. Something akin to what we see in most sci-fi movies, though much simpler. It may run on legs, wheels or tracks; it may have claws or hands. I'm looking for open-sourced design, schematics, specifications of the parts, coding - the whole package. It may be specific cases or projects/initiatives with a growing collection of robots.
As long as it can take out the trash, it's perfect. ;D
|
Building a quadrocopter from scratch involves a lot of decision making, and I need some input on the material choice.
I have shortlisted aluminium and carbon fiber for the arms and supports of the quadrocopter.
I am a little short on cash to experiment with both of them.
Considering that I have enough money to buy either of them, and assuming that I have access to general tools like a table saw, horizontal band saw, CNC router and a water jet:
Which would be the better material to work with?
EDIT:
I will be deciding the specs around the frame, so I have some design liberty. Right now, my goal is to assemble a very durable, as-light-as-possible frame that can withstand a lot of experimentation on the electrical side.
|
I am looking to upgrade the motors for SeaPerch underwater ROVs so we can carry heavier payloads and more equipment.
My question is, should I look for motors which have a higher RPM and lower torque, or with lower RPM but higher torque to gain a substantial power increase? If the latter, what threshold of RPMs should I stay above to maintain speed?
We are currently running Jameco PN 232022 motors with ~1 1/2" props (same setup as here). They are mainly run at max power as our ESC currently consists of a fuse and a toggle switch.
|
Some time ago I saw a demo of a small 'toy tank' with a single camera mounted on it. This tank was able to drive around the floor and detect objects and then move/steer to avoid them.
The interesting part was that it used a single-camera vision system and, as far as I remember, took advantage of the floor being flat. It then used the rate at which features moved through the scene, relative to the motor commands and direction of travel, to evaluate and hence map the scene.
Can anyone send me pointers what to search for to get some more information on this, or some pointers to codebases that can do this.
The reason I ask is that this was a single-camera system from a number of years ago (5+) and therefore (from what I remember) had a relatively low compute load.
I was intending to try this out on a Raspberry Pi to build a car/tank that maps a room or set of rooms.
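As a rough sketch of the sparse-optical-flow idea (assuming OpenCV's Python bindings on the Pi; the ground-plane reasoning and the obstacle threshold here are heavily simplified):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # webcam / Pi camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track corner features between consecutive frames (Lucas-Kanade sparse flow).
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if p0 is not None:
        p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        good = st.flatten() == 1
        flow = (p1 - p0)[good].reshape(-1, 2)
        pts = p0[good].reshape(-1, 2)
        # On a flat floor with known ego-motion, ground features move at a predictable
        # rate for their image row; features moving much faster are obstacle candidates.
        speeds = np.linalg.norm(flow, axis=1)
        obstacles = pts[speeds > 2.0 * np.median(speeds)]   # crude threshold, tune on the robot
    prev_gray = gray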
|
I have an Autonomous Lawn Mower (ALM) which can mow a certain lawn area when that area is bounded by a perimeter wire. Even when that perimeter wire is removed, it has to mow the above-mentioned area accurately without slipping into a neighboring area.
Constraints and problems:
The ALM is an open loop system.
Differential GPS was tried, but it did not yield proper results.
Any iterative pattern of area coverage can be used, provided the error in each iteration does not accumulate into an unpredictable error at the end.
I do not expect a full-fledged solution, but I need a starting point to understand motion planning, particularly for unbounded areas, in order to solve this problem.
I searched the internet for material on motion planning but could not find good results. Can anyone point me to such sources, preferably books and articles available online, which can help me solve this problem?
EDIT:
Addition of information:
The above picture shows the irregular lawn area, which does not have any enclosure or perimeter wire.
1. The red mark shows the center point of the lawn.
2. The grey area is the initial scaled-down area, which resembles the shape of the larger area (I could not draw the grey area to exactly match the larger green area).
3. The grey lines are the contours which form the tracks to be followed by the lawn mower.
Idea description:
1. Using a planimeter app one time, the shape and dimensions of the lawn area (green area) can be known.
Link: https://play.google.com/store/apps/details?id=com.vistechprojects.planimeter&hl=en
2. The center of the polygon can be found using the method in the following link (see the centroid/scaling sketch at the end of this question):
http://en.wikipedia.org/wiki/Centroid#Centroid_of_polygon
3. Calculate the area of the grey shape in the above figure.
4. The grey shape is the smallest possible area that can be mowed by the ALM. It is similar in shape to the green area and is formed by scaling the green area down.
Determine the scale-down factor, a numerical value n (n < 1), where
Grey area = n * Green area
Once the grey area is known, the number of contours or tracks to be mowed by the ALM has to be determined manually.
The width of a contour is equal to the distance between the blades at either end, i.e. the width the ALM can mow in a single pass.
Green area = Grey area + area of track 1 + area of track 2 + area of track 3 + ... + area of track n
5. Once the lawn mower is switched on, it should drive to the center of the lawn (the red mark shown in the above figure).
6. Then the ALM should mow the smallest possible area, i.e. the grey area.
7. After that the ALM should switch to the contour circumscribing the grey area, and continue circumscribing track after track until all tracks are completed (the decision is made by validating against the calculated, preset "number of tracks" value stored in the ALM).
In this way the entire lawn can be mowed without the need for a perimeter wire, and the ALM will not mow the neighbor's lawn.
Challenges:
a. Enable the ALM to reach the center point of the lawn.
b. Make the ALM mow the grey area accurately.
c. Make the ALM switch from one track to the next.
d. Bypass an obstacle in a track and return to the same track.
When I mentioned this idea to a colleague, he pointed out the possible cumulative addition of error in each iteration, resulting in an unpredictable error at the end.
I intend to minimize the error and keep the boundary as accurate as possible.
In fact, this deviation has to be predictable before it can be corrected.
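To make steps 2-4 concrete, here is a small sketch. It is hedged: scaling the boundary towards the centroid matches the "n times the area" idea only for roughly convex shapes, and a real mowing track of constant width would need proper polygon offsetting (e.g. with a library such as shapely). The boundary coordinates below are made up.

import numpy as np

def polygon_centroid(pts):
    # Centroid of a simple polygon, using the formula from the Wikipedia link above.
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    A = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * A)
    cy = ((y + yn) * cross).sum() / (6.0 * A)
    return np.array([cx, cy])

def scaled_contour(pts, n):
    # Shrink the boundary towards the centroid so the enclosed area becomes n * original
    # (scaling lengths by sqrt(n) scales the area by n).
    c = polygon_centroid(pts)
    return c + np.sqrt(n) * (pts - c)

boundary = np.array([[0, 0], [10, 0], [12, 6], [5, 9], [-1, 5]], dtype=float)  # made-up lawn
tracks = [scaled_contour(boundary, n) for n in (0.8, 0.6, 0.4, 0.2)]           # outer to inner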
|
Visible worms, pests, and diseased parts of plants emit a unique odor (volatile organic compounds in different concentrations). I understand that sensors which can quantitatively detect these compounds are being developed. My idea is to build a swarm of robots which can detect VOCs and spray pesticides on three kinds of targets present on plants across a field.
Target 1: Visible worms, pests, and larvae. Perhaps these can be mechanically eliminated.
Target 2: Invisible pathogens on certain areas of a plant
Target 3: Areas where pesticides have to be sprayed to prevent disease
For these targets, pesticide has to be administered in the correct concentration
This idea can optimize the use of pesticides and treat the plant properly
Questions:
Is swarm robotics still science fiction, or has anyone implemented it?
Are there any specific scenarios where implemented swarm robotic systems are already helping?
Which implemented system, or idea in development, could help form a solution to the above problem?
Approximately how much time would be required to realize this idea?
I hope this question does not sound like science fiction; the intention is to solve a definite, practical problem.
What steps can I follow to make this idea more concrete?
|
I'm trying to use a PID to stabilize a system described from the following difference equation:
$$y_{k+1} = a y_k \sqrt{(1-y_k)}~~~ + b y_{k-1} ~+ c u_k$$
Can I use Ziegler-Nichols's rules to find PID parameters in this situation?
To be more precise: my system is an Apache HTTP Server, and in particular I'm trying to model how the CPU load changes as a function of the KeepAlive parameter. When KeepAlive grows, the CPU load should decrease.
So:
$$cpu_{k+1} = a \cdot cpu_k \sqrt{(1-cpu_k)}~~~ + b \cdot cpu_{k-1} ~+ c \cdot keepAlive_k$$
Obviously the CPU load is a scalar $\in [0,1]$, $keepAlive$ is just a time, and the $a,b,c$ parameters are known to me from experimental data via multiple regression.
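One practical way to check whether Ziegler-Nichols-style gains behave sensibly for this plant is simply to simulate the difference equation under a discrete PID. A minimal sketch follows; the a, b, c values and the gains are placeholders -- substitute the regression results and whatever gains the tuning rule produces.

import numpy as np

a, b, c = 0.5, 0.3, -0.01          # placeholders for the regressed model parameters

def plant_step(y, y_prev, u):
    # y[k+1] = a*y[k]*sqrt(1 - y[k]) + b*y[k-1] + c*u[k]
    return a * y * np.sqrt(max(0.0, 1.0 - y)) + b * y_prev + c * u

Kp, Ki, Kd, dt = 2.0, 0.5, 0.0, 1.0   # candidate gains, e.g. from a Ziegler-Nichols table
setpoint = 0.6                        # desired CPU load
integral, prev_err = 0.0, 0.0
y_prev, y = 0.5, 0.5

for k in range(50):
    err = setpoint - y
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = max(0.0, Kp * err + Ki * integral + Kd * deriv)   # KeepAlive command, clamped
    y, y_prev = plant_step(y, y_prev, u), y
    prev_err = err
    print(k, round(u, 2), round(y, 3))

Because the plant is nonlinear in y, gains that work near one operating point may not work near another, which is the usual caveat when applying Ziegler-Nichols to a nonlinear model.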
|
While looking up information on the right propellers for my quadcopter, I realized that they come in different orientations, i.e. clockwise and counterclockwise. On further research I found that all multirotors use particular combinations of these orientations. So my question is: why? Why does it matter whether a propeller turns clockwise or counterclockwise?
|
I want to find the instantaneous center of rotation of a differential drive robot.
Assuming I know that the robot will travel with a particular linear and angular velocity $(v,w)$ I can use the equations (given at A Path Following a Circular Arc To a Point at a Specified Range and Bearing) which come out to be:
$$x_c = x_0 - \left|\frac{v}{w}\right| \sin(\theta_0)$$
$$y_c = y_0 - \left|\frac{v}{w}\right| \cos(\theta_0)$$
I'm using the Webots simulator and I dumped GPS points for the robot moving in a circle (constant $(v, w) = (1, 1)$), and instead of a single $x_c$ and $y_c$ I get a different center point for every point. If I plot it in MATLAB it does not look right:
The red points in the image are the perceived centers, they just seem to trace the curve itself.
Is there some detail I am missing? I'm really really confused as to what's happening.
I'm trying to figure out the center so I can check whether an obstacle is on this circle or not and whether collision will occur.
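For comparison, a small sketch using the standard signed form of the differential-drive ICC (note the plus sign on the cosine term and the signed radius R = v/w, without the absolute value). With these signs, the centers computed at every pose of a constant-(v, w) circle collapse to a single point.

import numpy as np

def icc(x, y, theta, v, w):
    # Instantaneous centre of curvature for the unicycle/differential-drive model:
    # the centre lies a signed distance R = v/w to the robot's left, perpendicular to the heading.
    R = v / w
    return x - R * np.sin(theta), y + R * np.cos(theta)

# Poses on a unit circle about (0, 1) driven with v = w = 1: every pose maps back to (0, 1).
for theta in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    x, y = np.sin(theta), 1.0 - np.cos(theta)
    print(icc(x, y, theta, v=1.0, w=1.0))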
|
This question stems from a previous question, where I asked why prop orientation matters so much for a multirotor. On further research† I found that those reasons need not apply to a tricopter. So, again: why?
Are these reasons general for all multirotors with an odd number of rotors? Or an even number?
† This forum talks a lot about tricopters and prop orientations but nothing really answers the question.
|
Using ArduPilot software (fixed wing, ArduPlane), I know that after I boot up I need to keep the system sit still while the gyros initialise.
When I have ground station in the field it's easy to know when it's safe to launch because the telemetry message tells me. But I don't always fly with a ground station. In these situations I currently just sit and wait for a while before arming, then again before launching.
Is there some reliable rule of thumb, or information in the blinking of the arming switch or the buzzer, that I haven't worked out yet? This UAV has PX4 autopilot hardware (with both the Px4FMU and the PX4IO board), including a buzzer and an illuminated arming switch. The LEDs on the board are obscured (but I could make light channels from them if required).
(Note: I'm asking this question here to test the theory that Robotics Stack Exchange might be an appropriate forum for these sorts of questions, which has been suggested a couple of times in response to the Area51 drones proposal.)
|
I'm a robotics engineer using OpenSCAD to model robotic components (gears, pulleys, parts, etc.). But I need an application to model the physics and interaction of the components (e.g. how the robot will move if I rotate a given gear).
So, is there any software I can use for modelling interactions in Linux? Google SketchUp is good, but I can't use it in Linux.
|
I am trying to write C code for a pan-tilt unit, model PTU-D46, using Visual Studio 2010 on Windows 7, but I can't find any tutorial or reference on how to do so. All the user's manual mentions is that a C programmer's interface (model PTU-CPI) is available, but it doesn't say where to find it or how to use it. I looked for it on Google but couldn't find anything.
There is a command reference manual along with the user's manual, but it only shows the different commands to control the tilt and does not explain how to make a C program that connects to the tilt controller and sends queries to it.
Does anyone have an idea of where I should look, or whether there are any open-source programs for this? I'm not trying to make a complicated program; I just need it to connect to the tilt controller (the computer is connected via a USB cable to the host RS-232 port of the tilt controller) and make it nod to say "Yes" and "No"!
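For illustration only, a minimal serial sketch (in Python with pyserial rather than C, but the open-port / write-command / read-reply pattern is the same). The port name and the command strings are assumptions -- the exact ASCII commands and terminator must come from the PTU-D46 command reference manual.

import serial  # pyserial

ptu = serial.Serial('COM3', baudrate=9600, timeout=1)   # assumed port name and baud rate

def send(cmd):
    ptu.write((cmd + ' ').encode('ascii'))   # assumed: commands are space-terminated ASCII
    return ptu.read_until(b'\n')             # read whatever acknowledgement comes back

# "Nod yes": swing the tilt axis up and down a few times.
for _ in range(3):
    send('TP300')    # hypothetical absolute tilt-position command; check the manual
    send('TP-300')
send('TP0')
ptu.close()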
|
Does anyone have experience with the EZ-B? It is sold by ez-robot.com and comes with an SDK for Visual Studio.
It supports direct scripting at runtime over USB, Bluetooth, WiFi, IRC, and HTTPS.
My question is: if I get a regular Arduino board, will I be able to do the same?
From what I've read, an Arduino needs to hold the instructions in its own memory, but I would rather have the brain in the computer and feed signals back and forth to the microcontroller.
Also, is an Arduino alone a step down, as the website nicely puts it?
Thanks for your help in advance
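The "brain on the PC, signals to the microcontroller" pattern is commonly done on an Arduino with the Firmata protocol, so a regular board can work this way. A minimal sketch using the pyfirmata library, assuming the stock StandardFirmata sketch is flashed on the board and it enumerates as /dev/ttyUSB0 (both are assumptions for your setup):

import time
from pyfirmata import Arduino, util

board = Arduino('/dev/ttyUSB0')        # use the right port name for your system

it = util.Iterator(board)              # background thread so analog readings keep updating
it.start()

led = board.get_pin('d:13:o')          # digital pin 13 as output
servo = board.get_pin('d:9:s')         # digital pin 9 in servo mode
sensor = board.get_pin('a:0:i')        # analog pin 0 as input
sensor.enable_reporting()

while True:
    value = sensor.read()              # None until the first report arrives
    led.write(1 if value is not None and value > 0.5 else 0)
    servo.write(90)                    # all decision-making happens here on the PC
    time.sleep(0.05)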
|
I am using an LSM303 sensor to compute a heading, and I want to turn my robot to a given heading.
I have the simple code here:
int mag;
mag = compass.heading((LSM303::vector){0,-1,0}); // read the current heading of the robot
Serial.println(mag);
while (mag != angle) {
  // while it isn't the desired angle, keep turning and re-read the robot's heading
  trex.write(0xE9);
  trex.write(90);
  trex.write(90);
  mag = compass.heading((LSM303::vector){0,-1,0}); // read the heading again
}
This is inside a function called with a speed and an angle (heading); the trex writes tell the motor controller to turn on the spot, and the while loop should detect when the desired heading is reached. However, by adding a couple of Serial.println(mag); calls I have determined that once inside the while loop, mag never changes, which means the robot turns indefinitely.
I have no idea why this would happen. Perhaps someone here does?
Thanks.
|
I'd like to know if anyone has had success detecting a warm-bodied mammal (i.e. a human) using standard, off-the-shelf, inexpensive sensors.
Ideally, I'd like to use an inexpensive sensor or combination of sensors to detect a person within a room and localize that person. I would like the robot to enter a room, detect if a human(s) is/are present and then move to the detected human. The accuracy does not need to be 100%, as cost is more of a factor. I'd like the computational requirements of such a sensor to be such that it can run on an Arduino, although if it's impossible, I'd be willing to utilize something with more horespower, such as a Raspberry Pi or a BeagleBone Black. I have a few thoughts; however, none of them are ideal:
1. PIR sensor - Can detect movement within a large field of vision (i.e. usually 120 degrees or more). Might be the closest thing to a "human" detector that I'm aware of; however, it requires movement, and localizing/triangulating where a person is would be very difficult (impossible?) with such a large field of vision.
2. Ultrasound - Can detect objects with good precision. Has a much narrower field of view; however, it is unable to differentiate between a static non-living object and a human.
3. IR detectors - (i.e. Sharp range sensors) Can again detect objects with great precision and a very narrow field of view; however, they are again unable to differentiate objects.
4. Webcam + OpenCV - Possibly use face detection to detect human(s) in a room. This may be the best option; however, OpenCV is computationally expensive and would require much more than an Arduino to run. Even on a Raspberry Pi, it can be slow.
5. Kinect - Using the feature detection capabilities of the Kinect, it would be relatively easy to identify humans in an area; however, the Kinect is too expensive and I would not consider it a "cheap" solution.
Perhaps someone is aware of an inexpensive "heat detector" tuned to body heat, and/or has had success with some combination of (#1-4) above and would like to share their results?
|
I am working on a robotics application under ROS Groovy Galapagos.
I would like to make a tutorial about how to create a template app with catkin_create_qt_pkg.
I'm unable to call the script catkin_create_qt_pkg from my catkin workspace.
I found it at /opt/ros/groovy/qt_ros/qt_create/script
But even when I try to execute it as a sudoer I get an error:
ImportError: No module named qt_create
I'm unable to determine what I have to do to make it work.
Why?
|
I've worked with the Wiimote accelerometer, but I think now I want to move past that, mostly because I want a wider range of available gestures and I think that using only one accelerometer is too limiting for what I want to do. I'm looking for something compatible with Arduino or the RPi. Does anyone have recommendations on how I should do this?
|
I want to embed environmental data collected from sensors into a live video stream from a camera. Has anyone done this or know how I would go about doing something like this? Is there a library available for the arduino or RPi?
|
So I want to program something that will simply push a button, but controllable over ethernet. I'm new to robotics so I don't know where to start. What's the best way to control an actuator over a network connection?
|
In general, is a Raspberry Pi processor powerful enough for a mobile chatbot? I want to make a small mobile robot that is like a chatbot. Is a Raspberry Pi processor powerful enough for any type of AI robotics?
As far as a mobile robot, I want to make a wheeled robot about one foot in every dimension. The chatbot abilities will be from ProgramPY-SH, a new chatbot program that uses Xaiml databases. The chatbot works by looking through a database for a match of the user's input (vocal or text-based). It then acts according to the instructions given by the XML-like database.
|
I am currently working on a legged hexapod which moves around using a tripod gait. I have two sets of code to control the tripod.
Set 1: Time based control
In this code set, I set the tripod motor set to move at its rated RPM for a required amount of time before shifting to the other tripod motor set.
PID control would be based on counting the number of transitions using an optical speed encoder, calculating the error as the difference between the actual and required speed, and then correcting it with fixed Kd and Ki values.
Set 2: Transitions based control
In this code set I count the number of transitions required to complete one rotation of the leg (tripod motor set) before starting the other leg (tripod motor set).
PID control would be time based; the error would be calculated as the difference in time taken by the individual motors of the motor set.
Query:
Set 2 shows promising results even without PID control, but the first set does not. Why is that? The motors are basically set to move one rotation before the other set moves.
Would the speed differences between the motors cause it to destabilize?
How often do I update the PID loop?
My robot seems to drag a little bit. How do I solve this?
|
I'm involved in research on psychologically plausible models of reinforcement learning, and as such I thought it would be nice to see how well some of the models out there perform in the real world (i.e. sensory-motor learning on a mobile robot). This has already been done in some robotics labs, such as Sutton's implementation of the Horde architecture on the "Critterbot". However, these implementations involve robots custom-built by robotics experts to deal with the trials and tribulations of learning on a long time scale:
"The robot has been designed to withstand the rigors of reinforcement learning experiments; it can drive into walls for hours without damage or burning out its motors, it can dock autonomously with its charging station, and it can run continuously for twelve hours without recharging."
Unfortunately I'm no expert when it comes to designing robots, and I don't have access to a high-quality machine shop even if I were; I'm stuck with whatever I can buy off the shelf or assemble by hand. Are these constraints common enough for amateur robotics suppliers to cater to, or should I expect to have to start from scratch?
|
How would you go about building a robot that can use a computer? Type on the keyboard, move & click mouse? I am talking about physically manipulating the hardware inputs, and the robot would be able to see the screen. Not connected to anything. It's purely autonomous. My hope is that this will replace human QA testers.
|
A little background of my aim
I am in the process of building a mobile autonomous robot which must navigate around an unknown area, avoid obstacles, and receive speech input to do various tasks. It also must recognize faces, objects, etc. I am using a Kinect sensor and wheel odometry data as its sensors. I chose C# as my primary language as the official drivers and SDK are readily available.
My robot currently uses an Arduino as a module for communication and an Intel i7 x64 processor on a laptop as its CPU.
This is the overview of the robot and its electronics:
The Problem
I implemented a simple SLAM algorithm which gets the robot position from the encoders and then adds whatever it sees with the Kinect (as a 2D slice of the 3D point cloud) to the map.
This is what the maps of my room currently look like:
This is a rough representation of my actual room:
As you can see, they are very different and so really bad maps.
Is this expected from using just dead reckoning?
I am aware of particle filters that can refine the estimate and am ready to implement one, but what are the ways in which I can improve this result?
Update
I forgot to mention my current approach (which I had meant to include earlier). My program roughly does this (I am using a hashtable to store the dynamic map):
Grab point cloud from Kinect
Wait for incoming serial odometry data
Synchronize using a time-stamp based method
Estimate robot pose (x, y, theta) using the equations at Wikipedia and the encoder data (see the dead-reckoning sketch after this list)
Obtain a "slice" of the point cloud
My slice is basically an array of the X and Z parameters
Then plot these points based on the robot pose and the X and Z params
Repeat
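For reference, a minimal dead-reckoning sketch for the pose-estimation step above; the encoder and geometry constants are placeholders for the real robot's values.

import math

TICKS_PER_REV = 360        # placeholder encoder resolution
WHEEL_RADIUS = 0.05        # metres, placeholder
WHEEL_BASE = 0.30          # metres between wheel contact points, placeholder

x, y, theta = 0.0, 0.0, 0.0

def update_pose(d_ticks_left, d_ticks_right):
    # Standard differential-drive odometry update (the equations the step above refers to).
    global x, y, theta
    dl = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    dr = 2.0 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    dc = (dl + dr) / 2.0                   # distance travelled by the robot centre
    dth = (dr - dl) / WHEEL_BASE           # heading change
    x += dc * math.cos(theta + dth / 2.0)  # midpoint integration
    y += dc * math.sin(theta + dth / 2.0)
    theta = (theta + dth) % (2.0 * math.pi)
    return x, y, theta

Any small error in these constants or in wheel slip accumulates without bound, which is why maps built from dead reckoning alone drift like the one above and why a correction step (particle filter, scan matching, or landmark updates) is needed.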
|
I need to control a quadrotor from a PC, without using a joystick.
I have a mini quad, the V929 Beetle 4-Axis, and also an NRF24L01+ wireless transceiver module chip (2.4 GHz transceiver).
Is it possible to write an Arduino program to make them talk to each other?
I did some research and found that the V929 quad uses the FlySky protocol, which requires an A7105 2.4 GHz transceiver chip rather than the NRF24L01+ I mentioned above.
Are there any better ways of controlling the quad from a PC or an Arduino board?
|