I'm building a walking robot that will need to know when it moves forward. I'm using on-board intelligence, and I plan on using accelerometers, gyros, and magnetometers (if needed) to detect whether the robot moves forward. The problem is, I don't know how to program an inertial navigation system (INS) or an IMU. What software algorithms are needed?
To clarify my problem: I need to know how to program the microcontroller to read the sensors and be able to tell if the robot has displaced itself forward since a previous measurement.
Also, if I used this sensor board (or a similar one), could I use it to determine the displacement?
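For context, my current understanding is that forward displacement would come from integrating the (gravity-compensated) forward acceleration twice, something like the Python sketch below (the microcontroller version would be the same loop), and that the hard part is the drift this accumulates:
def displacement_from_accel(samples, dt):
    # Double-integrate forward acceleration (m/s^2) into displacement (m).
    # Bias and noise make the estimate drift quadratically with time, which
    # is why practical INS code fuses gyro/magnetometer data and applies
    # corrections such as zero-velocity updates when a foot is planted.
    velocity, position = 0.0, 0.0
    for a in samples:
        velocity += a * dt         # first integration -> m/s
        position += velocity * dt  # second integration -> m
    return position

# e.g. 1 s of a constant 0.5 m/s^2 at 100 Hz gives ~0.25 m
print(displacement_from_accel([0.5] * 100, 0.01))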
|
I'm planning to write an inverse-kinematics-controlled 6-DOF virtual robot for Android. I did some research on the packages available and couldn't choose the right one to satisfy my needs on this project. I've seen work done with Eigen in C++, and used it; it was just fine. But since I'm not so experienced in Java, I wanted to ask before I start if someone knows some appropriate packages for these operations.
Here is what i found so far:
JAMA,
Vecmath,
Jmathtools,
EJML,
JAMPACK
I ask this because I really don't want to get stuck in the middle of my project. Thanks in advance.
|
What are the pros/cons of the different visual odometry options?
Stereo Camera
Optical Flow
SLAM
other?
Criteria:
how well it performs vs other odometry options/sensors (lidar, radar)
sensor fidelity
computation
accuracy
precision
drift
native resilience and repeatability under sensor noise or varying vehicle speed
ease of integrating with IMU/GPS
etc
In general, of course, because there are a lot of different ways the trade-offs go when we get into specifics about applications and hardware. I'm asking out of curiosity, not for designing anything in particular.
|
The answers I received to the question on training a line-following robot using reinforcement learning techniques got me thinking about how to train a robot. I believe there are essentially two ways:
Train the physical robot.
Model the robot and simulate the training.
Did I miss something?
Approach 2 is definitely the better approach. However, it requires a priori knowledge of the motion (response) that a certain PWM signal (stimulus) would cause when the robot is in a given state. The motion caused by a PWM signal may depend on (1) the current battery voltage, (2) the mass of the robot and (3) the current velocity (did I miss something?).
How do I model such a robot? And how do I model it quickly? If I change the battery, or add a few boards and other peripherals and change the mass of the robot, I would have to remodel and retrain the robot. Can I do this by providing some random stimulus PWMs and measuring the response?
Added: my related question on dsp.SE.
Update: A suggested edit to the title by Ian is worth mentioning: "How do I model train a robot so that if its dynamics change, it does not need complete re-training?" I think this is a good question too, but different from the one I am asking here. I am okay with re-training for now.
|
Digital compasses (magnetometers) require a hard/soft iron calibration in order to be accurate. This compensates for the magnetic disturbances caused by nearby metal objects -- the robot's chassis.
(image from http://diydrones.com)
However, digital compasses are also susceptible to the magnetic fields caused by the relatively high currents drawn by motors.
In order to get an accurate compass reading, what is the best way to measure (and compensate for) the interference caused by changing motor current levels?
|
I am trying to use a stereo camera for scene reconstruction, but I can usually only obtain sparse point clouds (i.e. over half the image does not have any proper depth information).
I realize that stereo processing algorithms rely on the presence of texture in the images and have a few parameters that can be tweaked to obtain better results, such as the disparity range or correlation window size. As much as I tune these parameters, though, I am never able to get results that are even remotely close to what can be obtained using an active sensor such as the Kinect.
The reason why I want that is because very often point clouds corresponding to adjacent regions don't have enough overlap for me to obtain a match, so reconstruction is severely impaired.
My question to the Computer Vision experts out there is the following: what can I do to obtain denser point clouds in general (without arbitrarily modifying my office environment)?
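For reference, the kind of tuning I have been doing looks roughly like the sketch below (Python/OpenCV semi-global matching with placeholder image names; SGBM's smoothness penalties are one common way to propagate disparity into low-texture regions):
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder inputs
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,       # disparity range, must be divisible by 16
    blockSize=block,
    P1=8 * block * block,     # penalty for small disparity jumps
    P2=32 * block * block,    # penalty for large jumps (P2 > P1)
    uniquenessRatio=10,
    speckleWindowSize=100,    # suppress small disparity blobs
    speckleRange=2,
)
disparity = sgbm.compute(left, right).astype(float) / 16.0  # fixed-point -> pixels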
|
I have a handful of 31.2 oz-in stepper motors (Mouser.com - Applied Motion: HT17-268D), and I was curious if they would be big enough to run a 3D printing/cutting/etching type of machine (think RepRap). I had in mind to attach them via a simple gear to a screw-type drive to run the tool head back and forth.
Maximum bed size would probably be ~1.5 ft³.
Heaviest tool head would be something about half the weight of a Dremel tool.
Hardest substances I would use it on would probably be hardwoods (with a high-speed cutter) and copper (for PCB etching).
How do I figure the amount of torque needed to drive the head, and would the motors that I already have be big enough to do the job?
|
In our lab we use LiPo batteries to power our quadrotors. Lately we have been experiencing stability issues when using certain batteries. The batteries seem to charge and balance normally and our battery monitor indicates they are fine even when putting them under load. However when we attempt to fly the quadrotor with one of these batteries, manually or autonomously, it has a severe tendency to pitch and/or roll. My guess is that the battery is not supplying sufficient power to all the motors which brings me to my question. Is this behavior indicative of a LiPo going bad? If so what is the best way to test a battery to confirm my suspicions?
|
Whenever I read a text about control (e.g. PID control) it often mentions 'poles' and 'zeros'. What do they mean by that? What physical state does a pole or a zero describe?
|
We are trying to power this motor with this motor driver, using an 11.1 V, 2.2 Ah lithium-ion polymer battery.
(We're in over our heads with this and really need the help)
We checked with the company (E-flite) and the motor is definitely DC -- we're a bit confused as to the purpose of three wires, and how we should connect them to the motor.
Any help would be appreciated.
|
I would like to have a better understanding of work in the field of "Navigation Among Movable Obstacles". I started off with Michael Stilman's thesis under James Kuffner, but that has not yet sated my appetite.
I am currently trying to simulate a scenario where debris (Tables and Table parts) from a disaster scenario block pathways. The debris forms part of a movable obstacle. The robot which will be used is a bipedal humanoid.
The thesis describes an approach to define the search space of possible actions leading from the start point to the goal. However, it assumes a mobile robot which works via gliding.
I think the state space definitions would change for a bipedal robot, which is why I wonder what other work is being done in this field. Perhaps the work of other research groups could give me clues as to how to design, and perhaps reduce, the search space for a bipedal humanoid robot.
An implementation of Navigation among Movable Obstacles would also aid me in understanding how to reduce the search space of possible actions.
So does anyone know of a working implementation of Navigation among movable obstacles?
Any supporting information about other professors or research groups working on similar problems would also be very useful.
I hope this edit is sufficient for the problem description.
|
I want to learn robotics and build my first robot. I am looking for a well-supported kit that is simple enough and can walk me through the initial stages of my intellectual pursuit of robotics. I want to be able to do the basic things first and build a solid foundation in robotics. Then I want to be able to use that solid foundation to gain confidence in my ability to build new and interesting robotic contraptions. In other words, I want to be able to follow the rules of the game to gain a solid foundation, and then, once I'm comfortable with what I know, I want to break free of the rules and start making my own robots.
I would like help with two things:
I would like to begin my robotics learning with a good kit that can walk me through my initial stages. I expect that this initial stage might take quite a while. So, any recommendations for how I can start and/or what kit I can buy, to get my feet wet, would be helpful.
I would like suggestions for "other" actions I can take, that will set me on a path to gain confidence in my knowledge of robotics.
A little bit about myself. I have a BS and MS in IT. So I am not new to programming. I like to code in golang and haskell. I do not know if it is possible, but it would be awesome if I can write the software aspect of all my robotic projects in haskell.
Thanks
|
RS232 is not as popular as it used to be and has mainly been replaced by USB [wikipedia]. Problems such as those mentioned in this question don't help its reputation either.
In a new system design, therefore, one could think of using USB instead of a serial port for communication. However, it still seems like RS232 is the serial communication protocol/port of choice.
Why is that? I understand changing old machinery that work with RS232 is costly, but what prevents new system designers from using USB instead of RS232?
|
As an industrial roboticist I spent most of my time working with robots and machines which used brushless DC motors or linear motors, so I have lots of experience tuning PID parameters for those motors.
Now I'm moving to doing hobby robotics using stepper motors (I'm building my first RepRap), I wonder what I need to do differently.
Obviously, without encoder feedback I need to be much more conservative in requests to the motor, making sure that I always keep within the envelope of what is possible, but how do I find out whether my tuning is optimal, sub-optimal or (worst case) marginally unstable?
Obviously for a given load (in my case the extruder head) I need to generate step pulse trains which cause a demanded acceleration and speed that the motor can cope with, without missing steps.
My first thought is to do some test sequences, for instance:
Home the motor precisely on its home sensor.
Move $C$ steps away from home slowly.
Move $M$ steps away from home with a conservative move profile.
Move $N$ steps with the test acceleration/speed profile.
Move $N$ steps back to the start of the test move with a conservative move profile.
Move $M$ steps back to home with a conservative move profile.
Move $C$ steps back to the home sensor slowly, verifying that the sensor is triggered at the correct position.
Repeat for a variety of $N$, $M$, acceleration/speed & load profiles.
This should reliably detect missed steps in the test profile move, but it does seem like an awfully large space to test through, so I wonder what techniques have been developed to optimise stepper motor control parameters.
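In pseudocode, the harness I have in mind looks like this (a Python-style sketch; home(), move() and home_sensor_triggered() are hypothetical driver calls, not a real API):
def missed_steps(C, M, N, test_profile, slow, conservative):
    home(slow)                    # 1. seek the home sensor precisely
    move(+C, slow)                # 2. back away from the sensor slowly
    move(+M, conservative)        # 3. conservative move out
    move(+N, test_profile)        # 4. the acceleration/speed profile under test
    move(-N, conservative)        # 5. return from the test move
    move(-M, conservative)        # 6. conservative move back
    move(-C, slow)                # 7. creep back onto the sensor
    return not home_sensor_triggered()  # missed steps shift the home position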
|
I understand the basic principle of a particle filter and tried to implement one. However, I got hung up on the resampling part.
Theoretically speaking, it is quite simple: from the old (and weighted) set of particles, draw a new set of particles with replacement. While doing so, favor those particles that have high weights. Particles with high weights get drawn more often, and particles with low weights less often; perhaps only once, or not at all. After resampling, all particles get assigned the same weight.
My first idea on how to implement this was essentially this:
Normalize the weights
Multiply each weight by the total number of particles
Round those scaled weights to the nearest integer (e.g. with int() in Python)
Now I should know how often to draw each particle, but due to the round-off errors, I end up with fewer particles than before the resampling step.
The Question: How do I "fill up" the missing particles in order to get to the same number of particles as before the resampling step? Or, in case I am completely off track here, how do I resample correctly?
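One scheme I have come across that sidesteps the rounding problem entirely is low-variance (systematic) resampling, which by construction draws exactly N particles using a single random offset; a sketch in Python:
import random

def systematic_resample(particles, weights):
    # Draw len(particles) samples with probability proportional to weight,
    # by sweeping N evenly spaced pointers over the cumulative weights.
    n = len(particles)
    step = sum(weights) / n
    u = random.uniform(0.0, step)   # one random number for the whole sweep
    new_particles = []
    cumulative = weights[0]
    i = 0
    for _ in range(n):
        while cumulative < u:
            i += 1
            cumulative += weights[i]
        new_particles.append(particles[i])
        u += step
    return new_particles            # afterwards, set every weight to 1/n

Is this the standard way to do it, or is there a better approach?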
|
What are some good strategies to follow while designing power supply for electrical systems on mobile robots?
Such robots typically comprise systems with
microprocessor, microcontroller, DSP, etc. units and boards, along with immediate peripherals
Motor control
Analog sensors (proximity, audio, voltage, etc.)
Digital Sensors (Vision, IMU, and other exotica)
Radio comm circuits (Wifi, Bluetooth, Zigbee, etc)
Other things more specific to the purpose of the robot being designed.
Are there unified approaches/architectural rules for designing power systems which can deliver clean power to all these various units, which may be distributed across boards, without issues of interference, common grounds, etc.? Furthermore, how about also including aspects of redundancy, failure management, and other such 'power management/monitoring' features?
Well-explained examples of some such existing power systems on robots would make for excellent answers.
|
Is it possible to use the Matlab's system function to call ROS commands?
For example, using system('rostopic pub /cmd_vel geometry_msgs/Twist {...}')
or system('rospack find ipc_bridge').
I'm trying to send some commands to ROS without using something like IPC-Bridge.
PS: I know, however, that I need to use IPC-Bridge to subscribe to topics.
|
I'm interested in building a robot from my imagination, and I was looking to purchase a robotics kit.
I find the Lego Mindstorms NXT 2.0 really interesting for many reasons: you can plug in whatever brick you want, and you can develop in the language you want.
I am a developer, and my use of this kind of robot would be mostly interaction (not moving, so the servo motors are useless to me, at least for now).
But regarding the specs of the NXT main component, I feel it's a bit low (processor, RAM & ROM).
That made me wonder if any of you know something similar (where I can plug whatever I want into it and, most importantly, program the reactions), but with more powerful hardware?
Price will also be a limitation: I like the NXT also because I can build what I want under 300 USD. I don't want to spend 10k USD on my first kit, but I would appreciate buying a better piece of robotics if the price isn't too distant from the NXT's.
Do you have some alternatives to check out?
Thanks for your help! :)
|
I was wondering whether something like this is possible: a block of ice (say) needs to be transferred piece by piece from a source to a destination with the help of 5 robots standing in a straight line between the source and the destination. The first robot picks up a piece of the block from the source and checks if the next robot in line is busy. If it is, it waits for it to complete its task and then proceeds; otherwise, it transfers the piece and goes back to collect another piece. Please help me with implementing this if it is possible, as I am thinking of making it a project topic.
To clear up the confusion, here's a smaller prototype of the project I'm thinking of:
I have two cars, one wired, the other wireless. The wired car is the master here, and the wireless one the slave. Through a remote, I send a command to the wired car to in turn command the wireless car to move forward. The wired car will then check whether the wireless slave is already executing some previously given command, and send the command accordingly.
Conversely, the master may send the command as soon as it receives it; it's then on the slave to complete the task it's doing and execute the command it just received.
|
I don't understand the integral part of the PID controller. Let's assume this pseudocode from Wikipedia:
previous_error = 0
integral = 0
start:
    error = setpoint - measured_value
    integral = integral + error*dt
    derivative = (error - previous_error)/dt
    output = Kp*error + Ki*integral + Kd*derivative
    previous_error = error
    wait(dt)
    goto start
The integral is set to zero in the beginning. Then, in the loop, it integrates the error over time. When I make a (positive) change in the setpoint, the error becomes positive and the integral "eats up" the values over time (from the beginning). But what I don't understand is this: when the error stabilizes back to zero, the integral part still has some value (the errors integrated over time) and still contributes to the output value of the controller, but it shouldn't, because if the error is zero, the output of the PID should be zero as well, right?
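Here is a minimal numeric sketch of what I mean, on a toy first-order plant that needs a nonzero output just to hold its position (Python, example gains assumed):
Kp, Ki, dt = 2.0, 0.5, 0.01
setpoint, y, integral = 1.0, 0.0, 0.0
for _ in range(5000):                 # simulate 50 s
    error = setpoint - y
    integral += error * dt
    output = Kp * error + Ki * integral
    y += dt * (output - y)            # toy plant: holding y needs output = y
print(round(y, 3), round(integral, 3))  # y -> 1.0, integral settles near 2.0

The error goes to zero, yet the integral term settles at a nonzero value and keeps contributing to the output.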
Can somebody explain that to me, please?
|
Robots are somewhat videogenic, and the old saying "show me, don't tell me" is especially applicable.
But of course, a video is not a question, so it doesn't fit the Stack Exchange format. Maybe video links would be more suitable in a CodeProject post. It just seems like this board hits the right cross-section of people whose projects I would be interested in seeing.
|
I am making a 2 wheel drive robot.
Suppose I know that my robot is going to weigh x kg when finished, and I know the wheel diameter y (geared motors will be connected directly to the wheels). I can choose from several geared motors, and I know the peak torque of each motor and its idling speed.
How can I calculate the load that a specific motor can take? I.e., will a motor with a given torque be able to move my robot without being too overloaded? What rpm will the motor run at under load?
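My back-of-envelope attempt so far looks like this (Python sketch with example numbers assumed; the force at the wheel rim is torque divided by wheel radius, and a brushed DC motor's speed droops roughly linearly from no-load speed to zero at stall):
import math

m = 3.0      # robot mass x, kg (example)
r = 0.05     # wheel radius, m (diameter y = 0.1 m)
a = 0.5      # desired acceleration, m/s^2
Crr = 0.02   # rolling-resistance coefficient, surface dependent
g = 9.81

F = m * a + Crr * m * g            # total drive force needed, N
torque_per_motor = F * r / 2       # two driven wheels
print(torque_per_motor, "N*m per motor; keep well below stall, ~2x margin")

no_load_rpm, stall_torque = 200.0, 0.8   # example motor data
rpm_loaded = no_load_rpm * (1 - torque_per_motor / stall_torque)
print(rpm_loaded, "rpm ->", rpm_loaded / 60 * 2 * math.pi * r, "m/s")

Is this linear torque-speed droop a reasonable model for sizing, or am I missing dominant terms?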
|
I recently got an Arduino WiFi shield known as "juniper" (I believe it was by cutedigi). I've tried to find code examples, but the code I found was uncommented and explained very little. I could really use a tutorial or some sample code with a good explanation. Can anyone help me find a place to start? I found a piece of code here: http://arduino.cc/forum/index.php?action=printpage;topic=103582.0
and I just want to connect to a network, maybe send some get requests, or open a socket.
EDIT:
After poking around for a while, I found documentation, but I still can't get it to work.
My code:
http://pastie.org/5455603
I can't seem to get any input at all from the wifi shield.
|
For someone interested in robotics who does not know the ABCs of robotics or mechanical/electronic engineering, what's a good roadmap for becoming an amateur roboticist? I'm studying theoretical physics, so I have no problems with the physics/math. If the question is too broad and doesn't meet the criteria for posting on this site, please inform me of any helpful advice/study material etc. before the question gets closed.
Thanks in advance.
|
I'm using an EKF for SLAM and I'm having some problem with the update step. I'm getting a warning that K is singular, rcond evaluates to near eps or NaN. I think I've traced the problem to the inversion of Z. Is there a way to calculate the Kalman Gain without inverting the last term?
I'm not 100% positive this is the cause of the problem, so I've also put my entire code here. The main entry point is slam2d.
function [x, P] = expectation(x, P, lmk_idx, observation)
    % expectation
    r_idx = [1;2;3];
    rl = [r_idx; lmk_idx];
    [e, E_r, E_l] = project(x(r), x(lmk_idx));
    E_rl = [E_r E_l];
    E = E_rl * P(rl,rl) * E_rl';

    % innovation
    z = observation - e;
    Z = E;

    % Kalman gain
    K = P(:, rl) * E_rl' * Z^-1;

    % update
    x = x + K * z;
    P = P - K * Z * K';
end

function [y, Y_r, Y_p] = project(r, p)
    [p_r, PR_r, PR_p] = toFrame2D(r, p);
    [y, Y_pr] = scan(p_r);
    Y_r = Y_pr * PR_r;
    Y_p = Y_pr * PR_p;
end

function [p_r, PR_r, PR_p] = toFrame2D(r, p)
    t = r(1:2);
    a = r(3);
    R = [cos(a) -sin(a) ; sin(a) cos(a)];
    p_r = R' * (p - t);
    px = p(1);
    py = p(2);
    x = t(1);
    y = t(2);
    PR_r = [...
        [ -cos(a), -sin(a),  cos(a)*(py - y) - sin(a)*(px - x)]
        [  sin(a), -cos(a), -cos(a)*(px - x) - sin(a)*(py - y)]];
    PR_p = R';
end

function [y, Y_x] = scan(x)
    px = x(1);
    py = x(2);
    d = sqrt(px^2 + py^2);
    a = atan2(py, px);
    y = [d;a];
    Y_x = [...
        [  px/(px^2 + py^2)^(1/2),     py/(px^2 + py^2)^(1/2)]
        [ -py/(px^2*(py^2/px^2 + 1)),  1/(px*(py^2/px^2 + 1))]];
end
Edits:
project(x(r), x(lmk)) should have been project(x(r), x(lmk_idx)) and is now corrected above.
K goes singular after a little while, but not immediately. I think it's around 20 seconds or so. I'll try the changes @josh suggested when I get home tonight and post the results.
Update 1:
My simulation first observes 2 landmarks, so K is $7 \times 2$. (P(rl,rl) * E_rl') * inv(Z) results in a $5 \times 2$ matrix, so it can't be added to x in the next line.
K becomes singular after 4.82 seconds, with measurements at 50Hz (241 steps). Following the advice here, I tried K = (P(:, rl) * E_rl')/Z which results in 250 steps before a warning about K being singular is produced.
This tells me the problem isn't with matrix inversion, but it's somewhere else that's causing the problem.
Update 2
My main loop is (with a robot object to store x,P and landmark pointers):
for t = 0:sample_time:max_time
    P = robot.P;
    x = robot.x;
    lmks = robot.lmks;
    mapspace = robot.mapspace;
    u = robot.control(t);
    m = robot.measure(t);

    % Added to show eigenvalues at each step
    [val, vec] = eig(P);
    disp('***')
    disp(val)

    %%% Motion/Prediction
    [x, P] = predict(x, P, u, dt);

    %%% Correction
    lids = intersect(m(1,:), lmks(1,:)); % find all observed landmarks
    lids_new = setdiff(m(1,:), lmks(1,:));
    for lid = lids
        % expectation
        idx = find(lmks(1,:) == lid, 1);
        lmk = lmks(2:3, idx);
        mid = m(1,:) == lid;
        yi = m(2:3, mid);
        [x, P] = expectation(x, P, lmk, yi);
    end % end correction

    %%% New Landmarks
    for id = 1:length(lids_new)
        % if id ~= 0
        lid = lids_new(id);
        lmk = find(lmks(1,:) == false, 1);
        s = find(mapspace, 2);
        if ~isempty(s)
            mapspace(s) = 0;
            lmks(:,lmk) = [lid; s'];
            yi = m(2:3, m(1,:) == lid);
            [x(s), L_r, L_y] = backProject(x(r), yi);
            P(s,:) = L_r * P(r,:);
            P(:,s) = [P(s,:)'; eye(2)];
            P(s,s) = L_r * P(r,r) * L_r';
        end
    end % end new landmarks

    %%% Save State
    robot.save_state(x, P, mapspace, lmks)
end
end
At the end of this loop, I save x and P back to the robot, so I believe I'm propagating the covariance through each iteration.
More edits
The (hopefully) correct eigenvalues are now here. There are a number of eigenvalues that are negative. Although their magnitude is never very large ($10^{-2}$ at most), they appear on the iteration immediately after the first landmark is observed and added to the map (in the "new landmarks" section of the main loop).
|
A lot of awesome optics projects like hacking cameras and projectors become possible with CAD lens modelling software1, if we can also easily prototype the lenses we design.
What are some materials and additive or subtractive 3D fabrication strategies that can make a clear lens with strong refraction and the ability to be polished?
1 Here is a helpful list of 37 different lens design & simulation programs.
|
When computing the Jacobian matrix for solving an Inverse Kinematic analytically, I read from many places that I could use this formula to create each of the columns of a joint in the Jacobian matrix:
$$\mathbf{J}_{i}=\frac{\partial \mathbf{e}}{\partial \phi_{i}}=\left[\begin{array}{c}{\left[\mathbf{a}_{i}^{\prime} \times\left(\mathbf{e}_{p o s}-\mathbf{r}_{i}^{\prime}\right)\right]^{T}} \\ {\left[\mathbf{a}_{i}^{\prime}\right]^{T}}\end{array}\right]$$
Such that $a'$ is the rotation axis in world space, $r'$ is the pivot point in world space, and $e_{pos}$ is the position of the end effector in world space.
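In code, I understand one such column for a single revolute DOF to be built like this (a numpy sketch; the function name and example values are mine):
import numpy as np

def revolute_column(axis_world, pivot_world, e_pos):
    a = np.asarray(axis_world, dtype=float)
    a /= np.linalg.norm(a)                     # unit rotation axis a'
    linear = np.cross(a, e_pos - pivot_world)  # a' x (e_pos - r')
    return np.concatenate([linear, a])         # top: linear part, bottom: angular

# e.g. theta_2 in the diagram below: axis (0,0,1) through joint P_1
col = revolute_column([0, 0, 1], np.array([1.0, 0.0, 0.0]),
                      np.array([3.0, 1.0, 0.0]))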
However, I don't understand how this can work when the joints have more than one DOF. Take the following as an example:
The $\theta$ are the rotational DOF, the $e$ is the end effector, the $g$ is the goal of the end effector, the $P_1$, $P_2$ and $P_3$ are the joints.
First, if I were to compute the Jacobian matrix based on the formula above for the diagram, I will get something like this:
$$J=\begin{bmatrix}
((0,0,1)\times \vec { e } )_{ x } & ((0,0,1)\times (\vec { e } -\vec { P_{ 1 } } ))_{ x } & ((0,0,1)\times (\vec { e } -\vec { P_{ 2 } } ))_{ x } \\ ((0,0,1)\times \vec { e } )_{ y } & ((0,0,1)\times (\vec { e } -\vec { P_{ 1 } } ))_{ y } & ((0,0,1)\times (\vec { e } -\vec { P_{ 2 } } ))_{ y } \\ ((0,0,1)\times \vec { e } )_{ z } & ((0,0,1)\times (\vec { e } -\vec { P_{ 1 } } ))_{ z } & ((0,0,1)\times (\vec { e } -\vec { P_{ 2 } } ))_{ z } \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 1 & 1
\end{bmatrix} $$
This assumes that all the rotation axes are $(0,0,1)$ and all of the joints have only one rotational DOF. So I believe each column is for one DOF, in this case the $\theta_\#$.
Now, here's the problem: What if all the joints have full 6 DOFs? Say now, for every joint, I have rotational DOFs in all axes, $\theta_x$, $\theta_y$ and $\theta_z$, and also translational DOFs in all axes, $t_x$, $t_y$ and $t_z$.
To make my question clearer, suppose if I were to "forcefully" apply the formula above to all the DOFs of all the joints, then I probably will get a Jacobian matrix like this:
(click for full size)
But this is incredibly weird, because the six DOF columns for every joint repeat the same thing.
How can I use the same formula to build the Jacobian matrix with all the DOFs? What would the Jacobian matrix look like in this case?
|
I am trying to find the right ESC for the following two motors:
http://www.e-fliterc.com/Products/Default.aspx?ProdID=EFLM30180MDFA#quickFeatures
http://www.e-fliterc.com/Products/Default.aspx?ProdID=EFLM3032DFA
Can't figure out which of the ESCs listed on the site would be best. Are there alternative (cheaper or better?) options?
|
OK, not really robotics, but has anyone been able to upload to a Rainbowduino v3.0 using the Arduino IDE? I can't seem to figure it out, and there is virtually no documentation online. I followed this blog entry, but got no connection to the board.
If anyone can give me some suggestions, I would appreciate it!
|
What's needed to use an IMU such as the ArduIMU+ V3 in an INS? Is any other hardware needed?
|
I'm a high-school student studying electronics, and for an assessment task on the history of electronics I have decided to focus on the history of robotics. I want to begin with the earliest possible concept of a robot and progress through major developments in robotics up to the current day. Where should I begin my research?
|
What can an Arduino board such as the Uno really do? Of course simple things like controlling a couple of servos are very easy for it. However, I don't think an Uno board would be able to perform real-time 3D SLAM from point cloud data gathered from a Kinect sensor on a mobile robot, right? If the robot had any speed at all, the Arduino wouldn't be able to keep up, correct? Could it do 2D SLAM while moving and be able to keep up? What about taking 1/10 of the points from the Kinect sensor and processing only those?
Basically, what are some examples of the resource limitations of such an Arduino board?
|
I'm building a small robot using some cheap Vex Robotics tank treads. However, my choice of tank treads is almost purely based on the fact that they seem like more fun than wheels. I don't actually know if they really have much of an advantage or disadvantage compared to wheels.
What are the pros and cons of both wheels and continuous tracks?
|
I've noticed that almost all research being done with helicopter robots is done using quadcopters (four propellers). Why is there so little work done using tricopters in comparison? Or a different number of propellers? What about four propellers has made quadcopters the most popular choice?
|
Let's say I drop a robot into a featureless environment, and any magnetic-field-based sensors (magnetometer/compass) are not allowed.
What methods are there of determining where north is?
Tracking the sun/stars is an option but not reliable enough when the weather is considered.
Can you pick up the rotation of the earth using gyros?
Are there any more clever solutions?
|
I'm trying to find where additional battery capacity becomes worthless in relation to the added weight in terms of a quadcopter. Currently, with a 5500 mAh, 11.1 V battery, I get between 12:00 and 12:30 of flight time. My question, then, is this: within the quad's lifting capability of course, is there any way to find out where the added weight of a larger battery (or more batteries) cancels out any flight time improvement? Obviously it's not going to be as long as two separate flights with a landing and battery swap in between; I'm just trying to maximize my continuous 'in air' time. I'm trying to figure out where the line is (and if I've already crossed it) with tacking bigger batteries onto the quad and seeing diminishing returns. Thanks!
(Again, for now presume that the quad is strong enough to lift whatever you throw at it. With one 5500 mAh battery, ~470 grams, my max throttle is about 70%.)
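The toy model I have started from is below (Python sketch, example numbers assumed): momentum theory says hover power grows roughly like weight^1.5, while usable energy grows only linearly with added battery mass, so flight time t = E/P eventually peaks.
import numpy as np

base_mass = 1.2         # quad without battery, kg
wh_per_kg = 130.0       # LiPo specific energy, Wh/kg (ballpark)
k = 150.0               # power coefficient, P = k * m^1.5, fit from my rig

batt = np.linspace(0.1, 3.0, 300)   # battery mass sweep, kg
t_min = 60 * (wh_per_kg * batt) / (k * (base_mass + batt) ** 1.5)
print("flight time peaks at %.2f kg of battery" % batt[np.argmax(t_min)])

(With this model the peak lands at twice the battery-less mass; does that match others' experience?)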
|
Do I need a complex system (of gyros, accelerometers, etc.) to detect if a robot has moved forward, or can I simply use an accelerometer?
I'm building a robot that learns to walk, and I need to detect displacement for machine learning. Can I use an accelerometer, or will I need a complicated/expensive inertial navigation system?
|
Typically Mars rovers use wheels, not tracks. I guess Spirit would have had a better chance of getting out of that soft soil if it had had tracks. In general, the structure of the Martian surface is not known in advance, so it seems wiser to be prepared for difficult terrain and thus use tracks.
Why do Mars rovers typically use wheels and not tracks?
|
What are the pros and cons of each? Which is better maintained? Which allows for more functionality? Which utilizes the hardware more efficiently? Etc.
|
While experimenting with the OpenCV Machine Learning Library, I tried to make an example to learn the inverse kinematics of a 2D, 2 link arm using decision trees. The forward kinematics code looks like this:
const float Link1 = 1;
const float Link2 = 2;

CvPoint2D32f forwardKinematics(float alpha, float beta)
{
    CvPoint2D32f ret;
    // Simple 2D, 2 link kinematic chain
    ret.x = Link1 * std::cos(alpha) + Link2 * std::cos(alpha - beta);
    ret.y = Link1 * std::sin(alpha) + Link2 * std::sin(alpha - beta);
    return ret;
}
I generate a random set of 1000 (XY -> alpha) and (XY -> beta) pairs, and then use that data to train two decision tree models in OpenCV (one for alpha, one for beta). Then I use the models to predict joint angles for a given XY position.
It seems like it sometimes gets the right answer, but is wildly inconsistent. I understand that inverse kinematic problems like this have multiple solutions, but some of the answers I get back are just wrong.
Is this a reasonable thing to try to do, or will it never work? Are there other learning algorithms that would be better suited to this kind of problem than decision trees?
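For reference, the procedure sketched in Python (sklearn's trees standing in for the OpenCV ML calls; restricting beta to a single elbow branch is my own addition here, since otherwise the training set contains contradictory (XY -> angle) examples from the two IK solutions):
import numpy as np
from sklearn.tree import DecisionTreeRegressor

L1, L2 = 1.0, 2.0
rng = np.random.default_rng(0)
alpha = rng.uniform(-np.pi, np.pi, 1000)
beta = rng.uniform(0.0, np.pi, 1000)      # one elbow branch only
x = L1 * np.cos(alpha) + L2 * np.cos(alpha - beta)
y = L1 * np.sin(alpha) + L2 * np.sin(alpha - beta)
XY = np.column_stack([x, y])

model_alpha = DecisionTreeRegressor().fit(XY, alpha)
model_beta = DecisionTreeRegressor().fit(XY, beta)
print(model_alpha.predict([[1.5, 1.0]]), model_beta.predict([[1.5, 1.0]]))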
|
Could you implement a simple neural network on a microcontroller board such as the Arduino Uno to be used in machine learning?
|
We have an optional course in our high-school which is about robotics. We're using the Lego Mindstorms NXT and program it with the original Mindstorms-software.
However, we want to advance and use a major programming-language. We have tried NXC and LeJos. Plus, I tried out the Microsoft Robotics Development Studio, but with all these different possibilities we are a little bit overwhelmed.
Because of that (now it becomes interesting), I want to ask: what technology is the best for the NXT, and especially, what is easy to use? I don't want to need 14 steps just to compile a program and get it running on the NXT. Also, it would be nice if it's an extendable language, like C#, but are there some better or easier possibilities?
|
I have a manipulator with 4 revolute joints that have some movement limitations. So when I apply inverse kinematics, I get results which are out of those limits. Please point me to an algorithm that implements inverse kinematics while respecting joint limits.
|
I want to know if it's currently possible for a robot to speak by itself, as King Robota does, or if it's just someone speaking on his behalf.
Youtube video
|
According to Wikipedia's article on SLAM, the original idea came from Randal Smith and Peter Cheeseman (On the Estimation and Representation of Spatial Uncertainty [PDF]) in 1986, and was refined by Hugh F. Durrant-Whyte and J.J. Leonard (Simultaneous map building and localization for an autonomous mobile robot) in 1991.
However, neither paper uses the term "SLAM". Where (and when) did that term come from? Was there a particular author or whitepaper that popularized it?
|
I've got a couple of Vex 269 motors hooked up to an Arduino Duemilanove. These motors run some Vex tank treads. I powered the whole setup with an off-brand 9-volt battery. Everything seems to run great, except that it is only able to deliver about 30 seconds' worth of motor movement. Then the battery quickly isn't able to put out the energy needed to move the treads, and the whole thing slows to being unusable.
What's my problem here? The tank treads seem loose enough that I don't think they restrict the motors so much that they have to put out too much energy to move them. There's nothing else being powered except the Arduino and the motors. Is it because this Enercell 9-volt (alkaline) is just a terrible battery choice? Should I only expect that long a battery life for this robot on a 9-volt? Or is there something else I'm missing? Thank you much!
|
I've got a couple of Vex 269 motors hooked up to an Arduino Duemilanove. These motors run some Vex tank treads. The two motors are run as servos on the Arduino using the Servo library. The problem I'm having is that the two tracks don't turn at the same speed when sent the same servo angle. This is clearly due to the fact that the continuous tracks have so many moving parts that getting identical friction forces on each track is hard.
How do I get them to move at the same speed? Should they move at the same speed given the same servo angle regardless of the friction, meaning the Vex 269 motors just aren't strong enough (and I should use the Vex 369 or some other more powerful motor)? Is it best to just do trial and error long enough to figure out which servo angles result in equal speeds? Should I tinker with the tracks until they have nearly identical friction? Thank you much!
|
I know this is a broad statement, but when you've got support for both TCP and a full-fledged computer on board (to integrate/run an Arduino), does this essentially allow anything that would run on a Linux box (Raspberry Pi) to run and operate your robot?
I know clock speed as well as the dependency libraries for a given code base (on the Pi) would add some complexity here, but what are some of the big issues that I'm overlooking in such a vertically-integrated control system?
Including a Raspberry Pi within a robot... does this allow for a "universal API"?
|
What are the stall and free currents of an electric motor? For example, this Vex motor lists its stall and free currents at the bottom of the page.
I think I understand the general idea, but a detailed description would be helpful.
|
I've found that the Arduino (Duemilanove) has a current limit of 40 mA per pin. Does this include the Vin pin? Or does the Vin pin have some sort of workaround in place on the board to allow for higher currents?
If this is the limit on Vin, is there a good way of still using the power supply jack on the board while allowing other sources to draw on that supply without it needing to pass through the chip first?
Thank you much.
EDIT: For the second part, what should I do if I wanted to get up to something like 2 amps?
|
How do you program an ESC to have a reverse mode? We're looking to control an ESC from a servo board (for a robotics project).
Assuming that the input will be between 0 and 255, we're looking for 127 as off, 255 as fully forward and 0 as full reverse. How do we achieve that?
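On the signal side, the mapping itself is just linear; a sketch (assuming the common 1000-2000 us RC pulse range with ~1500 us neutral; check the ESC's actual endpoints, and note that the ESC firmware must itself support a bidirectional/reversing mode, which most aircraft ESCs do not):
def command_to_pulse_us(cmd):
    # cmd: 0 = full reverse, 127 ~ off, 255 = full forward
    cmd = max(0, min(255, cmd))
    return 1000 + round(cmd * 1000 / 255)   # 127 -> ~1498 us (true center is 127.5)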
|
I'm building open-source bio-research hardware (ask me how you can help!) and I've got this guy here:
My big questions are:
Can I get away with all the grounds being common? (I've got 12 V and 5 V rails needing to be grounded.)
Do I need two sets of capacitors? There are two wired up to the 12 V regulator and two wired to the 5 V regulator. (These are shown in blue.)
I've generally denoted connections which go UNDER the shield in orange, and those above in green.
If anyone happens to see something which might backfire, feel free to point it out, as this is also my first time making anything quite like this!
I've verified the regulator positions and they are correct.
This is a proto-shield for an Arduino R3 Uno.
A larger version of the image can be seen here: https://i.stack.imgur.com/HI8My.jpg
|
I have a Panda Board ES. I am not able to get it to boot. I sent it back to SVTronics to get it checked and they said that the board is OK; I am the one who is not able to configure it properly.
After doing a little research and following all the directions on the Panda Board and Ubuntu websites, I am still not able to get the board to boot. I think the problem is how I am formatting the SD card. I am using Disk Utility on the Mac to format the SD card with an "MS-DOS (FAT)" partition.
I would like to know how to format an SD card on a Macintosh to install Ubuntu on it for the Panda Board ES.
|
For this question assume that the following things are unknown:
The size and shape of the room
The location of the robot
The presence of any obstacles
Also assume that the following things are constant:
The size and shape of the room
The number, shape and location of all (if any) obstacles
And assume that the robot has the following properties:
It can only move forward in increments of absolute units and turn in degrees. Also, the move operation will return true if it succeeded or false if it failed to move due to an obstruction.
A reasonably unlimited source of power (let's say it is a solar powered robot placed on a space station that faces the sun at all times with no ceiling)
Every movement and rotation is carried out with absolute precision every time (don't worry about unreliable data)
Finally please consider the following properties of the robot's environment:
Being on a ceiling-less space station, the room is a safe but frustratingly close distance from passing comets, so dust (and ice) is constantly littering the environment.
I was asked a much simpler version of this question (the room is a rectangle and there are no obstacles; how would you move over it, guaranteeing you cover every part at least once), and afterwards I started wondering how you would approach it if you couldn't guarantee the shape of the room or the absence of obstacles. I've started looking at this with Dijkstra's algorithm, but I'm fascinated to hear how others approach it (or whether there is a well-accepted answer to this; how does Roomba do it?).
|
From what I've seen, LiFePO4 batteries seem like one of the top battery choices for robotics applications. However, I've seen people mentioning that you can't use a charger for a different battery to charge these, but I haven't seen why. If I were to build my own setup to charge LiFePO4 batteries what would it specifically need to do? What kind of voltages or current rates does it need to supply to charge these?
More specifically, I was thinking about setting up a solar charger for these batteries. Is there any immediate reason why this is a bad solution? For example, does the battery need to charge at a current above some minimum for it to work properly?
If you're ambitious enough to provide an example along with your explanation, I'm specifically thinking of having four of these batteries, with two series pairs wired in parallel.
|
I have heard a lot of claims that manually turning an NXT motor by hand can potentially damage it. I was wondering whether this was at least partially true, and whether there is any evidence to confirm or refute this idea.
I know that some projects (e.g. etch-a-sketch) use the built-in rotation sensor to measure how much the motor has turned, so I was thinking that perhaps whether the motor is idle or set on brake is an important distinction, or perhaps there is even a special 'rotation sensor' mode that needs to be switched on in order to prevent damage.
|
I have built a robot from a wheelchair that has worked very well thus far. It is now time for me to take the next step. I need to implement a permanent power circuit with proper protection.
The lowest level of protection I can think of is a fuse, but I would like to take it a step further (current/voltage/direction/switches/high/low voltages). If someone could give some insight on this project of mine, any info would be greatly appreciated.
Moderator comment: Please see How do we address questions about related subject areas? before answering. This question is close to the boundary, but is on-topic here.
|
I am working with students (9th & 10th grade) on robotics and wanted to get a good book which covers basic mechanisms. Does anyone have any recommendations? Searching Google or Amazon yields many results; however, I thought the community might have a standard book to use.
|
I recently purchased a 3-axis accelerometer from Amazon, and can't seem to figure out how it works. I've been looking for quite a while now and haven't found any real clues. The x, y, and z values always seem to return the same values. They change when I tilt or move the accelerometer, but revert to about 120 for each reading. I am currently using this device with the Arduino Uno, using the following code:
int x = 1, y = 2, z = 3;

void setup() {
  pinMode(x, INPUT);
  pinMode(y, INPUT);
  pinMode(z, INPUT);
  Serial.begin(9600);
}

void loop() {
  Serial.println();
  Serial.print(analogRead(x));
  Serial.print(", ");
  Serial.print(analogRead(y));
  Serial.print(", ");
  Serial.print(analogRead(z));
}
Also, how would I go about converting this to tilt?
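From other examples I have seen, the conversion might look like this (a sketch assuming the raw counts are first centered on the ~120 zero-g level and divided by the counts-per-g from the datasheet, and that the board is quasi-static so gravity dominates), but I am not sure:
import math

def tilt_degrees(x_g, y_g, z_g):
    roll = math.atan2(y_g, z_g)
    pitch = math.atan2(-x_g, math.hypot(y_g, z_g))
    return math.degrees(roll), math.degrees(pitch)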
|
I am trying to calibrate a MEMS accelerometer. I was able to calibrate it for the axis that is currently parallel to gravity, which correctly shows 1 g. But the other two axes, which should read 0.00 g, show ±0.02 g instead.
So, e.g., when the accelerometer's x axis is parallel to gravity, it should show (1 g, 0 g, 0 g) and not (1 g, 0.02 g, -0.01 g) like now.
How could I eliminate those values, e.g. further calibrate the accelerometer?
EDIT: The accelerometer's datasheet says nothing about calibration except that "The IC interface is factory calibrated for sensitivity (So) and Zero-g level (Off)" (page 20).
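Would a per-axis two-position fit be the right direction? Something like this sketch, averaging a reading with the axis pointing up (+1 g) and down (-1 g):
def axis_calibration(reading_up, reading_down):
    offset = (reading_up + reading_down) / 2.0  # zero-g level for this axis
    scale = (reading_up - reading_down) / 2.0   # counts per g
    return offset, scale

# corrected reading in g: (raw - offset) / scale

Or do the residual 0.02 g cross-axis readings indicate axis misalignment, which would need a full 3x3 correction matrix (e.g. a least-squares fit over many orientations) rather than per-axis offsets?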
|
The optimal sampling-based motion planning algorithm $\text{RRT}^*$ (described in this paper) has been shown to yield collision-free paths which converge to the optimal path as planning time increases. However, as far as I can see, the optimality proofs and experiments have assumed that the path cost metric is Euclidean distance in configuration space. Can $\text{RRT}^*$ also yield optimality properties for other path quality metrics, such as maximizing minimum clearance from obstacles throughout the path?
To define minimum clearance: for simplicity, we can consider a point robot moving about in Euclidean space. For any configuration $q$ that is in the collision-free configuration space, define a function $d(q)$ which returns the distance between the robot and the nearest C-obstacle. For a path $\sigma$, the minimum clearance $\text{min_clear}(\sigma)$ is the minimum value of $d(q)$ for all $q \in \sigma$. In optimal motion planning, one might wish to maximize minimum clearance from obstacles along a path. This would mean defining some cost metric $c(\sigma)$ such that $c$ increases as the minimum clearance decreases. One simple function would be $c(\sigma) = \exp(-\text{min_clear}(\sigma))$.
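For concreteness, evaluating this cost on a discretized path might look like the following sketch (point obstacles assumed for simplicity; in practice $d(q)$ would come from a distance field or collision checker):
import numpy as np

def min_clearance(path_points, obstacle_points):
    d = lambda q: min(np.linalg.norm(q - o) for o in obstacle_points)
    return min(d(q) for q in path_points)

def cost(path_points, obstacle_points):
    # Note: for concatenated paths, cost(s1 + s2) = max(cost(s1), cost(s2)),
    # not a sum -- this is the non-additivity discussed below.
    return np.exp(-min_clearance(path_points, obstacle_points))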
In the first paper introducing $\text{RRT}^*$, several assumptions are made about the path cost metric so that the proofs hold; one of the assumptions concerned additivity of the cost metric, which doesn't hold for the above minimum clearance metric. However, in the more recent journal article describing the algorithm, several of the prior assumptions weren't listed, and it seemed that the minimum clearance cost metric might also be optimized by the algorithm.
Does anyone know if the proofs for the optimality of $\text{RRT}^*$ can hold for a minimum clearance cost metric (perhaps not the one I gave above, but another which has the same minimum), or if experiments have been performed to support the algorithm's usefulness for such a metric?
|
I can control a relay from an Android smartphone using Arduino and Bluetooth as seen here.
However, it seems too costly to be using an Arduino and a Bluetooth receiver for driving a switch. Since Bluetooth is a radio-frequency technology, is it possible to make a simple Bluetooth receiver which can output 1 or 0 to drive a relay? If so, how tough is that going to be?
The main factor here is the cost, which should be \$1-\$5.
|
I'm a long time Java developer who is starting to learn on the Lego Mindstorms NXT 2.0. Are there any limitations to using the Java API? Which language is the most robust on the platform?
I found a post, Which programming language should I use with the NXT? which mentions many of the alternatives. The answer is helpful, but doesn't mention the different languages' limitations.
|
As I see, there is a huge price gap between the two: \$223 vs. \$99 (at Amazon).
My intention is to use one of those from Ubuntu linux to perform depth sensing, navigation etc. and naturally I prefer the cheaper.
However I am not sure if I miss some important point while betting on the Kinect for Xbox version.
As it seems the Windows version is overpriced because it has the license for development. Here it is stated that there are internal differences but without exact details (The minimum sensing distance seems to be better for Windows version.).
Could anyone give a comparison chart?
It would be good to know about
Connectivity: USB, special connector, ...
Hardware differences: are they the same, or do they really differ in weight, energy consumption, speed, sensing range, ...?
Driver: could I use the Xbox version under Ubuntu?
API usage: could I develop on the Xbox version? Could I use the same/similar API on both? Is the API for the Xbox version mature enough?
License: is it against the license of the Xbox version to develop for home/hobby/educational use?
Thanks.
|
I was wondering what options are there in terms of lightweight (< 5 lbs) robotic arms. I see Robai Cyton Gamma 300, and CrustCrawler AX18 look like interesting options. What lightweight arms do people use/like?
|
Is there an operating system for the Raspberry Pi that is specifically made for running robotics applications? Or an operating system whose purpose is to optimized just to run a few specific programs?
I've been working with an Arduino for a while now. As far as efficiency goes, it makes sense to me to just upload a specific set of commands and have the hardware handle only that, rather than worry about running a full-fledged operating system. Is something like this possible to do on a Raspberry Pi?
|
Is there a way to check if a task, function or variable exists in Not eXactly C?
I know that in PHP you can use isset() to check if a variable exists and function_exists() to do the same for a function, but is there a way to do that in NXC?
I am specifically interested in checking whether a task exists or it is alive.
|
I'm currently working on a line-following robot which uses three sensors to follow a black line. The sensors are pretty much in line and right next to each other.
Right now, I'm doing a simple line follow: if on the line, go forward; otherwise, turn left or right to regain the line. This means that the robot wiggles along the line most of the time.
I'm looking for a better way for this robot to follow the line using three sensors. The program I'm writing is in Not eXactly C code. I'm trying to get the robot to utilize the power of PID control, but I'm not sure how one would go about writing a three-sensor PID line-follower program in NXC.
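The structure I am imagining is sketched below in plain Python (the NXC version would be the same loop with NXC's task syntax; gains are placeholders to tune on the robot). The three sensor readings are collapsed into one signed position error, and PID turns that error into a steering correction:
KP, KI, KD, DT = 25.0, 0.0, 8.0, 0.01    # gains to tune on the robot
BASE = 50                                 # base motor power, percent
integral, prev_error = 0.0, 0.0

def step(left, mid, right):               # 1 if that sensor sees the line
    global integral, prev_error
    total = left + mid + right
    # -1 (line under left) .. 0 (centered) .. +1 (line under right);
    # if all sensors lose the line, keep the last known error
    error = (right - left) / total if total else prev_error
    integral += error * DT
    derivative = (error - prev_error) / DT
    prev_error = error
    turn = KP * error + KI * integral + KD * derivative
    return BASE + turn, BASE - turn       # (left, right) power; sign per wiring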
|
Mars rovers are typically very slow. Curiosity, for example, has average speed of about 30 meters per hour.
Why is it designed so slow? Is it because of some specific power restrictions or for other reasons? What is the top reason why it is so slow?
|
I have an LM2576 circuit plus an adjuster to adjust the output voltage, for controlling motor speed in a line-following robot. The circuit works great when adjusted to give out low voltages, but when I adjust it to higher voltages to make my motors go faster, it works for 1-2 minutes and then suddenly cuts the power, and the motors start to go extremely slow.
Even when I decrease or increase the output voltage, it won't respond until I turn the power off and back on again. The LM2576 datasheet mentions that if the IC is overloaded, it will cut the power until the load drops, so I think the problem might be related to that.
Since this problem has already cost us competitions against 5+ teams, I would like to solve it before our next competition: why does our LM2576 circuit suddenly reduce the power?
|
I want to learn robotics and am really interested in making a robot based on the Kinect sensor.
I see so many projects like this one, and am just wondering how it works at a top level. I downloaded the Kinect SDK and did some basic tutorials, but I just don't think the Microsoft SDK is the library to use for real robotics projects. Any suggestions where to start and what library to use? Any good books in particular, or online resources?
|
I'm building a motion control platform with 3 DoF: 1 axis of rotation (theta) and 2 cartesian (x,y). In most applications, like wrist actuation, you have an X-Y stage with a rotating servo as the stage's payload. This configuration works well since little of the power and data wiring needs to transit to the non-linear moving portion of the platform.
For my inverted application, the stackup is reversed. The rotating axis comes first (from the mounting plane) with the stage connected as the rotating platform's payload. Now nearly all of the wiring (power, command, sensor, and otherwise) must be routed to the non-linearly moving section.
I can see two broad approaches:
The inside track, I route the cabling through the center of rotation.
The outside track, I route the cabling around outside the outer diameter of the rotating platform.
Mathematically, I can see that (1) results in minimum cable length, but maximum torsional loading, while (2) results in maximum cable length, but minimum torsional loading on the wires.
Having limited experience with cable routing (and the associated carriers, strategies, and products) in non-linear applications, my question is...
...which approach is better in practice?
Cost isn't really the issue here. I'm more interested in reliability, ease of construction, availability of commercial components (says something about the popularity of the technique), etc...
e.g. the generic concepts behind why you pick one over the other.
...of course, if you have some part numbers for me I wouldn't be upset <-- I know I'm not supposed ask that here ;-)
|
When using an EKF for SLAM, I often see the motion and measurement models being described as having some noise term.
This makes sense to me if you're doing a simulation, where you need to add noise to a simulated measurement to make it stochastic. But what about when using real robot data? Is the noise already in the measurement, and thus does not need to be added, or does the noise matrix mean something else?
For example, in Probabilistic Robotics (on page 319), there is a measurement model: $z_t^i = h(y,j) + Q_t$, where $Q_t$ is a noise covariance. Does $Q_t$ need to be calculated when working with real data?
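For instance, would it be right to say that the noise is already in the real measurements, and that $Q_t$ is the filter's model of it, estimated offline, e.g. as the sample covariance of repeated readings of a static target (numpy sketch with a placeholder data file)?
import numpy as np

z = np.loadtxt("static_measurements.txt")  # N x 2 range/bearing readings
Q = np.cov(z, rowvar=False)                # 2x2 measurement covariance estimate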
|
Is there anything different between an iRobot Roomba and the Create? I want to start building my own TurtleBot and playing with ROS, but with the cost of all the parts I'm going to have to do it piece by piece. It's pretty easy to find cheap used Roombas.
|
The interesting Kilobot project from Harvard for investigating multi-robot behavior with masses of small dumb robots has been made open hardware for a year now.
However, I cannot find much activity about robot creation, or movies of results. Is it too hard to build the robots, the programmer, or the charger, or is the project just not interesting enough?
|
I'm a software engineer who volunteers with a non-profit that introduces young girls to technology. We have recently been talking about methods of introducing these children to the world of robotics, and I am curious what types of low-cost options we have.
One very appealing idea would be to have an online simulator, or (more preferable) an off-line standalone-simulator that we can build and program simple robots with. Perhaps nothing more than dragging components together, and then programming the interactions between those components.
What solution(s) exist that I might be able to make use of in our outreach?
|
I recently asked a question about the juniper WiFi shield, and am now working with wifly from spark fun. I've been using an updated version of their experimental library, and have been attempting to set up a webserver. Unfortunately, when I attempt to connect through a web browser, I get an error saying that the page sent no data. Here's my code:
#include <SPI.h>
#include <WiFly.h>

WiFlyServer s(80);
boolean current_line_is_blank = true;

void setup() {
  Serial.begin(9600);
  WiFly.begin();
  if (!WiFly.join(placeholderssid, placeholderpass, WPA_MODE)) {
    Serial.println("Connection Failed.");
  } else {
    Serial.println("Connection Succesful!");
    Serial.println(WiFly.ip());
    Serial.println("Receving Client Input...");
    s.begin();
  }
}

void loop() {
  WiFlyClient c = s.available();
  if (c) {
    Serial.println("Server Ready.");
    current_line_is_blank = true;
    while (c.connected()) {
      Serial.println("Client Connected.");
      if (c.available()) {
        Serial.println("Client Available for data.");
        char tmp = c.read();
        Serial.println(tmp);
        if (tmp == '\n' && current_line_is_blank) {
          Serial.println("Sent OK Response.");
          c.println("HTTP/1.1 200 OK");
          c.println("Content-Type: text/html");
          c.println();
          c.print("WiFly Webserver Running!");
          c.println("<br />");
          break;
        }
        if (tmp == '\n') {
          // we're starting a new line
          current_line_is_blank = true;
        } else if (tmp != '\r') {
          // we've gotten a character on the current line
          current_line_is_blank = false;
        }
      }
    }
  }
  delay(2000);
  c.stop();
}
I am using an Arduino Uno, and the serial monitor looks like this:
Connection Succesful!
10.100.1.173
Receving Client Input...
Is there anything obviously wrong with my code?
EDIT:
I now have a new shield, but I'm still working on the same problem. Is it a malfunction in the hardware? I just can't figure this out!
|
I was working on a project to make a bedside night light out of a stuffed butterfly or bird. I was making a mechanism to make the wings flap with a servo motor and some small gears. The servo motor was very loud as it moved, and this was true whether the servo was moving large amounts or small, fast or slow.
I've worked with small servos before and realized they usually are pretty noisy machines, but I can't really explain why.
Why are small servo motors noisy when they move? Is it backlash in the internal gearing?
|
I'm currently building a robot with four legs (quadruped) and 3 DOF (degrees of freedom), and it's been suggested here that I use a simulator to do the learning on a computer and then upload the algorithms to the robot. I'm using an Arduino Uno for the robot, so what software could I use to simulate the learning and then be able to upload it to the Arduino board?
|
Often when I need to perform model fitting I find myself looking for a decent C++ library to do this. There is the RANSAC implementation in MRPT, but I was wondering if there are alternatives available.
To give an example of the type of problem I would like to solve: for a set $A$ of (approx. 500) 3D point pairs $(a, b)$, I would like to find the isometry transform $T$ which maps the points onto each other so that $|a - Tb| < \epsilon$. I would like to get the largest subset of $A$ for a given $\epsilon$. Alternatively, I guess I could fix the subset size and ask for the lowest $\epsilon$.
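To be concrete, the usual recipe for exactly this problem is RANSAC over minimal 3-point samples with a closed-form Kabsch/Umeyama fit for the rigid transform; a self-contained numpy sketch of what I could write myself if no library exists:
import numpy as np

def fit_isometry(a, b):                      # a, b: (k, 3) corresponding points
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (b - cb).T @ (a - ca)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation, det(R) = +1
    return R, ca - R @ cb                    # so that a ~ R b + t

def ransac_isometry(a, b, eps, iters=1000, rng=np.random.default_rng(0)):
    best = np.zeros(len(a), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(a), 3, replace=False)
        R, t = fit_isometry(a[idx], b[idx])
        inliers = np.linalg.norm(a - (b @ R.T + t), axis=1) < eps
        if inliers.sum() > best.sum():
            best = inliers
    return fit_isometry(a[best], b[best]), best  # refit on the largest inlier set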
|
How do we know that an object is contained inside another object or is just lying on top of it?
Let's take the example of a cup, plate, and spoon. The cup is lying on top of the plate, but the spoon is inside the cup. How do we distinguish between the two situations? What are the criteria for deciding whether A is contained inside B or just lying on top of B?
I am trying to solve this using a Kinect.
|
There is a lot of background here, scroll to the bottom for the question
I am trying out the map joining algorithm described in How Far is SLAM From a Linear Least Squares Problem; specifically, formula (36). The code I have written seems to always take the values of the second map for landmark positions. My question is, am I understanding the text correctly or am I making some sort of error. I'll try to explain the formulas as I understand them and show how my code implements that. I'm trying to do the simple case of joining just two local maps.
From the paper, (36) says joining two local maps means finding a state vector $X_{join,rel}$ that minimizes:
$$
\sum_{j=1}^{k}(\hat{X_j^L} - H_{j,rel}(X_{join,rel}))^T(P_j^L)^{-1}(\hat{X_j^L} - H_{j,rel}(X_{join,rel}))
$$
Expanded for two local maps $\hat{X_1^L}$ and $\hat{X_2^L}$ I have:
$$
(\hat{X_1^L} - H_{1,rel}(X_{join,rel}))^T(P_1^L)^{-1}(\hat{X_1^L} - H_{1,rel}(X_{join,rel})) + (\hat{X_2^L} - H_{2,rel}(X_{join,rel}))^T(P_2^L)^{-1}(\hat{X_2^L} - H_{2,rel}(X_{join,rel}))
$$
As I understand it, a submap can be viewed as an integrated observation for a global map, so $P^L_j$ is noise associated with the submap (as opposed to being the process noise in the EKF I used to make the submap, which may or may not be different).
The vector $X_{join,rel}$ is the pose from the first map, the pose from the second map and the union of the landmarks in both maps.
The function $H_{j,rel}$ is:
$$
\begin{bmatrix}
X_{r_{je}}^{r_{(j-1)e}}\\
\phi_{r_{je}}^{r_{(j-1)e}}\\
R(\phi_{r_{(j-1)e}}^{r_{m_{j1}e}})\left(X^{r_{m_{j1}e}}_{f_{j1}} - X^{r_{m_{j1}e}}_{r_{(j-1)e}}\right)\\
\vdots\\
R(\phi_{r_{(j-1)e}}^{r_{m_{jl}e}})\left(X^{r_{m_{jl}e}}_{f_{jl}} - X^{r_{m_{jl}e}}_{r_{(j-1)e}}\right)\\
X_{f_{j(l+1)}}^{r_{(j-1)e}}\\
\vdots\\
X_{f_{jn}}^{r_{(j-1)e}}
\end{bmatrix}
$$
I'm not convinced that my assessment below is correct:
The first two elements are the robot's pose in the reference frame of the previous map. For example, for map 1 the pose will be in the initial frame at $t_0$; for map 2, it will be in the frame of map 1.
The next group of elements are those common to map 1 and map 2, which are transformed into map 1's reference frame.
The final rows are the features unique to map 2, in the frame of the first map.
My matlab implementation is as follows:
function [G, fval, output, exitflag] = join_maps(m1, m2)
  x = [m2(1:3); m2];
  [G, fval, exitflag, output] = fminunc(@(x) fitness(x, m1, m2), x, options);
end

function G = fitness(X, m1, m2)
  m1_f = m1(6:3:end);
  m2_f = m2(6:3:end);
  common = intersect(m1_f, m2_f);
  P = eye(size(m1, 1)) * .002;
  r = X(1:2);
  a = X(3);
  X_join = (m1 - H(X, common));
  Y_join = (m2 - H(X, common));
  G = (X_join' * inv(P) * X_join) + (Y_join' * inv(P) * Y_join);
end

function H_j = H(X, com)
  a0 = X(3);
  H_j = zeros(size(X(4:end)));
  H_j(1:3) = X(4:6);
  Y = X(1:2);
  len = length(X(7:end));
  for i = 7:3:len
    id = X(i + 2);
    if find(com == id)
      H_j(i:i+1) = R(a0) * (X(i:i+1) - Y);
      H_j(i+2) = id;
    else % new lmk
      H_j(i:i+2) = X(i:i+2);
    end
  end
end

function A = R(a)
  A = [cos(a) -sin(a);
       sin(a)  cos(a)];
end
I am using the Optimization Toolbox to find the minimum of the fitness function described above. The fitness function itself is pretty straightforward, I think. The function H returns the vector described above.
When I run join_maps on the two vectors

map_1 = [3.7054; 1.0577; -1.9404;   % robot x, y, angle
         2.5305; -1.0739; 81.0000]; % landmark x, y, id
map_2 = [3.7054; 1.0577; -1.9404;
         2.3402; -1.1463; 81.0000]; % note the slightly different x, y

[G, fv, output, exitflag] = join_maps(map_1, map_2)
The output is:
Warning: Gradient must be provided for trust-region algorithm;
using line-search algorithm instead.
> In fminunc at 341
In join_maps at 7
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
<stopping criteria details>
Local minimum possible.
fminunc stopped because it cannot decrease the objective function
along the current search direction.
<stopping criteria details>
G =
3.7054
1.0577
-1.9404
3.7054
1.0577
-1.9404
2.3402
-1.1463
81.0000
fv =
1.3136e+07
output =
iterations: 1
funcCount: 520
stepsize: 1.0491e-16
firstorderopt: 1.6200e+05
algorithm: 'medium-scale: Quasi-Newton line search'
message: [1x362 char]
exitflag =
5
The question:
My program gives map 2 as the minimum of the map-joining function. It seems like the minimum should be somewhere between map 1 and map 2. I'm pretty sure the problem is with the matrix H. What am I doing wrong?
|
I am working on a quadrotor. I know its position -- $a$, where I would like to go -- target position $b$, and from that I calculate a vector $c$ -- a unit vector that will take me to my target:
c = b - a
c = normalize(c)
Since a quadrotor can move in any direction without rotation, what I have tried to do is
rotate $c$ by the robot's yaw angle
split it into its $x, y$ components
pass them to the robot as roll and pitch angles.
The problem is that this works when the yaw is within about ±5° of 0°, but when the yaw is near +90° or −90° it fails and steers in the wrong direction. My question is: am I missing something obvious here?
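For reference, here is roughly what I'm doing as a C++ sketch (all names mine). My current suspicion is the sign of the yaw rotation, since rotating a world-frame vector into the body frame uses the inverse rotation $R(-\mathrm{yaw})$, and getting that sign wrong produces exactly this near-±90° failure:

#include <cmath>

// Map a desired world-frame direction to roll/pitch commands.
// Sign conventions for roll/pitch differ between flight controllers,
// so treat these as placeholders.
void headingToAngles(double ax, double ay,   // current position a
                     double bx, double by,   // target position b
                     double yaw,             // radians, CCW positive
                     double& roll, double& pitch) {
  // c = normalize(b - a), in the world frame
  double cx = bx - ax, cy = by - ay;
  double n = std::sqrt(cx * cx + cy * cy);
  cx /= n; cy /= n;

  // World -> body rotation is R(-yaw). Using R(+yaw) instead is fine
  // near yaw = 0 but steers the wrong way near +-90 degrees.
  double fwd  =  std::cos(yaw) * cx + std::sin(yaw) * cy;
  double side = -std::sin(yaw) * cx + std::cos(yaw) * cy;

  const double maxTilt = 0.2;  // rad; an arbitrary gain/limit of mine
  pitch = maxTilt * fwd;       // forward component -> pitch
  roll  = maxTilt * side;      // sideways component -> roll
}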
|
Have you ever seen one of those video games that has a headset/goggles you stand in and look around the virtual scene with? I'm building one of those, and I'm trying to design a simple controller. I need the output of the controller to emulate a mouse input. So if you look to the left, it's as if you were moving the mouse to the left. Supposing I use optical encoders, the pan and tilt will need to be in separate locations (a couple of inches apart). It seems that many mouse hacks online have the components very close together.
Do you think it's possible to have one of the encoders some distance away from the controller chip? For OEM purposes, is there a good mouse controller chip that will output USB protocol mouse movements that I could buy in bulk?
Many thanks for any suggestions. Cheers
|
When you've created a map with a SLAM implementation and you have some groundtruth data, what is the best way to determine the accuracy of that map?
My first thought is to use the Euclidean distance between the map and groundtruth. Is there some other measure that would be better? I'm wondering if it's also possible to take into account the covariance of the map estimate in this comparison.
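To make "taking the covariance into account" concrete, I suppose I could replace the Euclidean distance for each landmark with its Mahalanobis distance, weighting the error by the map's own uncertainty:

$$
d_i = \sqrt{(\hat{x}_i - x_i^{gt})^T \, \Sigma_i^{-1} \, (\hat{x}_i - x_i^{gt})}
$$

where $\hat{x}_i$ is the estimated landmark position, $x_i^{gt}$ the corresponding groundtruth, and $\Sigma_i$ the landmark's covariance block from the map estimate. Would averaging that over the map be a sensible metric, or is there something standard?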
|
I'm building a hobby 6-DOF robotic arm and am wondering what the best way is to communicate between the processors (3-4 AVRs, 18 inches max separation). I'd like to have the control loop run on the computer, which sends commands to the microprocessors via an Atmega32u4 USB-to-??? bridge.
Some ideas I'm considering:
RS485
Pros: all processors on same wire, differential signal more robust
Cons: requires additional transceiver chips; need to write (or find?) a protocol to prevent processors from transmitting at the same time (see the sketch after this list for what I have in mind)
UART loop (i.e., the TX of one processor is connected to the RX of the next)
Pros: simple firmware, processors have UART built in
Cons: last connection has to travel length of robot, each processor has to spend cycles retransmitting messages
CANbus (I know very little about this)
My main considerations are hardware and firmware complexity, performance, and price (I can't buy an expensive out-of-box system).
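For the RS485 option, the protocol I have in mind (mentioned in the cons above) is a strict master/slave poll: the USB bridge is the only master, so two slaves can never transmit at once. A rough Arduino-style sketch, everything in it my own invention:

// Frame format (mine): [address][command][argument][checksum]
const uint8_t MY_ADDRESS = 2;  // unique per AVR on the bus
const int DE_PIN = 4;          // driver-enable pin on the RS485 transceiver

void handleFrame(uint8_t cmd, uint8_t arg);  // application logic, elsewhere

void pollLoop() {
  if (Serial.available() >= 4) {
    uint8_t addr = Serial.read();
    uint8_t cmd  = Serial.read();
    uint8_t arg  = Serial.read();
    uint8_t sum  = Serial.read();
    if (addr != MY_ADDRESS) return;                  // not our poll
    if (sum != (uint8_t)(addr + cmd + arg)) return;  // corrupted frame

    handleFrame(cmd, arg);

    digitalWrite(DE_PIN, HIGH);  // claim the bus only when polled
    Serial.write(MY_ADDRESS);    // short ack/reply
    Serial.write(cmd);
    Serial.flush();              // wait until the bytes are out
    digitalWrite(DE_PIN, LOW);   // release the bus for the next slave
  }
}

Byte-level resynchronization after a dropped byte is glossed over here, which is exactly the part I'd rather find in an existing protocol than write myself.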
|
I've got a tread-driven robot, with low precision wheel encoders for tracking distance and an electronic compass for determining heading. The compass has significant (> 1 second) lag when the robot turns quickly, e.g. after reaching a waypoint — pivoting in place to point to its new heading.
What are some ways of dealing with the lag? I would think one could take a lot of measurements and model the compass response; however, this seems problematic since the response is rate-dependent and I don't know the instantaneous rate.
As a simple-but-slow approach, I have the robot turn until it's very roughly pointed in the right direction, then make very small incremental turns with brief measurement pauses until it's pointed the right way. Are there other ways of dealing with this?
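For reference, here is roughly what that slow approach looks like in my code (Arduino-style C++; names and numbers are mine):

// Helpers implemented elsewhere in my sketch:
float readCompassDeg();   // current heading from the compass, degrees
void pivot(int speed);    // tread pivot: + is clockwise, 0 stops

float angleErrorDeg(float target, float current) {
  float e = target - current;
  while (e > 180)  e -= 360;   // wrap into [-180, 180]
  while (e < -180) e += 360;
  return e;
}

void turnTo(float targetDeg) {
  // Phase 1: fast pivot until roughly on heading. The compass lags
  // here, so this overshoots; hence the coarse 20 degree tolerance.
  float e = angleErrorDeg(targetDeg, readCompassDeg());
  while (fabs(e) > 20) {
    pivot(e > 0 ? 150 : -150);
    e = angleErrorDeg(targetDeg, readCompassDeg());
  }
  pivot(0);

  // Phase 2: short nudges, each followed by a pause long enough for
  // the reading to settle (the >1 s lag is the whole problem).
  e = angleErrorDeg(targetDeg, readCompassDeg());
  while (fabs(e) > 3) {
    pivot(e > 0 ? 80 : -80);
    delay(50);      // brief nudge
    pivot(0);
    delay(1500);    // wait out the compass lag
    e = angleErrorDeg(targetDeg, readCompassDeg());
  }
}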
|
I've seen this question, which asks about determining the process noise for an EKF. I don't see anything there about pre-recorded data sets.
My thought on how to determine the noise parameters, assuming ground truth is available, would be to run the data through the EKF several times, varying the noise parameters, and minimize the mean square error.
Is this an acceptable way to determine the noise for a pre-recorded data set? Are there better (or just other) ways of determining the optimal noise values based on the data set alone?
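Concretely, by "vary the noise parameters and minimize the mean square error" I mean a brute-force search like this sketch (C++; the EKF replay itself is passed in by the caller, since mine is just a placeholder here):

#include <cmath>
#include <functional>
#include <limits>
#include <vector>

struct State { double x, y, theta; };

// Mean squared position error against ground truth.
double meanSquareError(const std::vector<State>& est,
                       const std::vector<State>& truth) {
  double sum = 0;
  for (size_t i = 0; i < est.size(); ++i) {
    double dx = est[i].x - truth[i].x;
    double dy = est[i].y - truth[i].y;
    sum += dx * dx + dy * dy;
  }
  return sum / est.size();
}

// runEKF(q, r) replays the whole recorded data set through the filter
// with process noise q and measurement noise r.
void tuneNoise(const std::function<std::vector<State>(double, double)>& runEKF,
               const std::vector<State>& truth,
               double& bestQ, double& bestR) {
  double bestErr = std::numeric_limits<double>::max();
  // Log-spaced grid, since noise magnitudes span orders of magnitude.
  for (int qe = -4; qe <= 0; ++qe)
    for (int re = -4; re <= 0; ++re) {
      double q = std::pow(10.0, qe), r = std::pow(10.0, re);
      double err = meanSquareError(runEKF(q, r), truth);
      if (err < bestErr) { bestErr = err; bestQ = q; bestR = r; }
    }
}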
|
I'm trying to power 7-12 servos, and I was under the impression that each one would need about an amp, but in looking around for an appropriate BEC to supply them, I notice that most seem to output around 1-3.5 amps.
They won't all be running at once, but often, say, 4 will be drawing enough current to move.
Obviously, I'm missing some link in my understanding. How do I determine how many amps will be needed from the power supply?
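To put numbers on my own assumption: at roughly 1 A per moving servo, 4 moving at once is about 4 × 1 A = 4 A, and if I should really budget for stall current (often 2 A or more on a hobby servo), the worst case is nearer 4 × 2 A = 8 A. Either figure is well above a 1-3.5 A BEC, so either my per-servo estimate is wrong or a single BEC is the wrong tool.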
|
I'm trying to program advanced functions in RobotC but I'm not too sure I'm doing it right. I want to specify the motor port I'm using, but I assigned names to all the motors. Funny thing though, they don't exactly work the same as regular variables.
For instance, motor[port7]'s alternate name is light_blue.
#pragma config(Motor, port7, light_blue, tmotorVex393, openLoop)
I'm not really sure if these are new variables, or just specifications. Anyway, here is the variable's signature:
int motor[tMotor motor]
My code plans on doing something similar to this:
void testThing(Motor motorName)
{
    motorName = someValue;
}

testThing(light_blue);
But with this int/motor hybrid variable, I'm not sure how well that would work out, or whether it would work at all.
|
I would like a high torque motor (37 oz-in @ 5760 rpm) for souping up a Scorbot 3 I bought. I really need it to have an encoder to count the number of revolutions and to allow high start-up torque. So far, I'm having difficulty finding a suitable motor.
The closest I've found are:
Revolver S Stubby
(still not ready for purchase)
Team Novak Ballistic 25.5T
I've found other RC car motors, but they are usually too big.
Some alternatives I thought about are:
adding hall sensors to an existing motor - how hard is this?
rewinding a motor with more turns to increase torque (decrease Kv)
Does anybody know of any motors that fit these requirements or modifications I can make to existing ones?
Update: I had almost given up hope, until someone at Homebrew Robotics suggested using the Maxon motor finder.
If you just type in my given torque and speed, it returns 3 motors, but they're all overpowered, because the search interprets your specs as a continuous operating point, whereas my robot will only need that much power 20% of the time, and maybe for 1 second max.
If I type in 12 V, 5000 rpm, and 15 oz-in, it returns 2 brushless motors, of which the EC 45 is the best fit; it has this operating curve:
http://msp.maxonmotor.com/camosHtml/i?SIG=fb9a5d91198caf381122a3d6eab8b1bda3877f30_fa_1e0.png
However, I don't want to pay what Maxon is charging, so instead, I've contacted the guy who makes the yet to be released Revolver Stubby and he has kindly offered to build a custom high torque, low RPM motor for me.
Can anyone comment on why high torque, low RPM motors like the one I want seem so rare? Is it due to a lack of applications (robotics), or is there some intrinsic difficulty in making them?
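For scale, here is the mechanical power behind my original spec, which I suspect is why the continuous-rating searches keep returning big, expensive motors:

$$
P = \tau\,\omega \approx \left(37\ \text{oz-in} \times 0.00706\ \tfrac{\text{N·m}}{\text{oz-in}}\right) \times \left(5760\ \text{rpm} \times \tfrac{2\pi}{60}\right) \approx 0.26\ \text{N·m} \times 603\ \tfrac{\text{rad}}{\text{s}} \approx 158\ \text{W}
$$

That is a lot to ask of a small motor rated continuously, even if my duty cycle only needs it for a second at a time.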
|
I'm familiar with the idea of the uncanny valley theory in human-robot interaction, where robots with almost human appearance are perceived as creepy. I also know that there have been research studies done to support this theory using MRI scans.
The effect is an important consideration when designing robotic systems that can successfully interact with people. In order to avoid the uncanny valley, designers often create robots that are very far from humanlike. For example, many therapeutic robots (Paro, Keepon) are designed to look like animals or be "cute" and non-threatening.
Other therapeutic robots, like Kaspar, look very humanlike. Kaspar is an excellent example of the uncanny valley, since when I look at Kaspar it creeps me out. However, people on the autism spectrum may not experience Kaspar the same way that I do. And according to Shahbaz's comment, children with autism have responded well to Kaspar.
In the application of therapeutic robots for people on the autism spectrum, some of the basic principles of human-robot interaction (like the uncanny valley) may not be valid. I can find some anecdotal evidence (with Google) that people on the autism spectrum don't experience the uncanny valley, but so far I haven't seen any real studies in that area.
Does anyone know of active research in human-robot interaction for people on the autism spectrum? In particular, how does the uncanny valley apply (or doesn't it apply) when people on the autism spectrum interact with a humanlike robot?
|
I got the following homework question:
What are the general differences between robots with Ackermann steering and standard bicycles or tricycles concerning the kinematics?
But, I don't see what differences there should be, because a car-like robot (with 2 fixed rear wheels and 2 dependent adjustable front wheels) can be seen as a tricycle-like robot (with a single adjustable front wheel in the middle).
Then, if you let the distance between the two rear wheels approach zero, you get the bicycle.
So, I can't see any difference between those three mobile robots. Is there something I am missing?
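To be explicit about why I think they're the same: the kinematic model I would write down in all three cases is the standard bicycle model, with pose $(x, y, \theta)$, forward speed $v$, virtual steering angle $\phi$ and wheelbase $L$:

$$
\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \frac{v}{L}\tan\phi
$$

As far as I can tell, the Ackermann linkage only determines how the two physical front-wheel angles realize the single virtual angle $\phi$ (and imposes a maximum $|\phi|$), without changing these equations.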
|
I am most familiar with SLAM maps that are point clouds, usually in the form of a vector like $\langle x, y, \theta, f_{1x}, f_{1y}, \ldots, f_{nx}, f_{ny} \rangle$. I also understand how to create a map like this using an EKF.
Today I came across a .graph file format, which as you would expect consists of vertices and edges in the format:
VERTEX2 id x y orientation
EDGE2 observed_vertex_id observing_vertex_id forward sideward rotate inf_ff inf_fs inf_ss inf_rr inf_fr inf_sr
I know that there's a connection between matrices and graphs (an adjacency matrix for example). But it's not clear to me how this graph format of a map is equivalent to a point cloud map that I'm familiar with.
What is the relationship? Are the vertices both poses and landmarks? Are they in a global reference frame? How is this created from, say, velocity information and a range/bearing sensor? Is there a transformation between a graph map and a point cloud?
|
I'm using my own code to create a quadcopter robot. The hardware part is done but I need to balance the copter.
The original video demonstrating the problem was shared via dropbox and is no longer available.
I have tried playing with the speed of each motor to get it balanced, but that didn't work.
I actually have a gyro and an accelerometer on board. But how should I adjust the motor speeds based on these values? What are the rules I should be aware of?
Is there a better approach than trial and error? Where should I begin? Any tips?
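From what I've read so far, the usual structure is one feedback loop per axis (PID seems to be the standard choice) whose output gets mixed onto the four motors. Is something like this sketch the right starting point? The gains and the +-configuration mixing signs are pure guesses on my part:

// One PD term per axis: angle from accel+gyro fusion, rate from the gyro.
float pd(float setpoint, float angle, float rate, float kp, float kd) {
  return kp * (setpoint - angle) - kd * rate;
}

// Level-hold for a +-configuration frame: motors 0/1 are front/back,
// 2/3 are left/right. Opposite motors get opposite corrections.
void stabilize(float rollAngle, float pitchAngle,
               float rollRate, float pitchRate,
               float throttle, float motorOut[4]) {
  float rollCmd  = pd(0, rollAngle,  rollRate,  1.5f, 0.4f);  // guessed gains
  float pitchCmd = pd(0, pitchAngle, pitchRate, 1.5f, 0.4f);

  motorOut[0] = throttle + pitchCmd;  // front
  motorOut[1] = throttle - pitchCmd;  // back
  motorOut[2] = throttle + rollCmd;   // left
  motorOut[3] = throttle - rollCmd;   // right
}

If that is the right shape, how do people go about tuning the gains?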
|
I would like to design a robotic arm to hold a weight X at length Y (in my case I want to hold X=2.5 lbs at Y = 4 inches). Starting out simply, I would like try building an arm with a gripper plus one servo joint.
[Servo Joint] ----- Y ------ [Gripper]
When designing the arm, should I say that the gripper has to have enough torque to hold the desired weight (e.g. 2.5 lbs) at a minimal distance (however long the fingers are), and then design the servo joint to bear the weight of the gripper plus the load? I would like to be able to hold the object at full extension.
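My back-of-the-envelope torque budget for the servo joint, ignoring the arm's own weight and guessing a 0.5 lb gripper (both assumptions of mine):

$$
\tau = \underbrace{2.5\ \text{lb} \times 4\ \text{in}}_{\text{payload}} + \underbrace{0.5\ \text{lb} \times 4\ \text{in}}_{\text{gripper}} = 12\ \text{lb-in} \approx 192\ \text{oz-in}
$$

before any margin for acceleration. Is that the right way to budget it?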
|
I am trying to build a semi-analog timer. Something like those old egg timers that you rotate the face of. I want a knob that I can turn that can be read by a microcontroller, and I also want the microcontroller to be able to position the knob. I'd like to implement "stops" by letting the microcontroller push the knob towards certain positions. As it runs down, the knob should turn. This is my first project of this kind; I've built small robots in the past, but it's been many years.
I've considered hacking a servo motor to read its position, but the small hobby servos I've tried are too hard to turn, very noisy, and pick up too much momentum when turned. They don't act like a good knob.
I'm now considering a rotary encoder connected to a motor, but after hunting at several sites (SparkFun, ServoCity, DigiKey, Trossen, and some others), I haven't been able to find anything that seemed appropriate. I'm not certain how to find a motor that's going to have the right kind of low torque.
This seems like it shouldn't be a really uncommon problem. Is there a fairly normal approach to creating a knob that can be adjusted both by the user and a microcontroller?
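On the software side, what I picture for the microcontroller-driven "stops" is a weak virtual spring toward the nearest detent, gentle enough for a person to overpower (Arduino-style sketch; every name and number here is my own invention):

#include <math.h>  // lround

// Implemented elsewhere:
long readEncoder();        // quadrature count from the rotary encoder
void driveMotor(int pwm);  // signed PWM; small magnitudes = gentle torque

const long COUNTS_PER_STOP = 96;  // detent spacing, in encoder counts
const int  MAX_PWM = 60;          // deliberately weak, so a hand wins

// Call this every few milliseconds.
void detentUpdate() {
  long pos = readEncoder();
  // Snap to the nearest multiple of COUNTS_PER_STOP.
  long nearest = lround((double)pos / COUNTS_PER_STOP) * COUNTS_PER_STOP;
  long error = nearest - pos;
  // Proportional pull toward the stop, clamped to stay gentle.
  long pwm = constrain(error * 2, (long)-MAX_PWM, (long)MAX_PWM);
  driveMotor((int)pwm);
}

Running the timer down would then just mean stepping the target detent over time. The open question for me is still which motor has low enough unpowered friction to feel like a knob.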
|
The fact is that the more I search, the less I find real autonomous robots in use. Companion robots are all toys with limited, mostly useless functionality. Whenever there is a natural disaster, you don't see operational search-and-rescue robots in the news. Even the military robots in service are remotely controlled machines, not intelligent ones. Industrial robotic arms are deterministic machines. The only robots with some level of autonomous functionality are cleaning bots, warehouse bots, and farming robots.
On the other hand, today:
the artificial intelligence algorithms are very good at making decisions
the sensing technologies are very sophisticated
the communication technologies are very fast
we can manufacture cheap parts
people are extremely gadget savvy
So why are there no real robots in our day-to-day lives? No investment in the domain? No market yet? Not enough knowledge in the domain? A missing technology? Any ideas?
|
I am looking for a good embedded PC to run ROS on. I recently came across a couple of little machines using new, massively multi-core processors, such as the Epiphany and the XMOS.
Since the one thing that ROS really seems to want is cores, would ROS be able to take advantage of all of these cores? Or are they all just too feeble with too little RAM to be of any use?
Would it make more sense to focus on machines with fewer, more powerful cores?
|
I was wondering: do we have real nanobots, like the ones in the movies?
I think we have bots that can move through blood vessels. Am I right?
|