I'm working on a robot that needs image processing to analyze data received from cameras.
As I searched for ARM and AVR libraries, I found that there is no DIP (digital image processing) library for these microcontrollers, and their limited RAM makes processing image data hard. I want to know: is there any hardware that connects to a Windows or Android (or similar) device and makes it possible for that device to connect to actuators and sensors?
Thank you for helping.
|
Given three sets of joint angles for which the end effector is in the same position, is it possible to find the DH parameters?
If the robot has 2 DOF in the shoulder, 2 DOF in the elbow, and 1 DOF in the wrist, with DH parameters of upper arm length, elbow offset in one axis, and lower arm length, can this be solved? If so, how?
I tried iterating through DH parameters to minimize the end-effector position error via forward kinematics, but this doesn't seem to work, since DH parameters of 0 for everything give a trivial minimum distance of 0.
The reason for doing this: given a physical robot, no DH parameters are known, and measuring them by hand is not accurate.
|
I have an Arduino talking to a Create 2 via the serial interface. But before sending commands to the robot, I have to power it on by manually pushing the power button on the robot. How can I make the robot turn on via the mini-DIN 7 port, instead of pushing the power button?
I notice that when I plug the iRobot serial-to-USB cable into that port, the robot turns on immediately, ready to receive the first command (command 128), so apparently there is a way to turn on the robot via that port.
|
I have 28 PMAC motors (3-phase, 230 V, rated 0.5 kW, 1.05 kW and 1.21 kW) in a motor control center. Please suggest a time-staggered switching scheme in order to avoid tripping due to voltage sag, swell, flicker, etc.
|
Can anybody help me figure out the DH parameters for the case where two links with a revolute joint lie in the same plane, so that the variable angle is 0 but the twist is not 0? This is a simple drawing. I think that the x-axis, which is perpendicular to both z-axes, points away and passes through the intersection of the z-axes. The link length is 0, the twist is a, and the offset is d. Would that be correct?
Thanks.
|
I don't know what kind of radiation animals emit. Humans emit IR radiation, which is why PIR sensors help identify humans. Please make suggestions if you have knowledge about sensors that detect animals.
|
I have a quadcopter controlled by a KK2.1.5 flight controller. I have been flying it without problems, but now I am facing one. When I start and arm the KK2.1.5 and give throttle, it starts turning in some direction and accelerating. I double-checked the motor pin locations and everything else; they are correct. When I looked at the gyro bubble of the KK2.1.5, it wasn't at the middle of the crosshair. I turned the quad off and then on and checked the bubble again; it was at the centre. Again, when I gave it throttle, it started turning in some direction, and when I checked, the bubble wasn't on centre this time either. So in the armed state the gyro bubble moves away from centre when throttle is applied, due to which the quad overcorrects itself. I now understand that the gyro goes off centre due to vibration of the FC. What should I do to isolate it from vibration? What material should I place in between so that vibrations are almost zero?
|
In most papers about IBVS, the camera velocity is computed and then used as a pseudo-input for the manipulator (e.g. this one). Is there any work in which the dynamic Lagrange model $H(q) \ddot q + C(q,\dot q)\dot q + g(q) = \tau$ of the manipulator is taken into consideration in order to compute the torque required to move the joints accordingly?
|
I tried this code and it worked:
void loop()
{
  int y = 104;
  int x2 = vertical2.currentPosition();
  int z2 = y - x2;
  int x1 = horizontal2.currentPosition();
  int z1 = y - x1;
  horizontal2.moveTo(z1);
  horizontal2.run();
  vertical2.moveTo(z2);
  vertical2.run();
}
However, the problem is that when the above code was placed inside a conditional, such as an if statement, it did not work. Can anyone help me solve this problem? I am using the AccelStepper library for the code above.
void loop()
{
  int dummy = 1;
  if (dummy == 1)
  {
    int y = 104;
    int x2 = vertical2.currentPosition();
    int z2 = y - x2;
    int x1 = horizontal2.currentPosition();
    int z1 = y - x1;
    horizontal2.moveTo(z1);
    horizontal2.run();
    vertical2.moveTo(z2);
    vertical2.run();
  }
}
|
I'm trying to apply the direct multiple shooting method to my problem.
Objective function: $t_f$
Constraints:
$q < q_{max}$
$v < v_{max}$ (where $v = dq/dt$)
$a < a_{max}$
$\tau < \tau_{max}$ (where $\tau = M(q)a + B(q,v) + G(q)$)
$C(q) = r_0 - |P - P_0|$ (obstacle avoidance)
Boundary conditions: $q(0) = q_0$ ($q_0$ is given), $q(t_f) = q_f$ ($q_f$ is given), $v(0) = 0$, $v(t_f) = 0$
As I understand from the theory, I have to divide the variables as state variables and control variables.
State variables are: q and v
Control variable is: tau
In each time interval I'll generate cubic splines of the form $q(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$.
Could you help me figure out how to implement this? I don't understand what the ODE is here, or how I should construct the algorithm.
Are there any examples of this?
Edit: to make the equations clear, I'll rewrite them here, based on the link.
State variables:
$x_1(t) = (q_1(t), \dots, q_n(t))^T$ and $x_2(t) = (\dot q_1(t), \dots, \dot q_n(t))^T$, and the derivatives of the state variables satisfy $\dot x(t) = f(x(t), u(t))$ where
$$f(x(t), u(t)) = \begin{pmatrix} (\dot q_1(t), \dots, \dot q_5(t))^T \\ M(x(t))^{-1}(u(t) - N(x(t))) \end{pmatrix}$$
I don't know how to insert the cubic polynomials into that equation system, or how to solve the ODE. Will it be something like [T,X] = ode45('f', [0 t_f], [q_0 q_f])?
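In direct multiple shooting, each shooting interval simply integrates the state ODE $\dot x = f(x,u)$ forward from that interval's initial guess while the control ($\tau$ here) is held at its parameterized value; the NLP solver then enforces continuity between intervals and the path constraints at the grid points. A minimal sketch of that per-interval integration using SciPy (the names M_inv_times and N are hypothetical placeholders for the robot-specific inertia and nonlinear terms, and the toy example is a unit-inertia 1-DOF system, not the actual manipulator):
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, x, u, M_inv_times, N):
    # state is x = [q, v]; accelerations come from tau = M(q) a + N(q, v)
    n = x.size // 2
    q, v = x[:n], x[n:]
    a = M_inv_times(q, u - N(q, v))
    return np.concatenate([v, a])

# toy 1-DOF example: M = 1, N = 0, constant torque over one shooting interval
sol = solve_ivp(dynamics, [0.0, 0.5], np.array([0.0, 0.0]),
                args=(np.array([0.2]),
                      lambda q, rhs: rhs,              # stands in for M(q)^{-1} * rhs
                      lambda q, v: np.zeros_like(q)),  # stands in for N(q, v)
                max_step=0.01)
print(sol.y[:, -1])  # state at the end of the interval (the continuity constraint uses this)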
|
I am currently trying to implement an inverse kinematics solver for Baxter's arm using only 3 pitch DOF (that is why the yGoal value is redundant; that is the axis of revolution). I mostly copied the pseudocode on page 26 of http://graphics.cs.cmu.edu/nsp/course/15-464/Fall09/handouts/IK.pdf .
def sendArm(xGoal, yGoal, zGoal):
    invJacob = np.matrix([[3.615, 0, 14.0029], [-2.9082, 0, -16.32], [-3.4001, 0, -17.34]])
    ycurrent = 0
    while xcurrent != xGoal:
        theta1 = left.joint_angle(lj[1])
        theta2 = left.joint_angle(lj[3])
        theta3 = left.joint_angle(lj[5])
        xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3)
        xIncrement = xGoal - xcurrent
        zIncrement = zGoal - zCurrent
        increMatrix = np.matrix([[xIncrement], [0], [zIncrement]])
        change = np.dot(invJacob, increMatrix)
        left.set_joint_positions({lj[1]: currentPosition + change.index(0)/10})  # First pitch joint
        left.set_joint_positions({lj[3]: currentPosition + change.index(1)/10})  # Second pitch
        left.set_joint_positions({lj[5]: currentPosition + change.index(2)/10})  # Third pitch joint

def forwardKinematics(theta1, theta2, theta3):
    xcurrent = 370.8 * sine(theta1) + 374 * sine(theta1+theta2) + 229 * sine(theta1+theta2+theta3)
    zcurrent = 370.8 * cos(theta1) + 374 * cos(theta1+theta2) + 229 * cos(theta1+theta2+theta3)
    return xcurrent, zcurrent
Here is my logic in writing this:
I first calculated the Jacobian 3x3 matrix by taking the derivative of each equation seen in the forwardKinematics method, arriving at:
[370cos(theta1) + 374cos(theta1+theta2) .....
0 0 0
-370sin(theta1)-374sin(theta1+theta2)-...... ]
In order to arrive at numerical values, I inputted a delta theta change for theta1,2 and 3 of 0.1 radians. I arrived at a Jacobian of numbers:
[0.954 0.586 .219
0.0000 0.000 0.0000
-.178 -.142 -0.0678]
I then input this matrix into a pseudoinverse solver and came up with the values you see in the invJacob matrix in the code I posted. I then multiplied this by the difference between the goal and where the end effector currently is, and applied a tenth of this value to each of the joints to make small steps toward the goal. However, this just goes into an infinite loop and my numbers are way off from what they should be. Where did I go wrong? Is a complete rewrite of this implementation necessary? Thank you for all your help.
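For reference, the resolved-rate scheme in the linked slides is usually written with the Jacobian recomputed at every iteration (rather than evaluated once at a single set of angles and its pseudoinverse reused throughout). A minimal NumPy sketch of that loop, assuming the same three pitch joints and the link lengths quoted above converted to metres; it is an illustration of the method, not Baxter-specific code:
import numpy as np

L1, L2, L3 = 0.3708, 0.374, 0.229  # link lengths in metres, taken from the post

def fk(t1, t2, t3):
    x = L1*np.sin(t1) + L2*np.sin(t1+t2) + L3*np.sin(t1+t2+t3)
    z = L1*np.cos(t1) + L2*np.cos(t1+t2) + L3*np.cos(t1+t2+t3)
    return np.array([x, z])

def jacobian(t1, t2, t3):
    c1, c12, c123 = np.cos(t1), np.cos(t1+t2), np.cos(t1+t2+t3)
    s1, s12, s123 = np.sin(t1), np.sin(t1+t2), np.sin(t1+t2+t3)
    return np.array([[ L1*c1 + L2*c12 + L3*c123,  L2*c12 + L3*c123,  L3*c123],
                     [-L1*s1 - L2*s12 - L3*s123, -L2*s12 - L3*s123, -L3*s123]])

def solve_ik(goal, theta=np.array([0.1, 0.1, 0.1]), step=0.1, tol=1e-3, iters=2000):
    for _ in range(iters):
        err = goal - fk(*theta)              # Cartesian error at the current angles
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(*theta)                 # Jacobian re-evaluated every iteration
        theta = theta + step * (np.linalg.pinv(J) @ err)
    return theta

print(solve_ik(np.array([0.5, 0.5])))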
|
I am working on a thesis right now involving a robot. My research requires the robot to be attached to a linear guide rail. The robot has to detect a human at very close range (about 2 meters distance). What is the easiest and most efficient method, or set of components, that I could use?
|
I am using this sensor to make a self-balancing robot. First I soldered the header (only to VCC, GND, SCL, SDA) on the IMU board, on the side where no components are mounted. Then I connected it to an Arduino Uno R3 (VCC to 3.3V/5V, GND to one of the three GNDs, SCL to SCL and SDA to SDA; the first time using the pins next to AREF, the second time A5 and A4) and uploaded this sketch: https://github.com/adafruit/Adafruit_ADXL345/blob/master/examples/sensortest/sensortest.pde
When I opened the serial monitor I got:
Accelerometer Test
FF Ooops, no ADXL345 detected ... Check your wiring!
I thought maybe I had soldered the header in the wrong direction (judging from pictures and videos on the internet), so I desoldered the header (with a soldering iron, no other technique), but there was still some solder around the holes which I could not remove. Then, while checking continuity between the pins with a multimeter (in resistance mode), I found the resistance to be 20k (SCL-SDA), 220k (SCL-GND) and 220k (SDA-GND); between VCC and the three other pins the multimeter shows 1 (range 2000k). I then soldered the header on the opposite side (this time the side where the other components are mounted). The serial monitor still shows the same output, and so does the multimeter. So where is the problem? Is it the soldering? Do I need to desolder the header again and clean the leftover solder (with a Chip Quik type desoldering technique) on the side with no components mounted? Is there any hope that I won't need to buy the board again?
Here is a picture of the side with no components mounted, after desoldering and resoldering.
|
I've recently been trying to use Gazebo to do some modelling for a couple tasks. I have a robot that's effectively able to locate a ball and get x,y coordinates in terms of pixels using a simple RGB camera from the Kinect. I also have a point cloud generated from the same Kinect, where I hope to find the depth perception of the ball using the X,Y coords sent from the circle recognition from my RGB camera. My plan earlier was to convert the X,Y coordinates from the RGB camera into meters using the DPI of the Kinect, but I can't find any info on it. It's much, much harder to do object recognition using a Point Cloud, so I'm hoping I can stick to using an RGB camera to do the recognition considering it's just a simple Hough Transform. Does anybody have any pointers for me?
|
How would you vertically tilt a camera 180 degrees using mirrors?
I'm trying to add a pan/tilt mechanism to a Raspberry Pi's camera. The camera uses one of those flat cables with unstranded wires, and even with strain relief I don't trust it to handle repeated bending, so I'm trying to design a tilt mechanism that allows the camera to be rigidly mounted so no wires move. The tilting also has to happen very quickly, so I'm trying to minimize the amount of mass I need to move.
Then I saw the Oculus kit that actuates a mirror to effectively tilt a laptop's fixed webcam. I'm trying to extend this idea, but I'm having trouble working out the mechanics that would allow the tilt to extend to 180 degrees. The layout of the Oculus's mechanism only supports a tilt angle of about 90 degrees, and the mirrors are relatively large. Is it possible to modify this to support 180 degrees?
Are there other ways to "bend" the view of a camera without having to move the actual camera?
|
Can a single-propeller drone fly efficiently and stably enough to provide a good flight and stable camera footage?
|
I would like to clarify my understanding of singular configurations. If I move the robot in joint space, only one joint at a time, can I arrive at a singular configuration? If so, how?
Thanks
|
For a mod on the Dynamixel RX-24F I need to remove the enclosed PCB. I removed all the screws, but the PCB doesn't come out easily (without applying more force than I'm comfortable with). It seems to be stuck at the three large solder points in the white area. Does anyone have experience with this particular servo?
It might be glued or soldered to the case, but I'm not quite certain. Any help is appreciated.
|
I asked a question similar to this earlier, but I believe I have a new problem. I've been working on figuring out the inverse kinematics given an x, y, z coordinate. I've adopted the Jacobian method: taking the derivative of the forward kinematics equations with respect to their angles, assembling them into the Jacobian, then taking its inverse and multiplying it by a step towards the goal distance. For more details, see http://www.seas.upenn.edu/~meam520/notes02/IntroRobotKinematics5.pdf page 21 onwards.
For a better picture, below is something:
Below is the code for my MATLAB script, which runs flawlessly and gives a solution in under 2 seconds:
ycurrent = 0; %Not using this
xcurrent = 0; %Starting position (x)
zcurrent = 0; %Starting position (y)
xGoal = .5; %Goal x/z values of (1, 1)
zGoal = .5;
theta1 = 0.1; %Angle of first DOF
theta2 = 0.1; %Angle of second DOF
theta3 = 0.1; %Angle of third DOF
xchange = xcurrent - xGoal %Current distance from goal
zchange = zcurrent - zGoal
%Length of segment 1: 0.37, segment 2:0.374, segment 3:0.2295
while ((xchange > .02 || xchange < -.02) || (zchange < -.02 || zchange > .02))
    in1 = 0.370*cos(theta1); %These equations are stated in the link provided
    in2 = 0.374*cos(theta1+theta2);
    in3 = 0.2295*cos(theta1+theta2+theta3);
    in4 = -0.370*sin(theta1);
    in5 = -0.374*sin(theta1+theta2);
    in6 = -0.2295*sin(theta1+theta2+theta3);
    jacob = [in1+in2+in3, in2+in3, in3; in4+in5+in6, in5+in6, in6; 1,1,1];
    invJacob = inv(jacob);
    xcurrent = .3708 * sin(theta1) + .374 * sin(theta1+theta2) + .229 * sin(theta1+theta2+theta3)
    zcurrent = .3708 * cos(theta1) + .374 * cos(theta1+theta2) + .229 * cos(theta1+theta2+theta3)
    xIncrement = (xGoal - xcurrent)/100;
    zIncrement = (zGoal - zcurrent)/100;
    increMatrix = [xcurrent; zcurrent; 1]; %dx/dz/phi
    change = invJacob * increMatrix; %dtheta1/dtheta2/dtheta3
    theta1 = theta1 + change(1)
    theta2 = theta2 + change(2)
    theta3 = theta3 + change(3)
    xcurrent = .3708 * sin(theta1) + .374 * sin(theta1+theta2) + .229 * sin(theta1+theta2+theta3)
    zcurrent = .3708 * cos(theta1) + .374 * cos(theta1+theta2) + .229 * cos(theta1+theta2+theta3)
    xchange = xcurrent - xGoal
    zchange = zcurrent - zGoal
end
Below is my Python code, which goes into an infinite loop and gives no results. I've looked over the differences between it and the MATLAB code, and they look exactly the same to me. I have no clue what is wrong. I would be forever grateful if somebody could take a look and point it out.
def sendArm(xGoal, yGoal, zGoal, right, lj):
    ycurrent = xcurrent = zcurrent = 0
    theta1 = 0.1
    theta2 = 0.1
    theta3 = 0.1
    xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3)
    xchange = xcurrent - xGoal
    zchange = zcurrent - zGoal
    while ((xchange > 0.05 or xchange < -0.05) or (zchange < -0.05 or zchange > 0.05)):
        in1 = 0.370*math.cos(theta1)  # Equations in1-6 are in the pdf I linked to you (inv kinematics section)
        in2 = 0.374*math.cos(theta1+theta2)
        in3 = 0.2295*math.cos(theta1+theta2+theta3)
        in4 = -0.370*math.sin(theta1)
        in5 = -0.374*math.sin(theta1+theta2)
        in6 = -0.2295*math.sin(theta1+theta2+theta3)
        jacob = matrix([[in1+in2+in3, in2+in3, in3], [in4+in5+in6, in5+in6, in6], [1, 1, 1]])  # Jacobian
        invJacob = inv(jacob)  # inverse of jacobian
        xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3)
        xIncrement = (xGoal - xcurrent)/100  # dx increment
        zIncrement = (zGoal - zcurrent)/100  # dz increment
        increMatrix = matrix([[xIncrement], [zIncrement], [1]])
        change = invJacob*increMatrix  # multiplying both matrices
        theta1 = theta1 + change.item(0)
        theta2 = theta2 + change.item(1)
        theta3 = theta3 + change.item(2)
        xcurrent, zcurrent = forwardKinematics(theta1, theta2, theta3)
        xchange = xcurrent - xGoal
        zchange = zcurrent - zGoal
        print "Xchange: %f ZChange: %f" % (xchange, zchange)
        print "Goals %f %f %f" % (theta1, theta2, theta3)
        right.set_joint_positions(theta1)  # First pitch joint
        right.set_joint_positions(theta2)  # Second pitch
        right.set_joint_positions(theta3)  # Third pitch joint

def forwardKinematics(theta1, theta2, theta3):
    xcurrent = .3708 * math.sin(theta1) + .374 * math.sin(theta1+theta2) + .229 * math.sin(theta1+theta2+theta3)
    zcurrent = .3708 * math.cos(theta1) + .374 * math.cos(theta1+theta2) + .229 * math.cos(theta1+theta2+theta3)
    return xcurrent, zcurrent
|
I have two servo motors that I rigged up to use as a telescope remote focuser. The idea is to turn one servo by hand and use the power generated to turn the other, which is geared to a telescope focuser knob. I noticed that when the two servos are electrically connected, it is noticeably harder to turn a servo compared to turning it by itself. I tried changing the polarity of the connection hoping it would help, but it is still harder to turn the servo when they are connected. Does anyone know why this is?
|
I want to turn some motors using my Raspberry Pi. I am able to turn an LED on and off using the 3.3V GPIO pin. For the motors, I tried using an L293D chip as per the instructions on this link.
What happened is that the very first time I set the circuit up for one motor, it worked perfectly. But then I moved the Pi a little and the motor has since refused to work. I even bought a new Pi and still had no luck with the circuit. I then bought an L298N board that fits snugly on top of the GPIO pins of the Pi and followed the instructions in this video.
Still no luck; the motor just won't run with either Pi. I am using four AA batteries to power the motor and connecting the Pi to a power supply from the wall. What could possibly be the problem here?
|
For a quadcopter, what is the relationship between roll, pitch, and yaw in the earth frame and acceleration in the x, y, and z dimensions in the earth frame? To be more concrete, suppose roll ($\theta$) is a rotation about the earth frame x-axis, pitch ($\phi$) is a rotation about the earth frame y-axis, and yaw ($\psi$) is a rotation about the z-axis. Furthermore, suppose $a$ gives the acceleration produced by all four rotors, i.e. acceleration normal to the plane of the quadcopter. Then what are $f, g, h$ in
$$a_x = f(a,\theta,\phi,\psi)$$
$$a_y = g(a,\theta,\phi,\psi)$$
$$a_z = h(a,\theta,\phi,\psi)$$
where $a_x$, $a_y$, and $a_z$ are accelerations in the $x$, $y$, and $z$ dimensions.
I've seen a number of papers/articles giving the relationship between x,y,z accelerations and attitude, but it's never clear to me whether these attitude angles are rotations in the earth frame or the body frame.
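One commonly used answer, stated here as an assumption rather than the definitive convention: if the body-to-earth rotation is built as yaw-pitch-roll, $R = R_z(\psi)R_y(\phi)R_x(\theta)$ (with the question's convention of roll $\theta$ about x and pitch $\phi$ about y), and gravity and drag are ignored, then the earth-frame acceleration is the thrust acceleration $a$ times the third column of $R$:
$$a_x = a(\cos\psi \sin\phi \cos\theta + \sin\psi \sin\theta)$$
$$a_y = a(\sin\psi \sin\phi \cos\theta - \cos\psi \sin\theta)$$
$$a_z = a\cos\phi \cos\theta$$
(subtract $g$ from $a_z$ if gravity is included). Whether a given paper's formulas match this depends entirely on its chosen rotation order and on whether its angles are defined in the earth or body frame, which is exactly the ambiguity noted above.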
|
I am trying to solve a forward kinematics problem for a 3-DOF manipulator.
I am working with the Robotics Toolbox for MATLAB created by Peter Corke. After calculating the DH parameters and entering them into MATLAB to compute the forward kinematics, the plotted robot is not what it should be.
I guess I made some mistakes calculating the DH parameters.
Attached is the file where you can see the DH frames calculated for each joint and the DH parameters for each frame.
Could anyone give me a clue as to whether this is the correct answer?
Here is the image with the frames calculated by me.
And here is the robot I get from MATLAB (using the Robotics Toolbox by P. Corke).
|
Hello, I am trying to build a quadcopter for a school project, and I need to finish quickly, before our mission trip, because I was asked to finish it in time to take it along. But I am having a problem figuring out which charger would work with my ZIPPY Compact 6200mAh 4S 40C LiPo pack.
Here are the specs on the battery:
Capacity: 6200mAh
Voltage: 4S1P / 4 Cell / 14.8V
Discharge: 40C Constant / 50C Burst
Weight: 589g (including wire, plug & case)
Dimensions: 158x46x41mm
Balance Plug: JST-XH
Discharge Plug: HXT4mm
I will also be running the Tarot T4-3D Brushless Gimbal for GoPro (3-axis).
If anyone can tell me a good battery to run it off of, or whether it would be better to run it off my main battery, that would help.
Thanks in advance.
|
What 2D SLAM implementations (preferably included in ROS) can be used with simple distance sensors like IR or ultrasonic rangefinders?
I have a small mobile platform equipped with three forward-facing ultrasonic sensors (positioned at 45 degrees, straight ahead, and -45 degrees), as well as a 6-DOF accelerometer/gyro and wheel encoders, and I'd like to use this to play around with a "toy" SLAM implementation. I don't want to waste money on a Kinect, much less a commercial laser rangefinder, so methods that require high-density laser measurements aren't applicable.
|
How can we achieve this kind of rotation to enable maximum capture of solar rays during the day?
|
I am new to iRobot. I am trying to write a program to control the movement of the Create 2. After glancing through existing projects, I find most of them are based on sending commands to the Roomba through a cable.
Is there any way to embed the code on the robot itself and let the Roomba behave accordingly? If there is no such method, which kind of API tool do you think is easiest for a beginner?
|
I am working with an iRobot Create 2 and I work with others around me. Whenever I turn the robot on, send it an OI reset command, etc., it makes its various beeps and noises. I would like to not have this happen since I find it a little annoying and I'm sure those who have to work around me would like to have things quiet so they can concentrate on their work. Is there a way to accomplish turning off the beeps (while still being able to easily re-enable them), or am I out of luck?
|
I am starting to assemble a quadrotor from scratch.
Currently, I have this:
Structure;
an IMU (accelerometer, gyro, compass);
4 ESCs and DC motors;
4 propellers;
Raspberry Pi to control the system, and;
LiPo battery.
I have calibrated the ESCs and the four motors are already working and ready.
But now I am stuck.
I guess the next step is to dive deeply into the control system, but I am not sure where to begin. I read some articles about control using PIDs, but I don't know how many I should use, or whether I need to model the quadrotor first in order to compute its kinematics and dynamics on the RPi.
Sorry if the question is too basic!
More details
The structure is from a kit. All I have now is the ESCs calibrated, although I do not have their documentation to adjust the cut-off voltage for the LiPo battery. I have run tests with some Python code I found to generate PWM outputs for the motors and to use the I2C bus to communicate with the IMU.
One of my problems now is that I need the RPIO library for PWM and the quick2wire-python-api to work with the I2C libraries from MIT to control my IMU, but as far as I know RPIO works with Python 2 and quick2wire works with Python 3, so I don't know how to manage this.
So actually, I have no code yet to control the four motors in parallel; I only have test code to drive them separately and together with the IMU.
About the IMU, I am still learning how to work with it and how to use the MIT library. The unit includes these sensors:
ADXL345
HMC5883L-FDS
ITG3205
You can see a picture of the quadrotor below,
So as I said before, I would like to know how to approach the control system and how it is implemented on the Raspberry Pi, and then start working on the Python code that ties together the motors, the IMU and the control.
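As a rough picture of what the control system on a hobby quad usually looks like (this is the common cascaded-PID structure; the gains and rates below are illustrative placeholders, not values tuned for this particular frame): an outer loop turns desired roll/pitch angles into desired angular rates, an inner loop turns rate error into torque commands, and a mixer maps throttle plus the three torque commands onto the four motors. A minimal Python sketch of one axis plus the mixer:
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        self.i += err * dt
        d = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.i + self.kd * d

# illustrative gains only; they must be tuned on the actual frame
angle_pid_roll = PID(4.0, 0.0, 0.0)     # outer loop: angle error -> desired roll rate
rate_pid_roll = PID(0.07, 0.01, 0.002)  # inner loop: rate error -> roll torque command

def mix(throttle, roll_cmd, pitch_cmd, yaw_cmd):
    # X configuration: each motor gets the base throttle plus/minus the torque commands
    m1 = throttle + roll_cmd + pitch_cmd - yaw_cmd
    m2 = throttle - roll_cmd + pitch_cmd + yaw_cmd
    m3 = throttle - roll_cmd - pitch_cmd - yaw_cmd
    m4 = throttle + roll_cmd - pitch_cmd + yaw_cmd
    return [max(0.0, min(1.0, m)) for m in (m1, m2, m3, m4)]

# Inside the main loop, running at a fixed rate (e.g. 200-400 Hz):
#   roll, gyro_x = attitude estimate from IMU fusion (e.g. a complementary filter)
#   rate_sp  = angle_pid_roll.update(roll_setpoint - roll, dt)
#   roll_cmd = rate_pid_roll.update(rate_sp - gyro_x, dt)
#   pwm = mix(throttle, roll_cmd, pitch_cmd, yaw_cmd)   # then send pwm to the ESCs
One PID pair like this per axis (roll, pitch, yaw, where yaw often uses only the rate loop) is the usual answer to "how many PIDs", and a full dynamic model is not strictly required to get it flying, although a model helps with tuning.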
|
I am working on a robotic application, and I want to control the torque (or current) of brushless DC motors. There are many BLDC speed controllers but I could not find anything related to torque or current.
Instead of continuously spinning, the motor is actuating a robotic joint, which means I need to control the torque at steady-state, or low-speed, finite rotation.
I am looking for a low-cost, low weight solution, similar to what Texas Instruments DRV8833C Dual
H-Bridge Motor Drivers does for brushed DC motors.
|
I am a computer science student and I have no background in robotics.
In my project, I am trying to find controllers for modular robots that make them perform specific tasks, using evolutionary techniques. For the moment I am doing this in a simulator, but if I want to make physical robots I have to know a priori which components to add to the robot and where to place them, especially if the robot's modules are small (cubes of 5x5x5 cm).
So my questions are:
What are the must-have components for making a physical robot (Arduino, batteries, sensors, ...)?
For a small robot, how many batteries do I need?
If modules have to communicate over Wi-Fi, do I have to put a Wi-Fi card in each module?
I want to add an IMU. Is its position important? I mean, do I have to put it in the middle of the robot?
Thank you very much.
|
I would like to put a train on a track and control its movement with high precision left and right using a wireless controller.
What is the best way to do it?
|
Can anyone recommend an IR distance sensor that works on black surfaces? I'm looking for something to use as a "cliff" sensor, to help a small mobile robot avoid falling down stairs or off a table, and I thought the Sharp GP2Y0D805Z0F would work. However, after testing it, I found any matte black surface does not register with the sensor, meaning the sensor would falsely report a dark carpet as a dropoff.
Sharp has some other models that might better handle this, but they're all much larger and more expensive. What type of sensor is good at detecting ledges and other dropoffs, but is small and inexpensive and works with a wide range of surfaces?
|
I found CAD files for the Create on the ROS TurtleBot download page (.zip),
and shells on the gazebo sim page.
Any ideas where the files for the Create 2 could be found?
|
I'm trying to make artificial muscles using nylon fishing lines (see http://io9.com/scientists-just-created-some-of-the-most-powerful-muscl-1526957560 and http://writerofminds.blogspot.com.ar/2014/03/homemade-artificial-muscles-from.html)
So far, I've produced a nicely coiled piece of nylon fishing line, but I'm a little confused about how to heat it electrically.
I've seen most people say they wrap the muscle in copper wire or similar, pass current through the wire, and the muscle actuates from the heat dissipated by the wire's resistance.
I have two questions regarding the heating:
1) Isn't copper wire resistance extremely low, so that it generates very little heat? What metal should I use?
2) What circuit should I build to heat the wire (and to control the heating)? Most examples just "attach a battery" to the wire, but as far as I know that simply short-circuits the battery and heats the wire very inefficiently (it may also damage the battery, and it could even be dangerous). So what's a safe and efficient way to produce the heat necessary to make the nylon muscle react? (I've read 150 degrees Celsius; could that be correct?) For example with an Arduino, or a simple circuit on a breadboard?
thanks a lot!
|
I want to build an automatic sliding window shutter and need help
with part selection and dimensioning.
Some assumptions:
window width 1.4 m
sliding shutter weight 25 kg
max speed 0.07 m/s
max acceleration 0.035 m/s^2
pulley diameter 0.04m.
Leaving out friction I need a motor with about 0.02 Nm of torque and a rated speed of 33 rpm.
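A quick check of those figures, using only the numbers listed above and still ignoring friction (plain F = m*a and belt-to-pulley arithmetic):
import math

m = 25.0         # shutter mass, kg
a = 0.035        # max acceleration, m/s^2
v = 0.07         # max speed, m/s
d_pulley = 0.04  # pulley diameter, m

F = m * a                              # force needed to accelerate the shutter
torque = F * d_pulley / 2              # ~0.0175 Nm, i.e. about 0.02 Nm
rpm = v / (math.pi * d_pulley) * 60    # ~33 rpm at the pulley

print(torque, rpm)
In practice the friction of a 25 kg sliding shutter will likely dominate this acceleration torque, so the motor sizing should be rechecked once a friction estimate is available.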
What I would like to use:
motor controller with soft-start and jam protection,
dc motor 24V,
Pulleys and timing belts.
Would you suggest other components or a different setup?
How do I connect motor and pulleys (clamping set?)?
Do I need additional bearings because of the radial load?
M=P
M=B=P
M=P=B
M=B=P=B
(M motor, P pulley, B bearing, = shaft)
If so I have to extend the motor shaft. What would I use for that (clamp collars, couplings?)?
What width do I need for the belts? Which belt profile (T, AT, HDT) should I use?
Update
The construction I am aiming for resembles the one which can be seen on page 6 (pdf numbering) here.
|
I have a 3D point in space with its XYZ coordinates in some frame A. I need to calculate the new XYZ coordinates, given the angular velocities about each axis at that instant of time, expressed in frame A.
I was referring to my notes, but I'm a little confused. This is what my notes say:
https://i.stack.imgur.com/hoREn.jpg
As you can see, I can calculate the angular velocity vector w given my angular velocities. But I'm not sure how this translates into calculating my new XYZ position. How can I calculate the RPY values this equation seems to need from my XYZ, and how can I calculate my new position from there?
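One way to sidestep the RPY bookkeeping entirely: for a pure rotation about the frame's origin, the point's velocity is $\dot p = \omega \times p$, and over a time step $\Delta t$ the new position is obtained by rotating $p$ about the axis $\omega$ by the angle $\|\omega\|\Delta t$ (the exponential map). A small sketch, assuming $\omega$ and $p$ are both expressed in frame A, the origin of A is the centre of rotation, and $\omega$ is constant over the step (the numbers are just example values):
import numpy as np
from scipy.spatial.transform import Rotation as R

p = np.array([1.0, 0.5, 0.2])      # point expressed in frame A
omega = np.array([0.0, 0.0, 0.3])  # angular velocity about frame A's axes, rad/s
dt = 0.01                          # time step, s

p_dot = np.cross(omega, p)                   # instantaneous velocity of the point
p_new = R.from_rotvec(omega * dt).apply(p)   # position after dt

print(p_dot, p_new)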
|
I'm a researcher in a lab that's starting work on some larger humanoid/quadruped robots as well as a quadcopter. Currently, we have several power supplies that have a max rating of 30V/30A and our modified quadcopter easily maxes out the current limit with only half of its propellers running. It seems like most power supplies are meant for small electronics work and have fairly low current limits. I think that I want to look for power supplies that are able to provide between 24-48V and higher than 30A for an extended period of time.
1.) Is this unreasonable or just expensive?
2.) Do most labs just connect PSUs in series to get higher voltages?
Thanks for the input.
|
I have a preliminary design for a legged robot that uses compliant elements in the legs and in parallel with the motors for energy recovery during impact as well as a pair of flywheels on the front and back that will oscillate back and forth to generate angular momentum. I'd like to create a dynamic simulation of this robot in order to be able to test a few control strategies before I build a real model. What simulation package should I be using and why?
I have heard good things about MSC Adams, namely that it is slow to learn, but has a lot of capability, including integration with matlab and simulink. I have also heard about the simmechanics toolbox in matlab, which would be nice to use since I already am decent with CAD and know the matlab language. I am not yet familiar with simulink, but have used Labview before.
|
Is kinematic decoupling of a 5DOF revolute serial manipulator also valid?
The three last joints is a spherical joint. Most literatures only talks about decoupling of 6DOF manipulators.
Thanks in advance,
Oswald
|
I have a motor with a stall current of up to 36A. I also have a motor controller which has a peak current rating of 30A. Is there any way I could reduce the stall current or otherwise protect the motor controller?
I realize the "right" solution is to just buy a better motor controller, but we're a bit low on funds right now.
I thought of putting a resistor in series with the motor and came up with a value of 150mΩ, which would reduce the maximum current draw to 25A (given the 12V/36A=330mΩ maximum impedance of the motor). Is there any downside to doing this? Would I be harming the performance of the motor beyond reducing the stall torque?
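As a quick sanity check on those numbers (plain Ohm's-law arithmetic using the 12 V supply and the winding resistance implied by the 36 A stall current):
V = 12.0           # supply voltage
R_motor = V / 36   # winding resistance implied by the stall current, ~0.33 ohm
R_series = 0.15    # proposed series resistor

I_stall = V / (R_motor + R_series)   # new worst-case (stall) current
P_resistor = I_stall**2 * R_series   # heat the resistor must handle at stall
V_lost = I_stall * R_series          # voltage dropped across the resistor at stall

print(I_stall, P_resistor, V_lost)   # roughly 25 A, 93 W, 3.7 V
The roughly 90 W stall-case dissipation is why a series resistor of this size needs to be a substantial power resistor, and the few volts dropped across it under load are the performance cost the question is asking about.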
|
I'm reading from Astrom & Murray (2008)'s Feedback Systems: An introduction for scientists and engineers about the difference between feedback and feedforward. The book states:
Feedback is reactive: there must be an error before corrective actions are taken. However, in some circumstances, it is possible to measure a disturbance before the disturbance has influenced the system. The effect of the disturbance is thus reduced by measuring it and generating a control signal that counteracts it. This way of controlling a system is called feedforward.
The passage makes it seem that feedback is reactive, while feedforward is not. I argue that because feedforward control still uses sensor values to produce a control signal, it is still reactive to the conditions that the system finds itself in. So, how can feedforward control possibly be any different from feedback if both are forms of reactive control? What really separates the two from each other?
An illustrative example of the difference between the two would be very helpful.
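One way to make the distinction concrete: both controllers read sensors, but the feedback path reacts to the error in the controlled variable (so the disturbance must already have moved the output), while the feedforward path reacts to a measurement of the disturbance itself, before the output has changed. A toy sketch, with an illustrative first-order "tank level" plant and made-up gains, not an example from the book:
# Keep a tank level at a setpoint while an outflow disturbance changes.
# Feedback acts on the level error; feedforward acts on the measured outflow directly.
dt = 0.1
level = 1.0
setpoint = 1.0
kp = 2.0                                  # feedback gain (illustrative)

for k in range(200):
    outflow = 0.5 if k > 50 else 0.2      # measurable disturbance steps at k = 50

    error = setpoint - level              # feedback needs an error before it can act
    u_fb = kp * error

    u_ff = outflow                        # feedforward cancels the disturbance as measured,
                                          # without waiting for any level error

    inflow = u_fb + u_ff                  # try inflow = u_fb alone to see the level dip
    level += (inflow - outflow) * dt

print(round(level, 3))
With the feedforward term included, the level barely moves when the outflow steps; with feedback alone, the level must first drop before the controller responds, which is the "reactive" behaviour the book refers to. In practice feedforward is only as good as the disturbance measurement and the plant model, which is why it is usually combined with feedback rather than used on its own.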
|
The title pretty much says it all. I'm on a team that is currently building a robotic arm for the capstone project of my engineering degree; our design is similar to the Dobot (5 degrees of freedom). We purchased our 6 servomotors, and each one requires 2 A at 6 V.
From my preliminary research, I haven't been able to find a power source that could satisfy this. We'd rather not purchase six individual AC/DC power supplies, one for each servo, and we've heard that these can introduce problems, as they aren't necessarily voltage-regulated. Another suggestion we've received is to buy a computer power supply and modify it to output the voltage and amperage we need. This raises some concerns, since the professor running the course might find it dangerous.
We'd like some input into how we can power our servos effectively, without going overboard on costs (we are students, after all).
Thanks!
|
I'm currently in a (risky) project that involves me building the fastest quad I can afford.
I'm trying to get something close to this extremely fast warpquad
After reading a lot about quadcopters, as far as I know I can buy all of this and it should fit together and fly without any problems.
Motors: Multistar Elite 2306-2150KV
ESC: Afro Race Spec Mini 20Amp
Quanum neon 250 carbon racing frame(I love how it looks)
6Inch Props
CC3D flight controller
4S 1400mah 40-80C Battery
Any 6ch radio
My questions are:
Am I wrong or missing something? I have only read about it (I think this is a common build for a racing quad).
Will it overheat (with bad consequences) if I let it drain the full battery at 100% throttle?
Will it fly for at least 4 minutes under the previous conditions?
Should I get a higher C-rating battery?
As I can't find better motors of that size, is the only way to improve its speed to use a 6S battery? And what would happen if I did?
Should I use the 6-inch props or 4-inch? I know 4-inch props should allow faster RPM changes, but will it be noticeable at these sizes?
And in general, any tips to make it faster will be welcome.
Thanks.
|
I am working with a Create 2 and I am executing a simple sequence like (in pseudocode):
create a serial connection from the MacBook to the Create
start the OI by sending the 128 opcode
send a pause-stream command (just to be safe)
initiate the data streaming with ids: [29, 13]
every 0.5 seconds for 15 seconds:
    poll the streamed sensor data and print it
send a pause-stream command before shutdown
send a 128 to put the robot in "passive mode" (I have also tried 173)
close the serial connection
The outcome when I run the above program repeatedly is that it works the first time: I see sensor data (which seems not to change or be reactive) printing to the screen, but on subsequent runs no serial data can be read and the program crashes (it crashes because I throw an exception; I want to get this problem ironed out before getting too far along with other things). If I unplug and replug my USB cable from my MacBook, the program will work for another run, and then fall back into the faulty behavior.
I do not experience this issue with other things like driving the robot, I am able to run programs of similar simplicity repeatedly. If I mix driving and sensor streaming, the driving works from program run to program run, but the data streaming crashes the program on the subsequent runs.
I have noticed that if I want to query a single sensor, I need to pause the stream to get the query response to come through on the serial port, and then resume it. That is why I am so inclined to pause/restart the stream.
Am I doing something wrong, like pausing the stream too often? Are there other things I need to take care of when starting/stopping the stream? Any help would be appreciated!
EDIT:
I should note that I am using Python and pyserial. I should also note, for future readers, that the iRobot pushes its streamed data to the laptop every 15 ms, where it sits in a buffer until a call to serial.read() or serial.flushInput(). This is why it seemed that my sensor values weren't updating when I read/polled every half second: I was reading old values while the current ones were still buried at the back of the buffer. I worked around this issue by flushing the buffer and reading the next data to come in.
EDIT 2:
Sometimes the above workaround fails, so if I detect the failure, I pause the stream, re-initialize the stream, and read the fresh data coming in. This seems to work pretty well. It also seems to have solved the issue that I originally asked the question about. I still don't know exactly why it works, so I will still accept @Jonathan 's answer since I think it is good practice and has not introduced new issues, but has at least added the benefit of the robot letting me know that it has started/exited by sounding tones.
|
Given two robot arms with TCP (Tool Center Point) coordinates in the world frame is:
$X_1 = [1, 1, 1, \pi/2, \pi/2, -\pi/2]$
and
$X_2 = [2, 1, 1, 0, -\pi/2, 0]$
The base of the robots is at:
$Base_{Rob1} = [0, 0, 0, 0, 0, 0]$
$Base_{Rob2} = [1, 0, 0, 0, 0, 0]$
(The coordinates are expressed as successive transformations, X-translation, Y-translation, Z-translation, X-rotation, Y-rotation, Z-rotation. None of the joint axes are capable or continuous rotations.)
How many degrees does the TCP of robot 2 have to rotate to have the same orientation as the TCP of robot one?
Is the calculation
$\sqrt{(\pi/2 - 0)^2 + (\pi/2 - (-\pi/2))^2 + (-\pi/2 - 0)^2}$
wrong? If yes, please specify why.
UPDATED:
Is the relative orientation of the two robots $[\pi/2, \pi/2, -\pi/2] - [0, -\pi/2, 0] = [\pi/2, \pi, -\pi/2]$, but the Euclidean distance cannot be applied to calculate the angular distance?
In other words:
While programming the robot, with the tool frame selected for motion, to match the orientation of the other one I would have to issue a move_rel($0, 0, 0, \pi/2, \pi, -\pi/2$) command, but the executed motion would have a magnitude of $\pi$?
While programming the robot, with the world frame selected for motion, to match the orientation of the other one I would have to issue a move_rel($0, 0, 0, \pi, 0, 0$) command, and the executed motion would have a magnitude of $\pi$?
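For what it's worth, the single rotation angle between two orientations is normally computed from the relative rotation itself rather than by subtracting Euler-angle triples. A small sketch using SciPy, under the assumption that the six-tuples above are fixed-axis (extrinsic) X, Y, Z rotations; if the robots actually use intrinsic (successive, moving-axis) rotations, the 'xyz' string must be changed to 'XYZ':
import numpy as np
from scipy.spatial.transform import Rotation as R

# orientation parts of the two TCP poses
r1 = R.from_euler('xyz', [np.pi/2, np.pi/2, -np.pi/2])   # TCP of robot 1
r2 = R.from_euler('xyz', [0.0, -np.pi/2, 0.0])           # TCP of robot 2

r_rel = r2.inv() * r1         # rotation that takes TCP 2's orientation to TCP 1's
angle = r_rel.magnitude()     # equivalent single-axis rotation angle, in radians

print(np.degrees(angle))
The magnitude of this relative rotation is the angular distance asked about; componentwise subtraction of the Euler angles does not give it, because the three rotations do not combine additively.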
|
I have a robotic arm and a camera in an eye-in-hand configuration. I know that there is a relationship between the body velocity $V$ of the camera and the velocities $\dot s$ in the image feature space, namely $\dot s = L(z,s) V$, where $L$ is the interaction matrix. I was wondering whether one can find a mapping (a so-called diffeomorphism) that connects the image feature vector $s$ with the camera pose $X$. All I was able to find is that it is possible to do that in a structured environment, though I don't fully understand what that means.
|
I am not quite sure I understand the difference between these two concepts, and why there is a difference between them.
Yesterday I was trying to compute the Jacobian needed for inverse kinematics, but the usual input I provide to my transformation in the forward kinematics, namely the points P and xyz, could not be applied: the transformation matrix was given a state vector Q, from which the tool position could be retrieved...
I am not sure I understand the concept very well, and I can't seem to Google these topics, as the results usually include terminology that makes the concepts look too simple (angle calculation and so on...).
I know it might be a lot to ask, but what form of input is needed to compute the Jacobian? And what is the difference between forward and inverse kinematics, and why is there one?
|
Note before I start: I have not actually put anything together yet, i'm still just planning, so any changes that require a shape change or anything like that are accepted.
I'm working on making a walking robot with my Arduino and 3D printing all the pieces I need. It will have four legs, but since it needs to be mobile, I didn't want the power supply to be huge. I've decided it would be best if I could get each leg to require only one servo, at 5V each. I know how to get the leg to move back and forth, but I want to be able to lift it in between: before it brings the leg forward, it needs to lift the foot.
The only thing I can think of is the rotation maybe locking some sort of gear.
When a motor begins rotating clockwise, how can I have it power a short motion that moves an object toward itself, and when it begins moving counterclockwise, power the same object a short distance away from itself?
The servos I am using have 180 degrees of rotation, so they don't go all the way around.
Also, I don't know if it will be important or not, but because of the peculiar construction of the foot, it would be best if it were lifted straight up rather than up at an angle, though that isn't 100% necessary.
Are there any robots that already do this? If so, I'm unaware of them. Thanks for your time.
|
I understand that most self-driving car solutions are based on lidar and video SLAM.
But what about robots intended for indoor use, like robot vacuums and industrial AGVs? I see that lidar is used for the iRobot, and their latest version uses VSLAM. AGVs also seem to use lidar.
|
OK, let's say we have a tech request for a robotic system for peeling potatoes, and a design is as follows:
One "arm" for picking up a potato and holding it, rotating when needed.
Another "arm" for holding a knife-like something which will peel the skin from the potato.
Arm picks up a potato from first container, holds it over trash bin while peeling, then puts peeled potato in second container.
For simplicity a human rinses peeled potatoes, no need to build automatic system for it.
In the first iteration even 100% spherical peeled potatoes are OK, but ideally it would be good to peel off as little as possible, to minimize waste.
Question:
I know that we're very, very far away from building such a system. Nevertheless, what are the purely technical difficulties that need to be solved for such a robot to be built?
EDIT
Let's assume we stick to this design and not invent something radically different, like solving the problem with chemistry by dissolving the skin with something. I know that the problem of peeling the potatoes is currently being solved by other means - mainly by applying friction and a lot of water.
This question is not about it. I am asking specifically about the problems to be solved with the two-arms setup using the humanlike approach to peeling.
|
Hi,
Here I have added two options for connecting an encoder to the shaft.
The motor, gearhead and shaft are connected using couplings. But where is the best place for the encoder (to avoid backlash from the coupling and gearhead)?
Is a hollow-shaft (through-bore) encoder available? (See option 1.)
I don't know which one will be best for this kind of system.
Which arrangement is most widely used?
Option 3 is: the encoder is placed before the motor.
|
I've calculated a DH parameter matrix, and I know the top-left 3x3 block is the rotation matrix. The DH parameter matrix I'm using is as below, from https://en.wikipedia.org/wiki/Denavit%E2%80
Above is what I'm using. From what I understand, I'm just rotating around the Z-axis and then the X-axis, but most explanations of extracting Euler angles from rotation matrices only deal with all 3 rotations. Does anyone know the equations? I'd be very thankful for any help.
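For a rotation of the DH form $R = R_z(\theta)R_x(\alpha)$ (rotate about Z by $\theta$, then about the new X by $\alpha$), the matrix has a zero in its bottom-left entry and the two angles can be read back with atan2. A small sketch, assuming that structure (it does not cover a general three-angle decomposition):
import numpy as np

def rz_rx(theta, alpha):
    # rotation part of a standard DH transform: Rz(theta) @ Rx(alpha)
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa],
                     [st,  ct*ca, -ct*sa],
                     [0.0,    sa,     ca]])

def angles_from_dh_rotation(R):
    theta = np.arctan2(R[1, 0], R[0, 0])   # first column is (cos(theta), sin(theta), 0)
    alpha = np.arctan2(R[2, 1], R[2, 2])   # last row is (0, sin(alpha), cos(alpha))
    return theta, alpha

R = rz_rx(0.7, -1.2)
print(angles_from_dh_rotation(R))          # recovers (0.7, -1.2)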
|
My workplace has an older Fanuc robot (Arc Mate 100-iBe, RJ3iB controller, Fanuc AWE2 teach pendant with Powerwave 355M), and the previous operator/programmer has left. I have taken over his job and can't find out how to turn down the voltage and wire feed speed, which occasionally burns through parts. I tried manually entering a voltage and wire feed speed, but it seems it will only accept the existing weld schedules 1-8, and if I change those it will affect other programs using them. I just need someone to please point me in the right direction.
P.S. Typed on a phone, sorry if it's sloppy.
|
I'm working on a robotic hand and I would like to simulate different joints and tendon insertion points before starting to actually build it.
I've been googling and found things like SolidWorks and Autodesk, which seem very costly for a hobbyist like me, and I also don't quite understand their capabilities (just CAD? 3D modelling but not simulation? simulation but not interactive?). I've also found things like FreeCAD, which seems somewhat abandoned, or suited only to CAD and not to simulation.
Another requirement is interactivity of the simulation, not just rendering.
I don't have a problem with commercial software, but I'm looking for a reasonable cost for a hobbyist, not an engineering company.
Is there software out there that meets all these requirements? Or should I use several programs, each for a specific purpose?
Thanks!
|
I'm building a quadcopter from scratch; the software runs on an STM32F4 microcontroller. The frequency of the main control loop is 400 Hz.
I thought everything was almost finished, but when I mounted everything and started calibrating the PIDs, I ran into a problem.
It was impossible to adjust the PID parameters properly.
So I started testing with lower power (not enough to fly) and managed to adjust the PID for roll fairly quickly, but when I increased the power the control problems came back.
After that I made more measurements.
I didn't test with the blades on, but that is probably even worse, and may be why I cannot calibrate it.
If the problem is due to vibration, how can I fix it?
If something else is causing this symptom, what is it?
Can I solve this through better control and data-fusion algorithms?
Currently I use a complementary filter to fuse the accelerometer and gyro data for roll and pitch.
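For reference, the complementary filter mentioned above is typically a one-line blend per axis; a sketch of the usual form with an illustrative coefficient is below. Vibration is commonly attacked by low-pass filtering the raw accelerometer before this blend, shifting more weight to the gyro, and mechanically damping the IMU, in addition to any changes in the control loop itself.
import math

ALPHA = 0.98     # gyro weight; (1 - ALPHA) goes to the accelerometer (illustrative value)

def complementary_roll(roll_prev, gyro_x, acc_y, acc_z, dt):
    roll_acc = math.atan2(acc_y, acc_z)                    # roll angle implied by gravity
    roll_gyro = roll_prev + gyro_x * dt                    # integrate the gyro rate
    return ALPHA * roll_gyro + (1.0 - ALPHA) * roll_acc    # gyro short-term, accel long-term

# called every control-loop tick, e.g. at 400 Hz:
# roll = complementary_roll(roll, gyro_x_rad_per_s, acc_y, acc_z, 1.0 / 400.0)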
|
I am new to robotics.
I will be controlling DC motors from an Android device through USB.
For this I have selected an L298N motor controller (after watching YouTube videos),
and got some DC motors.
I have no idea how to connect this to the Android device via a USB cable.
Help appreciated.
Ref:
https://www.bananarobotics.com/shop/L298N-Dual-H-Bridge-Motor-Driver
https://youtu.be/XRehsF_9YQ8
PS: All I know is Android programming.
|
I have a small bot (around 4-5 kg, with wheels) which is to be pushed, without contact, by another bot. I plan to do this using a brushless motor and a propeller. I am having problems selecting the right combination. Please help me with these questions:
Should the BLDC be high-KV or low-KV (will I need high RPM or low RPM)?
What is the ideal propeller to use with the motor so that I can create enough thrust to get the 'small' bot moving and keep it in motion?
What other criteria should I keep in mind while selecting?
|
Say I have a motor and I want it to spin at exactly 2042.8878 revolutions per minute, and say I have a very precise sensor that can detect the RPM of the motor to a resolution of 1/1000th of a revolution per minute.
Can I produce a PWM signal which can match the speed to that degree of precision?
What variables in the signal parameters would I have to adjust to get that precision, if it is possible?
Would I have to use additional circuitry between the motor and the driver?
Would I have to design the signal/circuitry around the specific specifications of the motor?
Should I just use a stepper motor?
This is assuming I am using a microcontroller to measure the motor's speed and adjust the signal in real-time to maintain a certain speed.
|
Not sure if I am posting this question in the correct community, as it relates primarily to reinforcement learning. Apologies early on if this is not so.
In reinforcement learning many algorithms exist for 'solving' the cart-pole problem: balancing a mass on the end of a stick that is connected to a cart by a hinge, giving it 1 DoF. There is TD learning, Q-learning and many other on- and off-policy methods. There is also the more recent, model-based policy search method PILCO.
What I am really wondering, I suppose, is more of a physics question: is there a need for active control? Why is it not possible to find the one point for the cart which prevents the mass from moving, even incrementally, left or right as it sits atop the pole? Why does it always 'fall'?
|
I am currently applying path planning to my robotic arm (in Gazebo) and have chosen to use an RRT. In order to detect points of collision, I was thinking of getting a Point Cloud from a Kinect subscriber and feeding it to something like an Octomap to have a collision map I could import into Gazebo. However, there is no Gazebo plugin to import Octomap files and I do not have enough experience to write my own. The next idea would be to instead feed this point cloud to a mesh generator (like Meshlab) and turn that into a URDF, but before starting I'd rather get the input of somebody far more experienced. Is this the right way to go? Keep in mind the environment is static, and the only things moving are the arms. Thank you. Below is just a picture of an octomap.
|
Is it possible to decouple a 5DOF manipulator?
I asked this question earlier and I believe I got the right answers, but I never showed the drawings of the manipulator, and now I'm hesitating while setting up the DH parameters for the forward kinematics. See the drawing depicted here.
|
In order to perform a cyclic task, I need a trajectory planning algorithm. This trajectory should minimize jerk and jounce.
When I search for trajectory planning algorithms, I get many different options, but I haven't found one that satisfies my requirements in terms of which values I can specify. An extra complicating factor is that the algorithm should run online on a system without much computing power, so MPC algorithms are not possible...
The trajectory I am planning is 2D, but it can be reduced to two trajectories of one dimension each. There are no obstacles in the field, just bounds on the field itself (minimum and maximum values for x and y).
Values that I should be able to specify:
Total time needed (it should reach its destination at this specific time)
Starting and end position
Starting and end velocity
Starting and end acceleration
Maximum values for the position.
Ideally, I would also be able to specify bounds for the velocity, acceleration, jerk and jounce, but I am comfortable with just generating the trajectory and then checking whether those values are exceeded.
Which algorithm can do that?
So far I have used fifth-order polynomials, checking the limits on velocity, acceleration, jerk and jounce afterwards, but I cannot set the maximum values for the position, and that is a problem...
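As a concrete version of the fifth-order-polynomial approach just mentioned (a sketch with illustrative boundary values, not a full planner): the six boundary conditions, position, velocity and acceleration at both ends, determine the quintic uniquely, and the profile can then be sampled to check the derivatives against limits.
import numpy as np

def quintic_coeffs(T, p0, v0, a0, pT, vT, aT):
    # solve for c in p(t) = c0 + c1 t + ... + c5 t^5 from the six boundary conditions
    A = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ])
    b = np.array([p0, v0, a0, pT, vT, aT])
    return np.linalg.solve(A, b)

c = quintic_coeffs(T=2.0, p0=0.0, v0=0.0, a0=0.0, pT=1.0, vT=0.0, aT=0.0)
t = np.linspace(0.0, 2.0, 401)
pos = np.polyval(c[::-1], t)                     # np.polyval wants the highest power first
vel = np.polyval(np.polyder(c[::-1]), t)
acc = np.polyval(np.polyder(c[::-1], 2), t)
jerk = np.polyval(np.polyder(c[::-1], 3), t)
print(pos.max(), abs(vel).max(), abs(acc).max(), abs(jerk).max())  # compare against limits
This still only checks the position bound after the fact; enforcing a hard position limit generally means time-scaling the segment or moving to a constrained formulation, which is exactly the gap described above.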
Thank you in advance!
|
I'm building a line-following robot. I have made different chassis designs, and the main prototype I'm using has a rectangular base. The motors are placed on one side; on the other side of the rectangle a caster wheel is placed in the middle. Look at the following image.
By varying the value of the distance, I have seen that the stability of the robot varies rapidly.
I'm driving the robot using PID. I have seen that for some chassis designs it is very hard (sometimes impossible) to find correct constant values, and for some chassis it is very easy. That is what I mean by the word "stability". I have a feeling that the robot dimensions, the distance value and that stability are related.
Is there an equation or something that can be used to estimate the value of the distance when the width of the robot is known?
Other than that, is there a relationship between robot weight and wheel diameter, or between the robot dimensions and the diameter?
Thanks for the attention!!
|
I'm really interested in robotics. I'm not really a robot expert, as I have no experience creating one; I just like them. Anyway, I keep wondering whether it's possible to build a robot that can transfer itself to different devices and still function. I mean, could the robot transfer itself (the data making it function, or whatever you call it) to your laptop so you can still use it while you are away? Does creating one require advanced computing and knowledge? Is it a kind of artificial intelligence? When I think of this I always think of J.A.R.V.I.S., since he can go into Stark's suit and communicate with him.
Translated into robotics terminology by a roboticist:
Is it possible to create software for controlling robot hardware that can transfer itself to different devices and still function? Could it transfer itself to your laptop and collaborate with you using information it gathered while it was in its robot body?
Does creating software like this require advanced knowledge and computing? Is software like this considered to be artificial intelligence?
I am serious about this question; sorry to bother you, or if anyone is annoyed.
|
I see there are things like glass and mirror in Autodesk Inventor Professional 2016 but is there a possibility to have Venetian mirror? So that from one side it would look like a mirror and from the other side it would look like a transparent glass?
|
I'm currently working on a humanoid robot. I've solved the forward and inverse kinematic relations of the robot, and they turn out to be fine. Now I want to move on to walking. I've seen tons of algorithms and research papers, but none of them make the idea clear. I understand the concept of ZMP and what the method tries to do, but I simply can't get my head around all the details required to implement it on a real robot. Do I have to plan my gait and generate the trajectories beforehand, solve the joint angles, store them somewhere, and feed them to the motors in real time? Or do I generate everything at run time (a bad idea IMO)? Is there a step-by-step procedure I can follow to get the job done? Or do I have to crawl my way through those research papers, which never make sense (at least to me)?
|
I have a task that involves implementing robot behaviour that follows a wall and avoids obstacles along its path. The robot must stay at a desired distance from the wall but also stick to it, so it should not lose sight of it. The robot senses its surroundings with an ultrasonic sensor that oscillates from left to right, filling a small array (10 values) with detected distances (one every 10 degrees). From these readings I would like to calculate a heading vector that results in a robot path similar to the one shown in the bottom picture:
Black(walls), red(obstacles), blue(robot), green(desired path)
|
I'm currently developing a 6-DOF robotic arm. The arm vibrates when it stops moving, and I want to reduce this. Another issue is that the arm is quite heavy (because there is a projector inside it, lol), so I have to use springs between the joints. So, can anyone tell me: 1. how to select the springs? My supervisor told me that proper spring selection can reduce vibration. 2. how do I tune the PID parameters? All the joints are Dynamixel servos and their PID parameters are tunable. I have read an article about tuning a single servo; how do I tune these parameters for the whole arm?
|
I have a motor with an encoder. When I set the speed of the motor, it should change its speed so that the encoder readings per second fit an equation $y = ax^2 + bx + c$, where x is the speed value given to the motor and y is the encoder reading per second that should result.
The encoder reading is counted every 1 ms, and if it is not equal to the value that should be obtained from the motor (calculated using the equation), the PWM input to the motor should be varied in order to get the desired encoder output.
I want to control this value using a PID controller, but I'm confused about how to write the equations. Any help would be appreciated.
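For reference, the discrete PID that usually sits in a loop like this acts on the error between the target counts per interval (from the $y = ax^2 + bx + c$ mapping) and the measured counts, and outputs a PWM duty. A minimal sketch, with the curve coefficients and gains as illustrative placeholders to be replaced by measured and tuned values:
a, b, c = 0.0, 2.0, 0.0       # placeholder curve: target counts = a*x^2 + b*x + c
KP, KI, KD = 0.8, 0.2, 0.01   # illustrative gains, to be tuned
DT = 0.001                    # 1 ms control period

integral = 0.0
prev_error = 0.0

def pid_step(speed_setting, measured_counts):
    global integral, prev_error
    target_counts = a * speed_setting**2 + b * speed_setting + c
    error = target_counts - measured_counts

    integral += error * DT
    derivative = (error - prev_error) / DT
    prev_error = error

    pwm = KP * error + KI * integral + KD * derivative
    return max(0.0, min(255.0, pwm))   # clamp to the PWM range of the driver
Called once per 1 ms tick with the latest encoder count, the proportional, integral and derivative terms are the whole controller; the remaining work is choosing the gains.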
|
I'm shopping for my first Arduino with a specific goal in mind. I need to attach 3 standard servo motors, an ArduCam Mini 2MP camera, and several LEDs, and I'm trying to figure out the power requirements. I assume that USB power won't be sufficient. I'm looking at 12V AC-to-DC outlet adapters and I noticed that the current ratings vary from ~500 mA to 5 A. I don't want to use batteries.
What would you recommend as the minimum amperage for this setup? Is there a maximum amperage for Arduino boards? I don't want to plug it in and burn it out. If I plug in both the USB cable and a power adapter at the same time, is power drawn from both cables?
Thanks!
|
I'd like to get RGB and depth data from a Kinect, and I found a little tutorial here: http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython. It's fine, but what I'd like is to be able to get the data on demand, and not whenever the callback is triggered, assuming I won't try to get the data faster than it can be made available. I'd appreciate any help; please go easy on the ROS jargon, I'm still learning... Thanks.
|
Good day to all.
First of all, I'd like to clarify that the intention of this question is not to solve the localization problem that is so popular in robotics. The purpose is to gather feedback on how we can actually measure the speed of the robot with an external setup, so that the speed of the robot detected by the encoder can be compared with the actual speed detected externally.
I am trying to measure the distance traveled and the speed of the robot, but the problem is that it occasionally experiences slippage, so the encoder is not accurate for this kind of application.
I could mark the distance and measure the time for the robot to reach the specified point, but then I would have to work with a stopwatch and transfer all the data to Excel to be analyzed.
Are there other ways to do it? It would be great if the external setup allowed data to be sent automatically and directly to software like MATLAB. My concern is more on the hardware side. Are there any external setups, sensors or devices that can help achieve this?
Thanks.
|
I have one Sharp sensor and I have to use it to measure the height of a block (6 cm - 12 cm). How can I accomplish this?
It is to be connected to a robot which will move near the box and determine its height.
About GP2Y0A21YK0F:
http://www.sharpsma.com/webfm_send/1489
The robot is like this: https://i.stack.imgur.com/YdKFP.jpg
If possible, please suggest a solution that doesn't require moving the sensor, but any method will do.
|
I have the following problem:
Given 3 points on a surface, I have to align a manipulator end-effector (i.e. a pen) on a Baxter robot normal to that surface.
From the three points I easily get the coordinate frame, as well as the normal vector. My question is now: how can I use those to tell the manipulator its intended orientation?
The Baxter inverse kinematics solver takes an $(x,y,z)$ tuple of Cartesian coordinates for the desired position, as well as an $(x,y,z,w)$ quaternion for the desired orientation. What do I set the orientation to? My feeling would be to just use the normal vector $(n_1,n_2,n_3)$ and a $0$, or do I have to do some calculation?
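To illustrate the kind of calculation I suspect is needed, here is a sketch that builds a quaternion rotating the tool's z-axis onto the (negative) surface normal; the choice of -n as the approach direction and the arbitrary roll about that axis are my assumptions:

import numpy as np

def quaternion_from_normal(n):
    # Returns (x, y, z, w) rotating the tool z-axis onto -n (pen pointing into the surface).
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    z_axis = np.array([0.0, 0.0, 1.0])
    target = -n                         # assumed approach direction
    axis = np.cross(z_axis, target)
    s = np.linalg.norm(axis)
    c = np.dot(z_axis, target)
    if s < 1e-9:                        # aligned or exactly opposite
        return (1.0, 0.0, 0.0, 0.0) if c < 0 else (0.0, 0.0, 0.0, 1.0)
    axis /= s
    half = np.arctan2(s, c) / 2.0
    x, y, z = axis * np.sin(half)
    return (x, y, z, np.cos(half))

print(quaternion_from_normal([0.0, 0.0, 1.0]))   # surface facing up -> pen pointing down

Is something along these lines what is expected, and does Baxter's IK service interpret the quaternion in the base frame?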
|
It's unclear to me how one goes about integrating occupancy grid mapping and Monte Carlo localization to implement SLAM.
Assume mapping is one process, localization is another process, and some motion-generating process called exploration exists. Is it necessary to record all data with sequence numbers or time stamps for coherence?
There's Motion: $U_t$, Map: $M_t$, Estimated State: $X_t$, Measurement: $Z_t$
so..
each Estimated state, $X_t$, is a function of the current motion, $U_t$, current measurement, $Z_t$, and previous map, $M_{t-1}$;
each confidence weight, $w_t$, of estimated state is a function of current measurement, $Z_t$, current estimate state, $X_t$, and previous map, $M_{t-1}$;
then each current map, $M_t$ is a function of current measurement, $Z_t$, current estimated state, $X_t$, and previous map, $M_{t-1}$.
So the question is: is there a proper way of integrating the mapping and localization processes? Is it something you record with timestamps or sequence numbers? Are you supposed to record all data, like full SLAM, and maintain the full history?
How can we verify they are sequenced at the same time, so that one can be referred to as current (e.g. the measurement) and another as previous?
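To pin down what I mean by sequencing, here is the per-timestep loop I have in mind, sketched as a Rao-Blackwellized particle filter with the motion, measurement and map-update models left as stubs:

import random

class Particle:
    def __init__(self):
        self.x = (0.0, 0.0, 0.0)   # pose X_t
        self.w = 1.0               # weight w_t
        self.m = {}                # occupancy grid M_t (sparse stub)

# Stubs standing in for the real models.
def sample_motion(x, u):              return x
def measurement_likelihood(z, x, m):  return 1.0
def update_grid(m, x, z):             return m

def slam_step(particles, u_t, z_t):
    # One timestep: U_t and Z_t carry the same stamp t; each particle keeps its own M_{t-1}.
    for p in particles:
        p.x = sample_motion(p.x, u_t)                  # X_t from X_{t-1}, U_t
        p.w = measurement_likelihood(z_t, p.x, p.m)    # w_t from Z_t, X_t, M_{t-1}
        p.m = update_grid(p.m, p.x, z_t)               # M_t from M_{t-1}, X_t, Z_t
    total = sum(p.w for p in particles)
    return random.choices(particles, weights=[p.w / total for p in particles], k=len(particles))

particles = [Particle() for _ in range(10)]
particles = slam_step(particles, u_t=(0.1, 0.0), z_t=[1.2, 0.8])

Is keeping everything keyed on the same timestep t like this sufficient, or does each particle also need to store its full trajectory and measurement history?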
|
I've implemented a model of a ball-on-plate plant and am controlling it over a network. Below is the open loop output when excited by successive sinusoidal inputs with increasing frequencies. I know that the plant is open loop unstable, and it is cool that this figure so nicely captures the instability.
What I'd like to know is if there is other information that I can glean about the plant from the relationship between the input and the output state.
(The state is clipped at 3.1 units.)
|
What kinds of systems can be used to make a torso-lifting mechanism like the one used by this robot (the black part):
Rack and pinion
lead screw
scissor lift
Can a triple tree help?
What are the pros and cons of each system?
How do they ensure stability ?
And finally, is there a way to draw current when lowering instead of drawing current when lifting ?
|
I am trying to get depth data from a Kinect in a ROS project. It currently looks like this:
To arrive at this, I've done:
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

depth_sub = rospy.Subscriber("/camera/depth/image", Image, depth_cb)
...
def depth_cb(data):
    # Convert the ROS Image to a float32 OpenCV array; normalization is only for display.
    img = bridge.imgmsg_to_cv2(data, "32FC1")
    img = np.array(img, dtype=np.float32)
    img = cv2.normalize(img, img, 0, 1, cv2.NORM_MINMAX)
    cv2.imshow("Depth", img)
    cv2.waitKey(5)
I also launch openni.launch from the openni_launch package, which publishes the depth data.
I also get this weird warning from the node (can be seen in the image):
ComplexWarning: Casting complex values to real discards the imaginary part.
But as I understand it, the data type is an array of 32-bit floats; yet some of the values appear as nan.
I would like a depth image that directly corresponds to an RGB image array of the same size. I will be doing some tracking in the RGB space and using the tracked coordinates (X,Y) from that to index into the depth array. Thanks.
edit:
Turns out /camera/depth/image is published as an array of uint8s, but the actual data is 32-bit floats (which is not documented anywhere; I had to hunt it down in other people's code). Thus an array of 480x640 uint8s, interpreted as 32-bit floats, is effectively "quartered" in the number of data points. That could explain how the image ends up 4 times smaller (and hence accessing data points out of bounds = nan?), but not why there are two of them.
|
I have a robotic system I'm controlling with an Arduino. Is there a heuristic way to determine a proper sampling time for my PID controller? Bear in mind I have some other things to compute in my sketch that take time, but of course a good sampling time is crucial.
Basically I have a distance sensor that needs to detect, at a constant rate, an object that is moving, sometimes slowly and sometimes fast. I don't have a good model of my system, so I can't actually tell its physical frequency.
|
How mechanically robust are LiPo batteries? How much force or acceleration can they maximally withstand before failure? What is their (mechanical) shock resistance?
For some electrical components used in robots, such as IMUs, the datasheets state that they can suffer mechanical failure if accelerated or loaded beyond given values. For IMUs, this is typically somewhere between $2000g$ and $10000g$ (where $1g = 9.81 m/s^2$).
I'm wondering if similar values are known for LiPo batteries, since they are known to be vulnerable components. Is there any known quantification of their claimed vulnerability?
|
Are there any open-source implementations of GPS+IMU sensor fusion (loosely coupled, i.e. using GPS module output and a 9-degree-of-freedom IMU), Kalman-filter based or otherwise?
I did find some open-source implementations of IMU sensor fusion that merge accel/gyro/magnetometer to provide roll-pitch-yaw, but I haven't found anything that also includes GPS data to provide filtered location and speed information.
|
I have an RC car, and there is a program on my computer in which I can code the car to perform movements. I would like to have an application with a visual display that shows the car's path.
Is there available software or code for this? It would save me lots of time.
|
How can one control a combustion engine using a remote control?
In other words, how would you make a combustion-engine car remote controlled?
|
I am preparing for an exam on neural networks. As an example of self-organizing maps, they showed the inverted pendulum problem, where you want to keep the pole vertical:
Now the part which I don't understand:
$$f(\theta) = \alpha \sin(\theta) + \beta \frac{\mathrm{d} \theta}{\mathrm{d} t}$$
Let $x= \theta$, $y=\frac{\mathrm{d} \theta}{\mathrm{d} t}$, $z=f$.
Solution with SOM:
three-dimensional surface in $(x,y,z)$
adapt two-dimensional SOM to surface
Method of control
For a given $(x,y)$, find the neuron $k$ whose weight vector $w_k = [w_{k1}, w_{k2}, w_{k3}]$ has $(w_{k1}, w_{k2})$ closest to $(x,y)$.
$f(\theta)$ is then $w_{k3}$
I guess we use the SOM to learn the function $f$. However, I would like to understand where $f$ comes from / what it means in this model.
|
I want to build robots, and right now I aim to work with Arduino boards.
I know that they are compatible with C and C++, so I was wondering which language is better for robotics in general.
I know how to write Java, and the fact that C++ is object-oriented makes it look like the better choice to me.
Does C have any advantages over C++?
|
I am at the moment trying to compute the transformation matrix for a robot arm that is made of 2 joints (a serial robot arm), with which I am having some issues. L = 3, L1 = L2 = 2, and q = ($q_1$,$q_2$,$q_3$) = $(0, \frac{-\pi}{6}, \frac{\pi}{6})$.
Based on this information I have to compute the forward kinematics and calculate the position of each joint.
The problem here, though, is how do I compute the angles around x, y, z, etc. for the transformation matrix? Using sin, cos, tan is of course possible, but what do their angles correspond to? Which axis does each correspond to?
I tried using @SteveO's answer to compute $P_0^{tool}$ using the method he provided in his answer, but I somehow mess something up, as the value doesn't resemble the answer given in the example.
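For reference, here is the kind of chained-transform computation I am attempting, as a sketch that assumes, purely for illustration, a planar arm with three revolute joints and link lengths L, L1, L2 (I am not sure this matches the intended geometry of the example):

import numpy as np

def joint_transform(theta, length):
    # Homogeneous transform: rotate about z by theta, then translate along the new x by length.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, length * c],
                     [s,  c, 0, length * s],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

L, L1, L2 = 3.0, 2.0, 2.0
q = [0.0, -np.pi / 6, np.pi / 6]

T = np.eye(4)
for theta, length in zip(q, [L, L1, L2]):
    T = T @ joint_transform(theta, length)   # accumulate the chain
    print(T[:3, 3])                          # origin of each successive frame

Is chaining the individual joint transforms like this the right way to get the joint positions, and is that where the angles about x, y, z come in?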
|
Hi I'm using "minImu 9" 9 DOF IMU (gyro, accelerometer and compass) sensor and it gives pitch roll and yaw values with a slope on desktop (no touch, no vibration, steady). Y axis is angle in degree and X axis is time in second. X axis length is 60 seconds. How can fix this?
Pitch
Roll
Yaw
Note1: minIMU code
|
I'm trying to implement a PID controller myself and I have a question about the sum_error in the I term. Here is a short snippet based on PID theory.
void pid()
{
    error = target - current;
    pTerm = Kp * error;
    sum_error = sum_error + error * deltaT;   // integral accumulates over the whole run
    iTerm = Ki * sum_error;
    dTerm = Kd * (error - last_error) / deltaT;
    last_error = error;
    Term = K * (pTerm + iTerm + dTerm);
}
Now, I start my commands:
Phase 1: at t=0 I set target=1.0, and the controller begins to drive the motor toward target=1.0.
Phase 2: then, at t=N, I set target=2.0, and the controller begins to drive the motor toward target=2.0.
My question is: at the beginning of phase 1, error=1.0 and sum_error=0, but after phase 1, sum_error is no longer zero; it's positive. At the beginning of phase 2, error=1.0 again (the same as before), but sum_error is positive, so the iTerm at t=N is much greater than the iTerm at t=0.
That means the response curves in phase 2 and phase 1 are different!
But to the end user, command 1 and command 2 are almost the same, and they should produce the same effort.
Should I set sum_error to zero, or bound it? Can anyone tell me how sum_error is typically handled?
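By "bound it" I mean something like the clamp below (a Python sketch rather than my C code; the limit is arbitrary):

I_MAX = 10.0   # arbitrary anti-windup limit on the integral term

def update_sum_error(sum_error, error, dt):
    # Accumulate the integral but clamp it so it cannot grow without bound.
    sum_error += error * dt
    return max(-I_MAX, min(I_MAX, sum_error))

Is clamping like this the usual practice, or should the integrator be reset whenever the target changes?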
Any comment will be much appreciated!!
Kevin Kuei
|
Why is there a discontinuity in the quaternion representation of my device orientation?
I'm using a SENtral+PNI RM3100+ST LSM330 to track orientation. I performed the following test:
Place the device in the center of a horizontal rotating plate ("lazy susan").
Pause for a few seconds.
Rotate the plate 360° clockwise.
Pause for a few seconds.
Rotate the plate 360° clockwise again.
I got this output, which appears discontinuous at sample #1288-1289.
Sample #1288 has (Qx,Qy,Qz,Qw) = (0.5837, 0.8038, 0.0931, 0.0675), but sample #1289 has (Qx,Qy,Qz,Qw) = (0.7079, -0.6969, -0.0807, 0.0818).
Plugging in the formulas on page 32 of this document, this corresponds to a change in orientation from (Heading, Pitch, Roll) = (108°, 0°, 142°) to (Heading, Pitch, Roll) = (-89°, 0°, 83°).
The graph of (Heading, Pitch, Roll) is also not continuous mod 90°.
Does this output make sense? I did not expect a discontinuity in the first plot, since the unit quaternions are a covering space of SO(3). Is there a hardware problem, or am I interpreting the data incorrectly?
Edit: The sensor code is in central.c and main.c. It is read with this Python script.
|
I found this website, http://robotbasic.org/, which describes a language used for programming things related to robotics. I want to know whether it's worth investing any time or energy into, compared to other languages, before I just wipe it from my browser bookmarks for good. Nowadays, are there better languages and methods for going about the same things it talks about?
I mean, the site looks pretty old, like something from the late '90s or pre-2010, plus I have never heard of it anywhere except on this site, so I wonder if it's still relevant, if it ever was.
|
My professor gave us an assignment in which we have to find the cubic trajectory equations for a 3-DOF manipulator. The end effector starts at rest at A(1.5, 1.5, 1) and moves to and stops at B(1, 1, 2) in 10 seconds. How would I go about this? Would I use the Jacobian matrix, or would I use path planning and the coefficient matrix to solve the problem? I'm assuming the coefficient matrix, but I am not given the original position in angle form, and I was only taught how to use path planning when the original angles are given.
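For reference, the coefficient-matrix form I was taught for a single cubic segment, written here for one coordinate $p$ and assuming the motion starts and ends at rest over $t_f = 10$ s, is
$$
p(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3, \qquad
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & t_f & t_f^2 & t_f^3 \\ 0 & 1 & 2t_f & 3t_f^2 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}
=
\begin{bmatrix} p_A \\ 0 \\ p_B \\ 0 \end{bmatrix},
$$
which gives $a_0 = p_A$, $a_1 = 0$, $a_2 = 3(p_B - p_A)/t_f^2$, $a_3 = -2(p_B - p_A)/t_f^3$. What I can't tell is whether I should apply this per Cartesian coordinate (since I only have A and B in Cartesian form) or per joint angle after running inverse kinematics.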
|
Good day,
I am currently working on a project using a complementary filter for sensor fusion and a PID algorithm for motor control. I have watched a lot of YouTube videos and consulted various blogs and papers about what to expect when setting the P gain too high or too low.
P gain too low: easy over-correction, and easy to turn by hand
P gain too high: oscillates rapidly
I have a sample video of what I think a high P gain (3 in my case) looks like. Does this look like the P gain is too high? https://youtu.be/8rBqkcmVS1k
From the video:
I noticed that the quad sometimes corrects its orientation immediately after turning a few degrees (4-5 deg). However, it does not do so in a consistent manner.
It also overcorrects.
The reason behind my doubt is that the quadcopter doesn't react immediately to changes. I checked the complementary filter: it quickly updates the filtered angle in response to sudden angular velocity from the gyro, and slowly incorporates the long-term angle changes from the accelerometer. If I am right, is the P gain responsible for compensating for this "delay"?
The formula I used in the complementary filter is the following:
float alpha = 0.98;
pitchAngleCF = alpha * (pitchAngleCF + gyroAngleVelocityArray.Pitch * deltaTime) + (1 - alpha) * accelAngleArray.Pitch;
Here is a video for a P gain of 1: https://youtu.be/rSBrwULKun4
Your help would be very much appreciated :)
|
I have a 2-DOF robot with 2 revolute joints, as shown in the diagram below. I'm trying to calculate (using MATLAB) the torque required to move it, but my answers don't match up with what I'm expecting.
Denavit-Hartenberg parameters:
$$
\begin{array}{c|cccc}
joint & a & \alpha & d & \theta \\
\hline
1 & 0 & \pi/2 & 0 & \theta_1 \\
2 & 1 & 0 & 0 & \theta_2 \\
\end{array}
$$
I'm trying to calculate the torques required to produce a given acceleration, using the Euler-Lagrange techniques as described on pages 5/6 in this paper.
Particularly,
$$ T_i(inertial) = \sum_{j=1}^n D_{ij}\ddot q_j$$
where
$$ D_{ij} = \sum_{p=max(i,j)}^n Trace(U_{pj}J_pU_{pi}^T) $$
and
$$
J_i = \begin{bmatrix}
{(-I_{xx}+I_{yy}+I_{zz}) \over 2} & I_{xy} & I_{xz} & m_i\bar x_i \\
I_{xy} & {(I_{xx}-I_{yy}+I_{zz}) \over 2} & I_{yz} & m_i\bar y_i \\
I_{xz} & I_{yz} & {(I_{xx}+I_{yy}-I_{zz}) \over 2} & m_i\bar z_i \\
m_i\bar x_i & m_i\bar y_i & m_i\bar z_i & m_i \end{bmatrix}
$$
As I was having trouble, I've tried to create the simplest example that I'm still getting wrong. For this I'm attempting to calculate the inertial torque required to accelerate $\theta_1$ at a constant 1 ${rad\over s^2}$. As $\theta_2$ is constant at 0, I believe this should remove any gyroscopic/Coriolis forces. I've made link 1 massless, so its pseudo-inertia matrix is 0. I've calculated my pseudo-inertia matrix for link 2:
$$
I_{xx} = {mr^2 \over 2} = 0.0025\\ I_{yy} = I_{zz} = {ml^2 \over 3} = 2/3
$$
$$
J_2 =\begin{bmatrix}
1.3308 & 0 & 0 & -1 \\
0 & 0.0025 & 0 & 0 \\
0 & 0 & 0.0025 & 0 \\
-1 & 0 & 0 & 2 \\
\end{bmatrix}
$$
My expected torque for joint 1:
$$
T_1 = I\ddot \theta_1 \\
T_1 = {ml^2 \over 3} \times \ddot \theta_1 \\
T_1 = {2\times1\over3}\times1 \\
T_1= {2\over3}Nm
$$
The torque calculated by my code for joint 1:
q = [0 0];
qdd = [1 0];
T = calcT(q);
calc_inertial_torque(1, T, J, qdd)
$$
T_1={4\over3}Nm
$$
So this is my problem, my code $T_1$ doesn't match up with my simple mechanics $T_1$.
The key functions called are shown below.
function inertial_torque_n = calc_inertial_torque(n, T, J, qdd)
    inertial_torque_n = 0;
    for j = 1:2
        Mnj = 0;
        joint_accel = qdd(j);
        for i = 1:2
            Uij = calcUij(T, i, j);
            Ji = J(:,:,i);
            Uin = calcUij(T, i, n);
            Mnj = Mnj + trace(Uin*Ji*transpose(Uij));
        end
        inertial_torque_n = inertial_torque_n + Mnj * joint_accel;
    end
end

function U = calcUij(T, i, j)
    T(:,:,j) = derivative(T(:,:,j));
    U = eye(4,4);
    for x = 1:i
        U = U*T(:,:,x);
    end
end

function T = derivative(T)
    dt_by_dtheta = [0 -1 0 0
                    1  0 0 0
                    0  0 0 0
                    0  0 0 0];
    T = dt_by_dtheta*T;
end
I realise this is a fairly simple robot and a complicated process, but I'm hoping to scale it up to more DOF once I'm happy it works.
|
I am making a white-line follower using an IR sensor module based on the TCRT5000. I am taking the 8-bit ADC reading directly from an Arduino Uno and printing the values on the serial monitor. I observe that the values for white are around 25-35, which is fine. The problem arises when I try to detect an orange (158C) surface: the sensor gives me values very close to those for white, around 25-40.
I could use a colour sensor, but they are bulky and I am not sure I can get readings fast enough with them, since they take a finite time to sample the 'R', 'G' and 'B' pulses. Can someone please suggest an alternative approach to detecting the colours, or any other possible solution to my problem?
EDIT: I would like to add that the line I wish to follow is 3 cm wide, so I plan to use three sensors: two just outside the line on either side and one exactly at the centre. The sampling frequency of the Arduino Uno is around 125 kHz, so sampling the IR sensors is not an issue because it is quick, but using a colour sensor takes a lot of time.
|
I am trying to resolve some issues I am having with some inverse kinematics.
The robot arm I am using has a camera at its end, with which an object is being tracked. From the camera frame I can retrieve a position relative to that frame, but how do I convert that position into a robot state that sets all the joints so that the camera keeps the object at the centre of the frame?
-- My approach --
From my image analysis I retrieve the position of the object I am tracking => an (x,y) coordinate.
I know the position (a) of the end tool at all times from the T_base^tool matrix, and from the image analysis I know the position (b) of the object relative to the camera frame, for which I compute the difference c = b - a.
I then compute the image Jacobian, given c, the distance to the object and the focal length of the camera.
So... that's where I am at the moment. I am not sure whether the position change retrieved from the camera frame will be treated as a position of the tool point, in which case the equation becomes underdetermined, as the length of the state vector would become 7 instead of 6.
The equation that I have must be
$$J_{image}(q)dq = dp$$
J_image(q) [2x6]: the image Jacobian of the robot at the current state q
dq [6x1]: the desired change in the q-state
dp [2x1]: the computed positional change
A solution would be found using linear least squares.
But what I don't get is why the robot itself does not appear in the equation, which makes me doubt my approach.
|
I have started working on robotic manipulators and have gotten into a project which deals with control of a robotic manipulator using artificial neural networks (solution of inverse kinematics and trajectory generation, to be precise!).
Can someone please suggest where to start, as I have no prior knowledge of robotic manipulators or ANNs, or of how to code them?
|
A couple of years ago RoboCup competitions seemed to be quite a lively topic. Now that I'm looking for information about it, it seems to have become somewhat insignificant, but this may be only my first impression (I was looking for the 2D simulation league and it seems that it does not even exist anymore).
So is RoboCup still an active and significant robotics competition?
|
I'm a software developer and I work for a company that I think could use some automation in its warehouse. I thought it would be fun to put together a prototype of a conveyor system that automates a manual sorting process that we do in our warehouses. I'm primarily a .NET developer, so I'm wondering if there is a .NET SDK for conveyor automation.
Any other information on where to start would be helpful, but that is not my main question here.
|
Please help me with the following task. I have an MPU-9150 from which I get accelerometer, gyro and magnetometer data. What I'm currently interested in is getting the orientation and position of the robot. I can get the orientation using quaternions; it's quite stable and rarely changes when the sensor stays still.
But the problem is in converting the accelerometer data to calculate the displacement.
As I understand it, the acceleration data has to be integrated twice to get position.
Using the quaternion I can rotate the acceleration vector and then integrate its axes to get velocity, then do the same again to get position. But it doesn't work that way. First of all, moving the sensor to some position and then moving it back doesn't give me the same position as before. The problem is that after I put the sensor back and it stays without any movement, the velocity doesn't go to zero even though the acceleration data coming from the sensor is zero.
Here is an example (initially it's like this):
the gravity: -0.10 -0.00 1.00
raw accel: -785 -28 8135
accel after scaling to +-g: -0.10 -0.00 0.99
the result after rotating accel vector using quaternion: 0.00 -0.00 -0.00
After moving the sensor and putting it back it's acceleration becomes as:
0.00 -0.00 -0.01
0.00 -0.00 -0.01
0.00 -0.00 -0.00
0.00 -0.00 -0.01
and so on.
If I integrate it, I get a slowly increasing Z position.
But the worst problem is that the velocity doesn't come back to zero.
For example, if I move the sensor once and put it back, the velocity ends up at:
-0.089 for vx and
0.15 for vy
After several such movements it becomes:
-1.22 for vx
1.08 for vy
-8.63 for vz
and after another such movement:
vx -1.43
vy 1.23
vz -9.7
The x and y values don't change if the sensor is not moving, but Z changes slowly,
even though the quaternion is not changing at all.
What would be the correct way to do this task?
Here is the part of code for integrations:
vX += wX * speed;
vY += wY * speed;
vZ += wZ * speed;
posX += vX * speed;
posY += vY * speed;
posZ += vZ * speed;
Currently set speed to 1 just to test how it works.
EDIT 1: Here is the code to retrieve quaternion and accel data, rotate and compensate gravity and get final accel data.
// display initial world-frame acceleration, adjusted to remove gravity
// and rotated based on known orientation from quaternion
mpu.dmpGetQuaternion(&q, fifoBuffer);
mpu.dmpGetAccel(&aaReal, fifoBuffer);
mpu.dmpGetGravity(&gravity, &q);
//Serial.print("gravity\t");
Serial.print(gravity.x);
Serial.print("\t");
Serial.print(gravity.y);
Serial.print("\t");
Serial.print(gravity.z);
Serial.print("\t");
//Serial.print("accell\t");
Serial.print(aaReal.x);
Serial.print("\t");
Serial.print(aaReal.y);
Serial.print("\t");
Serial.print(aaReal.z);
Serial.print("\t");
float val = 4.0f;
float ax = val * (float)aaReal.x / 32768.0f;
float ay = val * (float)aaReal.y / 32768.0f;
float az = val * (float)aaReal.z / 32768.0f;
theWorldF.x = ax;
theWorldF.y = ay;
theWorldF.z = az;
//Serial.print("scaled_accel\t");
Serial.print(ax);
Serial.print("\t");
Serial.print(ay);
Serial.print("\t");
Serial.print(az);
Serial.print("\t");
theWorldF.x -= gravity.x;
theWorldF.y -= gravity.y;
theWorldF.z -= gravity.z;
theWorldF.rotate(&q);
//gravity.rotate(&q);
//Serial.print("gravity_compensated_accel\t");
Serial.print(theWorldF.x);
Serial.print("\t");
Serial.print(theWorldF.y);
Serial.print("\t");
Serial.print(theWorldF.z);
Serial.print("\t");
Serial.print(deltaTime);
Serial.println();
EDIT 2:
dmpGetQuaternion, dmpGetAccel functions are just reading from the FIFO buffer of MPU.
dmpGetGravity is:
uint8_t MPU6050::dmpGetGravity(VectorFloat *v, Quaternion *q) {
v -> x = 2 * (q -> x*q -> z - q -> w*q -> y);
v -> y = 2 * (q -> w*q -> x + q -> y*q -> z);
v -> z = q -> w*q -> w - q -> x*q -> x - q -> y*q -> y + q -> z*q -> z;
return 0;
}
EDIT 3:
the library for using MPU 9150:
https://github.com/sparkfun/MPU-9150_Breakout
EDIT 4: Another example
gravity vector: -1.00 -0.02 0.02
raw accel data: -8459 -141 125
accel data scaled (+-2g range): -1.03 -0.02 0.02
gravity compensation and rotation of accel data: -0.01 0.00 0.33
|