For the past two weeks I have been looking for a definitive solution to my problem, and I am completely lost. I am working on a mobile robot (Rover 5) with 2 motors and 2 encoders. The controller designed for the robot needs the robot's odometry (X, Y, heading angle), which I am trying to obtain from the encoders by measuring the distance traveled by each wheel. To get usable X, Y, and heading values, I need accurate readings that miss as few counts (ticks) as possible.
The problem now is :
While testing the encoder counts with the code in the attachment, I noticed that there is a difference between the two encoders' counts even when the motors spin at the same constant speed (PWM), and the difference grows the longer the motors run. I think this is the main cause of my inaccurate odometry results.
In the output of the code (also in the attachment), the first two columns are the right and left motor speeds, the third and fourth columns are the right and left encoder counts, and the fifth column is the difference between the two counts. As you can see, even when the two motor speeds are approximately equal (each motor fed 100 PWM), there is a difference in the encoder counts, and it keeps growing as the motors continue to spin.
One thing I considered is that sending the same PWM value to two different motors will almost never produce exactly the same speed, so I should measure the actual motion of each motor and adjust the power to match the speed/distance. But when I measured the motor speeds after feeding both 100 PWM at the same time, the two speeds were almost identical, and yet the encoder counts still drift apart even at the same constant speed.
I honestly don't know where the problem is. Is it in the code? In the hardware? Something else? I am completely lost and would appreciate the help of someone patient.
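The closed-loop idea mentioned above (measure each wheel's actual motion and trim the power so the counts stay matched) could be sketched roughly as follows, in plain C++ rather than Arduino code, and with a made-up gain that would need tuning:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical proportional correction: trim the right motor's PWM so the
// right encoder count tracks the left encoder count. Kp is an illustrative
// gain, not a tuned value.
int correctedRightPWM(int basePWM, long leftCount, long rightCount) {
    const double Kp = 0.5;                        // assumed gain, needs tuning
    double correction = Kp * (double)(leftCount - rightCount);
    int pwm = basePWM + (int)correction;
    return std::max(0, std::min(255, pwm));       // clamp to the valid PWM range
}
```

Called once per control period, this speeds the right motor up when its count falls behind the left and slows it down when it runs ahead.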
/* Encoder-ino.ino
*/
#define encoder0PinA 2
#define encoder0PinB 4
#define encoder1PinA 3
#define encoder1PinB 5
volatile int encoder0Pos = 0;
volatile int encoder1Pos = 0;
int WR=100; // angular velocity of right wheel
int WL=100; // angular velocity of left wheel
long newposition;
long oldposition = 0;
unsigned long newtime;
unsigned long oldtime = 0;
long vel;
long newposition1;
long oldposition1 = 0;
unsigned long newtime1;
unsigned long oldtime1 = 0;
long vel1;
int ENA=8; // SpeedPinA connected to Arduino's port 8
int ENB=9; // SpeedPinB connected to Arduino's port 9
int IN1=48; // RightMotorWire1 connected to Arduino's port 48
int IN2=49; // RightMotorWire2 connected to Arduino's port 49
int IN3=50; // LeftMotorWire1 connected to Arduino's port 50
int IN4=51; // LeftMotorWire2 connected to Arduino's port 51
void setup() {
  pinMode(ENA, OUTPUT);
  pinMode(ENB, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT);
  pinMode(IN4, OUTPUT);
  digitalWrite(ENA, HIGH); // enable motorA
  digitalWrite(ENB, HIGH); // enable motorB
  pinMode(encoder0PinA, INPUT);
  pinMode(encoder0PinB, INPUT);
  pinMode(encoder1PinA, INPUT);
  pinMode(encoder1PinB, INPUT);
  // encoder pin on interrupt 0 (pin 2)
  attachInterrupt(0, doEncoderA, CHANGE);
  // encoder pin on interrupt 1 (pin 3)
  attachInterrupt(1, doEncoderB, CHANGE);
  Serial.begin(9600);
}
void loop() {
  int rightPWM;
  if (WR > 0) {
    // forward
    digitalWrite(IN1, LOW);
    digitalWrite(IN2, HIGH);
  } else if (WR < 0) {
    // reverse
    digitalWrite(IN1, HIGH);
    digitalWrite(IN2, LOW);
  }
  if (WR == 0) {
    rightPWM = 0;
    analogWrite(ENA, rightPWM);
  } else {
    rightPWM = map(abs(WR), 1, 100, 1, 255);
    analogWrite(ENA, rightPWM);
  }
  int leftPWM;
  if (WL > 0) {
    // forward
    digitalWrite(IN3, LOW);
    digitalWrite(IN4, HIGH);
  } else if (WL < 0) {
    // reverse
    digitalWrite(IN3, HIGH);
    digitalWrite(IN4, LOW);
  }
  if (WL == 0) {
    leftPWM = 0;
    analogWrite(ENB, leftPWM);
  } else {
    leftPWM = map(abs(WL), 1, 100, 1, 255);
    analogWrite(ENB, leftPWM);
  }
  // to determine the speed of the motors from the encoders
  newposition = encoder0Pos;
  newtime = millis();
  vel = (newposition - oldposition) * 1000 / (long)(newtime - oldtime);
  oldposition = newposition;
  oldtime = newtime;
  newposition1 = encoder1Pos;
  newtime1 = millis();
  vel1 = (newposition1 - oldposition1) * 1000 / (long)(newtime1 - oldtime1);
  oldposition1 = newposition1;
  oldtime1 = newtime1;
  Serial.print(vel);
  Serial.print("\t");
  Serial.print(vel1);
  Serial.print("\t");
  Serial.print(encoder0Pos * -1);
  Serial.print("\t");
  Serial.print(encoder1Pos * -1);
  Serial.print("\t");
  Serial.println((encoder0Pos * -1) - (encoder1Pos * -1));
}
// encoder 0 counts
void doEncoderA() {
  // look for a low-to-high on channel A
  if (digitalRead(encoder0PinA) == HIGH) {
    // check channel B to see which way the encoder is turning
    if (digitalRead(encoder0PinB) == LOW) {
      encoder0Pos = encoder0Pos + 1;   // CW
    } else {
      encoder0Pos = encoder0Pos - 1;   // CCW
    }
  } else {
    // must be a high-to-low edge on channel A
    // check channel B to see which way the encoder is turning
    if (digitalRead(encoder0PinB) == HIGH) {
      encoder0Pos = encoder0Pos + 1;   // CW
    } else {
      encoder0Pos = encoder0Pos - 1;   // CCW
    }
  }
}
// encoder 1 counts
void doEncoderB() {
  // look for a low-to-high on channel B
  if (digitalRead(encoder1PinB) == HIGH) {
    // check channel A to see which way the encoder is turning
    if (digitalRead(encoder1PinA) == HIGH) {
      encoder1Pos = encoder1Pos + 1;   // CW
    } else {
      encoder1Pos = encoder1Pos - 1;   // CCW
    }
  } else {
    // must be a high-to-low edge on channel B
    // check channel A to see which way the encoder is turning
    if (digitalRead(encoder1PinA) == LOW) {
      encoder1Pos = encoder1Pos + 1;   // CW
    } else {
      encoder1Pos = encoder1Pos - 1;   // CCW
    }
  }
}
The result:
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 2
-181 -90 3 2 1
-111 -55 5 4 1
-187 -187 9 8 2
-176 -235 12 12 1
-200 -200 16 16 1
-250 -250 21 21 1
-250 -250 26 26 1
-210 -210 31 31 1
-238 -285 36 36 1
-315 -263 41 41 1
-300 -200 47 46 2
...
-227 -272 184 182 3
-285 -285 190 187 4
-260 -217 195 193 3
-238 -285 201 199 3
...
-250 -250 1474 1473 2
-250 -250 1480 1479 0
-208 -291 1485 1485 1
-304 -260 1491 1492 1
-240 -240 1498 1498 1
-260 -260 1504 1505 0
-250 -291 1510 1511 1
-280 -240 1516 1517 1
-260 -260 1523 1523 1
...
-250 -250 2953 2948 5
-250 -291 2959 2955 6
-250 -250 2965 2961 6
-291 -250 2971 2967 5
-250 -291 2978 2973 5
-304 -250 2985 2980 8
-320 -250 2992 2986 8
...
-320 -240 3085 3075 10
-291 -291 3092 3082 12
-269 -230 3099 3089 11
-250 -291 3105 3095 11
-280 -280 3112 3102 11
-269 -230 3118 3108 12
-250 -291 3125 3115 11
...
-291 -250 3607 3587 19
-115 -269 3610 3594 17
-240 -240 3617 3601 18
-375 -291 3625 3607 19
-269 -269 3632 3614 20
-291 -250 3638 3620 20
-240 -280 3645 3627 20
-280 -240 3652 3633 18
-200 -280 3657 3640 19
-269 -230 3664 3647 19
-333 -291 3674 3653 23
-400 -280 3682 3659 23
-280 -240 3688 3666 24
-240 -280 3695 3673 24
...
-230 -269 4677 4644 32
-208 -291 4681 4651 32
-280 -240 4690 4657 35
-320 -280 4696 4664 34
-240 -240 4703 4670 34
-291 -291 4710 4677 34
-269 -230 4716 4683 34
-240 -280 4723 4690 34
-280 -240 4727 4697 32
-160 -280 4736 4703 35
-416 -291 4745 4709 38
-346 -230 4753 4716 39
...
-360 -240 6240 6190 51
-375 -291 6247 6197 51
-269 -269 6253 6203 52
-291 -250 6261 6210 53
...
-192 -269 6428 6374 56
-240 -280 6436 6380 57
-291 -250 6443 6387 57
-269 -269 6449 6394 57
...
-269 -269 7763 7687 78
-240 -280 7770 7694 78
-291 -250 7776 7700 76
-192 -269 7781 7707 76
...
-269 -230 8263 8179 84
-250 -291 8269 8186 85
-240 -240 8276 8192 88
-384 -269 8286 8199 88
-250 -291 8292 8206 88
-269 -230 8299 8212 87
-291 -291 8305 8219 88
-240 -240 8310 8225 85
...
-160 -120 8359 8276 83
-125 -166 8362 8280 82
-115 -115 8365 8283 83
-80 -120 8367 8285 82
-125 -83 8370 8288 82
-83 -125 8371 8290 82
-43 -43 8373 8291 81
-83 -83 8374 8293 82
-45 -90 8375 8294 81
-43 -43 8376 8296 81
-43 -43 8377 8296 81
-43 -43 8378 8297 81
|
I'm currently designing a linear camera slider that will be used to hold camera equipment weighing about 15 kg, including all of the lenses, monitors, and everything else.
For those who don't know what a camera slider is: it's a linear rail on top of which a camera is mounted; the camera is then slid slowly along it to create smooth footage.
The problem
Now, looking at the commercially available camera sliders out there, there seem to be two ways in which the motor may be mounted:
Motor mounted on the side:
Motor mounted directly on the carriage:
I would like to know which option would be optimal performance-wise (this slider may also be used vertically, to create bottom-to-top slide shots), efficiency-wise, and
which of the two is more resistant to motor vibration (these motors vibrate a lot, and the effects sometimes leak into the produced footage).
Additional Questions
A motor mounted directly on the carriage may, just maybe, be more efficient, but it also has to carry its own weight in addition to the 15 kg camera load?
Pulling force is greater than pushing force (I have no idea why; it would be great if someone explained why, at least in this case), so a motor mounted at the end should be able to lift vertically with ease?
Does a belt setup as shown in the first figure above really dampen the motor vibrations? Will the vibration of a motor mounted at the end be amplified (the whole setup will be attached to a single tripod at the exact center of the slider)?
Which design is less stressful for the motor, taking inertia into consideration in both cases?
Which of these designs is best suited for pulling the load vertically against gravity?
Manufacturers use both designs interchangeably, so it's hard to tell which design is better.
Any help would be much appreciated!
Please note, this question has been migrated from the Stackexchange Physics (and Electrical) forum by me because the mods thought it would be appropriate here.
|
It need not be as effective as lidar, and it may have some disadvantages compared with lidar. What are the probable alternatives?
Edit:
I intend to use it outdoors for navigation of an autonomous vehicle. Is there any low-cost LIDAR, or is there an alternative sensor for obstacle detection?
|
I need a two-state linear actuator. Have a look at the picture to understand what I mean.
(Ignore the hand!)
(source: robaid.com)
I need to electrically move squares like these up and down, so bidirectional linear actuators are needed.
What is the cheapest and tiniest actuator (or other mechanism) I can use to move these squares up and down? There are just two states, 'up' and 'down'; it doesn't matter how far a square rises when it is up.
|
I am implementing a particle filter in Java. The problem with my implementation is that the particles suddenly move away from the robot, i.e. the resampling process favors particles that are farther from the robot over those that are near it. The particles chase the robot, but always remain behind it. I am trying to find the root cause, but with no luck. Can anyone please help me figure out where I am going wrong?
I am adding all the important code snippets, and also some screenshots in consecutive order, to make it clearer.
Details:
I am using a range sensor which only works in one direction, i.e. it is fixed and reports the distance to the obstacle in front of it. If there is no obstacle in its line of sight, it reports the distance to the boundary wall.
Code:
Calculating Range
/*
* This method returns the range reading from the sensor mounted on top of robot.
* It uses x and y as the actual position of the robot/particle and then creates Vx and Vy as virtual x and y.
* These virtual x and y loop from the current position till some obstruction is there and tell us distance till there.
*/
private int calculateRange(double x, double y, double Vx, double Vy, int counter, int loop_counter)
{
while(robotIsWithinBoundary(Vx, Vy))
{
int pace = 2;
Vx += pace* Math.sin(Math.toRadians(robot_orientation));
Vy += pace* Math.cos(Math.toRadians(robot_orientation));
counter++;
Line2D line1 = new Line2D.Double(x,y,Vx,Vy);
if(line1.intersects(obst1))
{
//System.out.println("Distance to obst1:"+counter);
loop_counter++;
break;
}
if(line1.intersects(obst2))
{
//System.out.println("Distance to obst2:"+counter);
loop_counter++;
break;
}
}
return counter;
}
/*
* This method tells us whether the robot/particle is within boundary or not.
*/
private boolean robotIsWithinBoundary(double x, double y)
{
boolean verdict = true;
if(x>680||x<0)
{
verdict = false;
}
if(y<0||y>450)
{
verdict = false;
}
return verdict;
}
Calculating Weights
/*
* This method calculates the importance weights for the particles based on the robot_range which is
* the reading of the range sensor for the robot.
*/
private double measurementProbability(int index)
{
double probability=1;
double particle_x_position=particleListX.get(index);
double particle_y_position=particleListY.get(index);
double particle_Vx=particle_x_position;
double particle_Vy=particle_y_position;
int range_counter=0;
int loop_counter=0;
int distance = calculateRange(particle_x_position, particle_x_position, particle_Vx, particle_Vy ,range_counter, loop_counter);
probability *= calculateGaussianDistance(distance, senseNoise, robot_range);
//System.out.println(probability);
return probability;
}
private double calculateGaussianDistance(double mu, double sigma, double x )
{
double gDistance=Math.exp(-(((Math.pow((mu - x),2))/(Math.pow(sigma,2)) / 2.0) / (Math.sqrt(2.0 * Math.PI * (Math.pow(sigma,2))))));
return gDistance;
}
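For reference, the standard 1-D Gaussian density that importance weighting of this kind is usually based on can be written as follows (a sketch in plain C++ for illustration, not a drop-in replacement for the snippet above):

```cpp
#include <cassert>
#include <cmath>

// Standard 1-D Gaussian density N(x; mu, sigma^2), commonly used as the
// measurement likelihood when weighting particles: the closer the particle's
// predicted range x is to the measured range mu, the larger the weight.
double gaussian(double mu, double sigma, double x) {
    const double kPi = 3.14159265358979323846;
    double norm = 1.0 / std::sqrt(2.0 * kPi * sigma * sigma);   // normalization
    double expo = -((x - mu) * (x - mu)) / (2.0 * sigma * sigma);
    return norm * std::exp(expo);
}
```

Note that the normalization factor multiplies the exponential; it does not sit inside it.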
Resampling
/*
* This method provides a resampled particle back to the list. It chooses a particle randomly
* from the list based on the weights with replacement.
*/
private int giveResampledParticle()
{
int index = randomInteger(0, n-1);
double sample =0.0;
double maxWeight = maximumWeight();
sample += randomDouble(0, maxWeight);
while(sample > particleListProbability.get(index))
{
sample -= particleListProbability.get(index);
index = (index +1) % n;
}
return index;
}
|
I am trying to write a simple program where the robot (Lego NXT 2.0) will follow a blue line.
#pragma config(Sensor, S1, , sensorCOLORBLUE)
//!!Code automatically generated by 'ROBOTC' configuration wizard !!//
task main()
{
  while(true)
  {
    if(SensorValue[S1] == sensorCOLORBLUE)
    {
      motor[motorB] = 0;
      motor[motorC] = -50;
    }
    else
    {
      motor[motorB] = -50;
      motor[motorC] = 0;
    }
  }
  wait1Msec(1);
}
I am using an NXT color sensor, and the problem is that only one motor moves. I know that neither motor is broken, because I tested them both.
Can somebody help me diagnose the problem?
|
Simply put: when should one use a brushless DC motor, and when a servo motor?
What are the differences? In particular, if you add an encoder to a DC motor you get position feedback, which makes it similar to a servo motor.
|
I am on a robotics team that plans to compete in a competition where one of the rules is that no sonic sensor of any sort may be used. I guess that limits us to some EM-frequency approach, right?
Ideally, my team is looking for a simple beacon system: beacon A would be attached to the robot, while beacon B would be attached to a known point in the competition space. Beacon A could then report how far away B is. After some searching, I could only find laser rangefinders that require pointing at the target. I am a CS student, so I'm not familiar with the terminology that would aid my searches.
Another nice property would be if the beacons also gave the angle of beacon A in beacon B's field of view, although this is not necessary, since multiple beacons could be used to obtain this information.
We have an Xbox 360 Kinect working and able to track things and give distances, but it loses accuracy quickly over distance (the arena is about 6 meters long), and this beacon should be as simple as possible. We only need it for the relative position of our robot.
Alternate Solution:
Another way to solve this would be for an omni-directional beacon to only give angle information, two of these could be used to triangulate, and do the job just as well.
|
In EKF-SLAM (with a feature-based map), once the robot senses a new landmark, the landmark is augmented to the state vector. As a result, the state vector and the covariance matrix are expanded. My question is about the initial uncertainty of the new landmark and its correlation entries in the covariance matrix: how should I assign them? When I set them to zero, the estimation error for this landmark does not change over time. If I assign very large values, the estimate improves every time the robot re-observes the landmark, but the error approaches a fixed value rather than zero. I assume the problem is with how I assign the uncertainty. Any suggestions?
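For reference, one standard way to initialize the new rows and columns (my notation; it assumes an inverse observation function $g(x, z)$ that maps the current state $x$ and the measurement $z$ to the landmark position) is

$$
P \leftarrow
\begin{bmatrix}
P & P \, G_x^{T} \\
G_x \, P & G_x \, P \, G_x^{T} + G_z \, R \, G_z^{T}
\end{bmatrix}
$$

where $G_x = \partial g / \partial x$, $G_z = \partial g / \partial z$, and $R$ is the measurement noise covariance. This seeds the landmark with an uncertainty consistent with the current pose uncertainty plus the sensor noise, and with cross-correlations that allow later re-observations to drive the error down.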
|
For my quadcopter, I power it on while it sits stably on the ground, yet I see the roll and pitch fluctuate, with a maximum difference of 15 degrees. When I protect the sensor with soft material, the maximum difference drops to around 6 degrees. Is this amount of fluctuation normal for a quadcopter? By the way, I use a complementary filter and DCM, with scaling factors of 0.8 for the gyro and 0.2 for the accelerometer.
Thanks in advance!
|
I'm trying to control a plane's roll using a PID controller.
I had trouble finding the transfer function, so I used the following method:
Fix the plane in a wind tunnel.
Change the motor that controls the roll in fixed steps and record the roll.
This gives a table of roll vs. motor setting.
Then deduce the nonlinear function using Wolfram Alpha or a neural-network approximation.
Is this a correct method, or should I try another one?
|
I've been using the MPU-6050 IMU (gyro + accelerometer).
I found that I can set the accelerometer range to +/-2 g, +/-4 g, and so on up to +/-16 g,
and likewise the gyro to +/-250 deg/sec, +/-500 deg/sec, and so on.
I know these sensors are low-cost and quite noisy, so which range settings are best to ensure the highest accuracy?
|
I am trying to compute the forward kinematics of the KUKA youBot using the DH convention:
http://www.youbot-store.com/youbot-developers/software/simulation/kuka-youbot-kinematics-dynamics-and-3d-model
Arm joints 1 and 5 are revolute and rotate about the world z-axis (pointing to the sky),
but the other three joints are also revolute and rotate about the x-axis, let's say (pointing horizontally).
The DH convention says the "joint distance" is along the "common normal". But unless I am mistaken, the only common normal is the y-axis, which is also horizontal, meaning there is no joint distance.
I was thinking I would use the link offset for joint 1 - joint 2, but then I ran into a problem with joint 4 - joint 5. The link offset is supposed to be along the previous z-axis, which in that case would point horizontally out to nowhere. But the link distance STILL doesn't work either, because that is along the common normal, and as established the common normal is the x-axis, also horizontal. So now I am stuck. I am sure there is a simple solution, but I can't see it.
So I guess the question is: how do I use the DH convention for the links between joints 1-2 and 4-5, when the joint rotation axes are perpendicular?
|
I am designing an experiment to control 6 small wind turbines wirelessly. For each wind turbine, I need to measure a power time series (or a voltage or current time series) from the generator, and control the blade pitch angle, the yaw angle, and the generator load (using a variable resistance). The control inputs will all be PWM signals.
I am planning to attach an Arduino UNO with a ZigBee wireless module to each wind turbine, so it can measure the power time series and transmit it to the central node, as well as receive control inputs from the central node and drive the servo motors accordingly. The central node will be an additional Arduino UNO.
Here are my questions:
Is it possible for each Arduino to send its time series to the central node wirelessly without interfering with the other Arduinos (6 wind turbines transmitting time series to one central server)? If it is possible, how can I implement such a network? A recommended learning resource would also be greatly helpful.
Interface between the central node and the computer software: the algorithm on the computer needs to process the received power time series and determine the optimal control inputs for the 6 wind turbines. These control inputs should then be transmitted wirelessly back to the turbines. In that case, what is a good option for interfacing the algorithm with the Arduino connected to the computer? The algorithm is currently written in Matlab. I've heard there is a sketch for interfacing Arduino and Matlab; is it efficient enough for such a project?
|
I was playing the old "confuse the cat with a flashlight" game when I thought that I might like to program a confuse-a-cat robot.
Something, probably with tracks, which can right itself if the cat flips it over, and which I can program to move randomly around a room, turning at walls and occasionally making a sound or flashing a light.
Since I am on a very tight budget, I wondered if there is some cheap kit which I can program:
Arduino, Raspberry Pi, any platform, so long as it is programmable.
Thanks in advance for your help
|
I have a Doyusha Nano Spider R/C mini-copter, controlled by a 4-channel 2.4 GHz joystick.
I am looking for a low-cost way to control it from the computer. The software is not a problem, but how can I convert the computer's WiFi or Bluetooth signal into an R/C signal compatible with the mini-copter's receiver?
Or is there another solution that is low cost?
|
Sorry for asking a mechanical question here, but after all, where else do people have experience using motors? If there is a better forum for this, please point me to it.
Everywhere I've seen online, the 28BYJ-48 stepper motor is used in tutorials either rotating on its own or, at most, spinning a clothespin attached to it. I am trying to get Arduino working for my 10-year-old kid. He's got the motor rotating; now what? How does he attach anything to it?
Don't laugh: I made him a wheel out of a raw potato, and he is happy with it for now. Where can I find any guidance on what to do next?
|
I am building a simple robot with a few functions for someone. One of these functions is inflating a balloon inside the robot. I know how to control a compressor from an Arduino, but the requested task is a bit different here:
There must be an air outlet that is controllable through the Arduino, so he can inflate the balloon to a certain pressure and release the air through another outlet if needed (I don't know if release is possible through the same pressure-in valve).
I think it can be done somehow with a 3/2 solenoid valve or something similar, but I am a bit unfocused these days and need some hints.
|
I recently built a self-driving vehicle-type robot for a competition, and am looking to sell the sensors (GPS, INS, etc.) I used in order to fund the next project. Is eBay where people tend to go looking for used sensors and hardware?
|
I have a project that requires me to be able to accurately and repeatedly rotate an object 120 degrees.
The object is small and lightweight (let's say several grams). The axis does not necessarily have to always spin the same direction. It simply needs to be able to stop reliably at 0, +/- 120, and +/-240 degrees from the origin.
I have VERY limited experience with motors and robotics, but my understanding is that a servo motor will be my best bet for accuracy (if that is incorrect, please let me know).
Since I know next to nothing about these motors, the spec sheets list a lot of specifications that don't mean much to me. I'm hoping to learn, but in the meantime, which specifications should I focus on for these requirements?
It doesn't need to be high speed. When I say accurate, it doesn't have to be perfect to the micrometer, but I would like it to be able to run through a loop stopping at 0, 120, and 240 degrees hundreds of times without visually noticeable variance; the more precise the better, though.
To be more specific about the accuracy. Let's say the object being rotated will have a flat surface on the top at each of those 3 stopping points. Upon inspection the surface needs to appear level each and every time through hundreds of cycles.
Could these requirements be met by a servo of the kind used in building a quadcopter, or am I going to be looking for something higher-grade than that?
|
I am planning to control multiple Dynamixel servos (MX-28T or MX-64T) wirelessly using an Arduino Mega. Since these servos use serial communication, I need an additional serial port to interface with the XBee module. Although controlling these servos wirelessly from an Arduino seems to be a very common application, I couldn't find any examples on the web. I did find two well-constructed libraries:
https://code.google.com/p/slide-33/downloads/list - This library is for the MX-28T servo, the same servo I am trying to use, but it targets the UNO; therefore I cannot also interface with the XBee.
http://www.pablogindel.com/informacion/the-arduinodynamixel-resource-page/ - This library uses UART1 (Serial1) to interface with the servos (AX-12), so I could connect the XBee module to UART0. But the problem is that this library is outdated and no longer compatible with the MX-64T servo.
So my questions are:
Does anyone have experience controlling the Dynamixel MX-28T / MX-64T servo series together with an XBee module? If you have, please share it with me.
Can the Arduino Mega interface with the XBee module using Serial1 (i.e., RX1 on pin 19, TX1 on pin 18)? If it can, I might be able to use the first library without any modification.
|
I'm currently programming an app for a robot, and I'd like to make it map a zone and then move autonomously from one point to another.
I have to solve a SLAM problem, but the biggest issue is that I can't use landmarks to localize in the environment. The robot can only move and take distance measurements over a -120/+120 degree arc using a sonar.
I can't find any simply explained algorithm that lets me solve this SLAM problem with the no-landmark limitation.
Do you have any ideas?
|
This is for a battle robot in the hobby-weight class (5.44 Kg max)
I want to drive the robot using two cordless-drill motors rated at 14.4 volts. I have 4S LiPos, which gives me 4 x 3.7 = 14.8 volts. So far so good.
The problem is that I bought two ESCs and only afterwards noticed that they are rated for 2-3S (a max of 11.1 volts).
So my question is: am I likely to damage the ESCs if I use my 4S LiPos instead of 3S LiPos?
Or should I just buy 3S LiPos and live with the reduced performance?
|
This is for a Hobby-weight (5.44 Kg) battle robot.
I bought two ESCs for my drive motors, but the ESCs have no reverse function (or brake, for that matter).
Is there any simple way I can achieve this, through maybe either:
The R/C settings (setting the middle joystick position as stopped, upwards as forward, and downwards as reverse)?
Or could I maybe achieve this using an Arduino? I have a relay board I can use with the Arduino, so I am not worried about high voltage or current, but I worry it could get messy.
I could just buy two new ESCs with the above features, but they cost quite a bit more than the ones I already have, so I would prefer to try a few tricks first, if there are any!
|
Newbie to robotics here!
I bought a 5S LiPo but now realize that it is overkill. And these things are expensive!
So, given that (as far as I know) the pack is made up of individual 3.7-volt cells, is there any way I could safely separate the cells to get a 3S and a 2S, or even single 1S cells?
|
How can I periodically estimate the states of a continuous-time linear time-invariant system of the form $$\dot{\vec{x}}=\textbf{A}\vec{x}+\textbf{B}\vec{u}$$
$$\vec{y}=\textbf{C}\vec{x}+\textbf{D}\vec{u} $$if the measurements of its output $y$ are performed in irregular intervals? (suppose the input can always be measured).
My initial approach was to design a Luenberger observer using estimates $\hat{\textbf{A}}$, $\hat{\textbf{B}}$, $\hat{\textbf{C}}$ and $\hat{\textbf{D}}$ of the abovementioned matrices, and then update it periodically every $T_s$ seconds according the following rule:
If there has been a measurement of $y$ since the last update: $$\dot{\hat{x}}=\hat{\textbf{A}}\hat{x}+\hat{\textbf{B}}\hat{u}+\textbf{L}(y_{measured}-\hat{\textbf{C}}\hat{x})$$
If not:
$$\dot{\hat{x}}=\hat{\textbf{A}}\hat{x}+\hat{\textbf{B}}\hat{u}$$
(I have omitted the superscript arrows for clarity)
I believe that there may be a better way to do this, since I'm updating the observer using an outdated measurement of $y$ (which is outdated by $T_s$ seconds in the worst case).
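The "predict every step, correct only when a sample arrives" rule described above can be sketched in discrete time for a scalar system (Euler-discretized, with illustrative model values and an untuned gain):

```cpp
#include <cassert>
#include <cmath>

// Scalar observer for x_{k+1} = a*x_k + b*u_k, y = c*x.
// Predict on every step; apply the innovation correction only on steps
// where a fresh measurement is available. All numbers here are assumed
// placeholders, not a tuned design.
struct Observer {
    double a = 0.9, b = 0.1, c = 1.0;   // discretized model (assumed)
    double L = 0.5;                     // observer gain (assumed, |a - L*c| < 1)
    double xhat = 0.0;                  // current state estimate

    void step(double u, bool haveMeasurement, double y) {
        double pred = a * xhat + b * u;           // open-loop prediction
        if (haveMeasurement)
            pred += L * (y - c * xhat);           // correct with the innovation
        xhat = pred;
    }
};
```

Run against a true system with the same model, the estimate converges even when measurements arrive only every few steps, since the error still contracts by the factor |a - Lc| on each corrected step.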
|
How could I compute the shortest path between points A and B using a wavefront planner?
I don't see how the wavefront planner would give me the shortest path; it seems it would just give me a path! As far as I can tell, I would only get some path to the destination, nothing more.
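For what it's worth, on a uniform-cost grid the wavefront expansion is a breadth-first search, and following strictly decreasing distance values from the start to the goal does reproduce a minimum-length route, since each step reduces the remaining distance by exactly one. A small sketch (4-connected grid, illustrative only):

```cpp
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

// Wavefront / BFS distances from the goal on a 4-connected grid.
// grid: 0 = free cell, 1 = obstacle. Returns -1 for unreachable cells.
std::vector<std::vector<int>> wavefront(const std::vector<std::vector<int>>& grid,
                                        int goalRow, int goalCol) {
    int R = grid.size(), C = grid[0].size();
    std::vector<std::vector<int>> dist(R, std::vector<int>(C, -1));
    std::queue<std::pair<int,int>> q;
    dist[goalRow][goalCol] = 0;
    q.push(std::make_pair(goalRow, goalCol));
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!q.empty()) {
        std::pair<int,int> cell = q.front(); q.pop();
        for (int k = 0; k < 4; ++k) {
            int nr = cell.first + dr[k], nc = cell.second + dc[k];
            if (nr >= 0 && nr < R && nc >= 0 && nc < C &&
                grid[nr][nc] == 0 && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[cell.first][cell.second] + 1; // next wave ring
                q.push(std::make_pair(nr, nc));
            }
        }
    }
    return dist;
}
```

To extract a shortest path, start at the start cell and repeatedly move to any neighbor whose distance value is one less, until the goal (value 0) is reached.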
|
I know that some DC motors produce a lot of torque but turn at a very slow rate, while others do the exact opposite. I know I need some balance between torque and RPM for use in an underwater thruster, but I am not sure which I should favor more: torque or RPM? Also, it would be great if someone could suggest a motor at or below the $300 range for a UROV.
|
I have a question regarding the implementation of a quadrotor's position controller.
In my Matlab model the quadrotor takes 4 inputs: a desired altitude ($Z_{des}$) and desired attitude angles ($\Phi_{des}$, $\Theta_{des}$, $\Psi_{des}$), which reflect the motion described by the differential equations of the model (see the last picture).
Here is an insight into the implemented Matlab dynamic model. As you can see, it is structured like an inner-loop controller:
Anyway, it "hovers" perfectly at the starting point (perfect graphs :) ).
Now I need to implement some sort of position controller to get the quadrotor from a start point to a goal point, defined as usual through 3 coordinates $[X_d, Y_d, Z_d]$.
That's tricky because the inputs and outputs of the system are not the same state-space variables: the controller must take a vector of three coordinates and output the 3 angles needed to get there. The only exception is the height, which is simply passed through by the controller and doesn't need another calculation loop. It's a different story for the three angles.
My first idea was to simply create a feedback between the position given at the output of the simulated system and the desired position as in the figure above.
But that rises another question: my quadrotor model solves the following equation system:
$$
\large \cases{
\ddot X = ( \sin{\psi} \sin{\phi} + \cos{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr
\ddot Y = (-\cos{\psi} \sin{\phi} + \sin{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr
\ddot Z = -g + (\cos{\theta} \cos{\phi}) \frac{U_1}{m} \cr
\dot p = \frac{I_{YY} - I_{ZZ}}{I_{XX}}qr - \frac{J_{TP}}{I_{XX}} q \Omega + \frac{U_2}{I_{XX}} \cr
\dot q = \frac{I_{ZZ} - I_{XX}}{I_{YY}}pr - \frac{J_{TP}}{I_{YY}} p \Omega + \frac{U_3}{I_{YY}} \cr
\dot r = \frac{I_{XX} - I_{YY}}{I_{ZZ}}pq - \frac{U_4}{I_{ZZ}}
}
$$
that means that they expect (as in the matlab model above) the desired angles and height.
But now I need exactly the inverse: given a desired position, calculate the right angles!
For the heading the solution is really simple, since I can write something like:
Psi = atan2( (yd - yactual), (xd - xactual) );
where y and x lies on the horizontal plane. This is not so simple for the other two angles. So what can I do at this point? Just "invert" the given equations to get the desired angles?
Another idea could be to implement a simple PD or PID controller. This is much easier given that I can experiment very quickly using Simulink and get very good results. But the problem is here again: how do I get the desired angles from a desired position?
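One common route (not necessarily the only one) is to run an outer PD loop on position that produces desired accelerations, then invert the X/Y equations of the model above under a small-angle assumption to get the attitude set-points. A sketch of that inversion (variable names are mine, not from the model):

```python
import math

def position_to_attitude(ax_des, ay_des, az_des, psi, m, g=9.81):
    """Invert the X/Y acceleration equations under a small-angle
    assumption (sin x ~ x, cos x ~ 1, U1/m ~ g): given the accelerations
    an outer PD position loop asks for, return attitude set-points."""
    theta_des = (ax_des * math.cos(psi) + ay_des * math.sin(psi)) / g
    phi_des = (ax_des * math.sin(psi) - ay_des * math.cos(psi)) / g
    # total thrust needed to hold the commanded vertical acceleration
    u1 = m * (g + az_des) / (math.cos(theta_des) * math.cos(phi_des))
    return phi_des, theta_des, u1
```

The outer loop would be something like ax_des = Kp*(Xd - X) - Kd*Xdot; saturating the resulting angle set-points keeps the small-angle assumption valid.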
|
I would like to estimate the yaw angle from accelerometer and gyroscope data. For roll and pitch estimate I've used the following trigonometric equations:
roll = atan2(Ax,Az) * RAD_TO_DEG
pitch = atan2(Ay,Az) * RAD_TO_DEG
and a simplified version of the Kalman filter to also take angular rates into account. The roll and pitch estimates are accurate (accelerometer values need to be filtered in the presence of chassis vibrations).
In order to get the Yaw angle I'm using the following equation:
yaw = atan2(Ax,Ay) * RAD_TO_DEG;
but it doesn't work. Do you have any advice?
|
I use an MPU9150, and also use a DCM and a complementary filter to compute roll, pitch and yaw. However, my yaw is not so smooth. How can I solve that problem?
I looked at the datasheet of the MPU9150, but I didn't see anything related to the sampling frequency of the magnetometer, unlike for the gyro and accelerometer.
|
I'm building an autonomous sail boat (ripped out the guts of an RC sail boat and replaced with my own mainboard etc.)
The controller board I have can accommodate both an MPU9150 and an HMC5883. Is there any advantage in using both magnetometers for a tilt-compensated heading? I'm thinking that I could compute the unit vector with soft/hard iron offsets removed for both, and then average the two vectors to get one slightly better one?
Not sure if it would yield a better result though.
|
I am looking to write and test my own control algorithms for tricopter flight. I am looking for a simulator that can simulate a tricopter but at the level of receiving simulated PWM and returning simulated gyro, compass and other sensor readings. Ideally it would also have graphics for visualization (need not be fancy). Ultimately, I want to port this to a real tricopter but at the moment I would just like to simulate it. Any suggestions for free simulators that are low level as I described?
|
I have the formulas to derive the RPM's of each wheel from the robot's linear velocity.
Now, I am trying to do the same thing for the acceleration (mainly angular acceleration).
For linear acceleration I am always assuming that the linear velocity of the wheels is the same as the robot's when the robot is moving in a straight line...according to physics. Am I right?
But angular acceleration seems more complicated, specially when the robot is following a curved path (not necessarily turning in place).
Any readings or ROS packages that deal with this acceleration issue?
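For reference, the relations in question can be sketched like this (a minimal pure-Python sketch; wheel radius and track width are placeholders). Because the mapping from (v, omega) to wheel speeds is linear and time-invariant, differentiating it gives the same mapping from (linear, angular) acceleration to wheel accelerations, on curves too:

```python
import math

def wheel_rpms(v, omega, wheel_radius, track_width):
    """Differential-drive inverse kinematics: robot linear velocity v
    (m/s) and angular velocity omega (rad/s) -> (left RPM, right RPM)."""
    v_right = v + omega * track_width / 2.0  # rim speed of right wheel
    v_left = v - omega * track_width / 2.0
    to_rpm = 60.0 / (2.0 * math.pi * wheel_radius)
    return v_left * to_rpm, v_right * to_rpm

def wheel_angular_accels(a, alpha, wheel_radius, track_width):
    """The same linear relations differentiated: robot linear accel a
    and angular accel alpha -> wheel angular accelerations (rad/s^2)."""
    a_right = a + alpha * track_width / 2.0
    a_left = a - alpha * track_width / 2.0
    return a_left / wheel_radius, a_right / wheel_radius
```

On a straight line (omega = alpha = 0) both wheels' rim speeds and accelerations equal the robot's, matching the assumption above.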
Thanks
|
I'm looking for a way to transport balls (diameter 50 mm) 220 mm up over a slope with a length of 120 mm. Currently I'm considering using a belt system, but I cannot seem to find a good one.
Because of space constraints within my robot, normally I would probably take a nylon belt and jam nails through it to make little slots and then use that. However, this would result in a considerable reduction in available space, as I would also have to account for the extra space required by the nails on the way back. Ideally there would be a way to reduce the space used by the nails on the return side.
Does anybody have a good solution for this?
|
I'd like to track my run in an indoor tennis court. GPS won't be available so I was thinking researching for other solutions:
Accelerometer: I concluded it's a no go because while playing tennis the player makes a lot of movements that include spinning his body that can alter the data.
Then I thought that a 3/4 point IR system might help but again from what I've understood it's hard for the IR system to track the movement since they won't be able to focus on the player.
So my final thought went to radio systems, but I couldn't find any info, and it's also hard for me to see a theoretical solution, at least on how I can measure the movement/speed of the player.
So here is my question: Is there any existing system that is able to track random movement of an object (athlete) and give info like speed and distance? is there anywhere resources about how such a system might be achieved or at least the exact technology used for it?
Any suggestions and ideas are greatly appreciated.
|
I want to connect the Tiva C (ARM Cortex-M4) to the sensor hub from TI, which has multiple sensors with different I2C addresses, such as the MPU9150, BMP180, and temperature sensors...
With a single I2C slave I can communicate successfully, but when my project involves interfacing the microcontroller with both the MPU9150 and the BMP180, I get stuck.
Can anybody suggest how the communication should work in this case?
|
I want to build a simple obstacle avoider robot, but this time I want it to be self-recharging so I am building a dock for this purpose, so I want it to be able to locate the dock and go for it when battery voltage is lower than a fixed value.
I am having trouble choosing the right components for locating the dock. I think I am going to use an IR emitter on the dock so the robot can head toward it when the battery is low (let's forget about the orientation problem for the moment, but if you have any thoughts about it, that would be helpful). However, I am not sure whether the robot will be able to detect the IR LED (or whatever) from a long distance (over 10 meters).
Is it possible to use this solution for this distance? If not, what do you suggest?
(If there is a simple ready solution to buy that's ok, let's say I have no budget limit)
|
I have a basic question because I'm trying to understand right now a concept that I thought it was obvious.
Looking at this video, he feeds back the state variable x and compares it with the input of the system, which is a force f.
Now, if I'm correct, it is only possible to feed back variables which share the same units, so I expect to drive a position in meters with an input variable in meters, with the difference then fed into the PID. Is the example in the video just meant to show how to use Simulink? Or am I wrong?
|
I am trying to make a nerf sentry gun to shoot my co-workers. I am building it more or less from scratch and have come to the part where I need plans to assemble it. I am looking for advice on how to mount the MG995 servos to allow them to tilt and pan. I originally thought about having a base with a metal rod through the middle and using a gear to control the pan functionality; the idea is that it would mimic a skateboard truck, with a gear that turns the rod through the middle and pivots the shooting mechanism. Another idea was to have the metal plate sit on top of the servo and use one of the attachments to fix it to the top plate. The problem I see with this is that the attachment is just a small piece of plastic, and over a short period of time I could see it wearing out, especially if the shooting mechanism is not centered perfectly. I also need a solution to make it tilt, but I think I have an idea for this: simply use a rod with a gear to turn the PVC pipe barrel.
Here are the servos I am using.
Sorry if this is the wrong forum for the question but I was unsure where else to look for some expert advice.
EDIT 1
For anyone interested, I found an example of someone doing almost exactly the same thing, with blueprints. I am going a slightly simpler/cheaper route and mounting the servo to the bottom of the spinning plate between the lazy susan plates I ordered. This way I don't have to buy the gears, which are rather expensive, and without the gears it may reduce some of the torque.
http://projectsentrygun.freeforums.org/build-progress-gladiator-ii-paintball-sentry-t130.html
|
I have this idea, or rather a very curious question, in my mind. I am nowhere near a professional, but I would like it answered.
We all know how wind turbines can be used to generate electricity. So is it possible to create a quadcopter that starts with some minimal power from a small battery, but in time sustains itself by generating its own electricity, keeping its rotors spinning without any other external supply?
|
I am currently building a hobby-weight (5.44 kg) robot and will be using 2 x 14.4 V cordless drill motors for my wheels.
The thing is, I keep reading about high amperages when working with R/C models such as quadcopters, BUT when I connect my cordless drill motor to my bench power supply and monitor the current draw, it never rises above 3.2 A, even when I try to stop the motor by hand.
Of course, in the arena, in the event of a stand-off, I have plastic wheels which will slip, so I am not too concerned about stall currents.
I am now left wondering whether I have miscalculated, or whether people make a lot of fuss about high currents for nothing. Or do these currents perhaps only really apply to brushless motors?
|
I am currently building a hobby-weight robot (5.44kg) and will be using 2 x 14.4v cordless drill brushed motors to drive my wheels.
I have read somewhere that due to "induced currents" when I turn the motor off (or reverse it presumably?) I should protect it by using a diode or a capacitor across the terminals.
Which should I use (capacitor or diode) and what are the parameters I need to consider for these components (voltage or current)?
Some answers to a similar question discussed capacitors but not diodes. Are diodes relevant?
Would I seriously damage the cordless drill (presumably quite tough) motor if I did nothing?
And don't motor controllers have any form of inbuilt protection for the motors anyway?
|
I am building a Hobby-weight robot and my weapon of choice is a spinning disk at the front.
As regards the disk, I was thinking of buying commercial (grinder-type) disks and changing the type of disk depending on the "enemy's" chassis construction material. So for instance I would use an aluminum cutting disk if the enemy's chassis is made of aluminum, and so on.
First question, therefore: do such disks do the job in practice (or do they break or fail to cut)?
Secondly, should I use a brushed or brushless motor for the disk? I actually have ESCs for both, but I feel a brushed motor will give me more torque while a brushless motor might give me more speed. So which is more important, speed or torque?
I do know - from my uncle who uses metal lathes - that machines that cut metal usually spin at a slower speed (drills, cutting wheels etc)- indeed he likes to say that metal working machines are safer than wood-working ones partially for this reason.
But I am a newbie and really would like to have an effective weapon if possible and breaking or not-cutting disks do not make such a weapon!
Also, is it normal practice to use one battery for everything (drive and weapon), or to have two separate batteries?
|
In a lab build I'm doing, I'm stuck at this problem, so I am fishing for suggestions.
I'm creating a turn-table type setup where I need to make readings (with a nanotube-tip probe I've already designed, similar to an AFM probe) on the very edge/circumference of a 10 cm radius disk (substrate).
The current hurdle is: I need to get the substrate disk to move circularly in steps of 0.1 mm displacement -- meaning, I occasionally need to STOP at certain 0.1mm-increment positions.
What would be a way I can achieve this, assuming an accurate feedback system (with accuracy of say ~0.1 mm, e.g., with quadrature optical encoders) is available if needed for closed-loop control?
Specs of commonly sold steppers don't seem to allow this kind of control. I'm at the moment trying to study how, e.g. hard disks achieve extreme accuracies (granted they don't have such large disks).
Certainly, direct-drive like I'm currently building (see below image) probably doesn't help!
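For a sense of scale, a 0.1 mm step at the edge of a 100 mm radius disk is a 1 milliradian rotation, so a bare 1.8 degree/step motor is indeed roughly 31 times too coarse without gearing or microstepping. A quick check of that arithmetic:

```python
import math

# One 0.1 mm arc step at the edge of a 100 mm radius disk
arc_step_mm = 0.1
radius_mm = 100.0
step_rad = arc_step_mm / radius_mm       # arc length / radius = 0.001 rad
step_deg = math.degrees(step_rad)        # ~0.0573 degrees

# Gear reduction (or microstepping factor) needed so one step of a
# common 1.8 deg/step motor moves the disk edge by no more than 0.1 mm
motor_step_deg = 1.8
reduction = motor_step_deg / step_deg    # ~31.4 : 1
```

A modest gearbox or 32x microstepping (both commonly available) therefore lands in the right range, with the quadrature encoder closing the loop on the remaining error.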
|
I am new to Morse and robotics.
This code controls the robot by giving it linear and angular velocities.
This is the scene description
from morse.builder import *
robot = ATRV()
motion = MotionVW()
motion.add_stream('socket')
robot.append(motion)
semanticL = SemanticCamera()
semanticL.translate(x=0.2, y=0.3, z=0.9)
robot.append(semanticL)
semanticR = SemanticCamera()
semanticR.translate(x=0.2, y=-0.3, z=0.9)
robot.append(semanticR)
motion.add_stream('socket')
semanticL.add_stream('socket')
semanticR.add_stream('socket')
env = Environment('land-1/trees')
env.set_camera_location([10.0, -10.0, 10.0])
env.set_camera_rotation([1.0470, 0, 0.7854])
and this is the control script
import pymorse
with pymorse.Morse() as simu:
simu.robot.motion.publish({"v": 3, "w": -1})
The robot moves well. But when I remove the semantic cameras from the scene description, the robot does not move. I am confused; they are just sensors, so why doesn't the robot move?
|
I am currently building a hobby-weight (5.44kg) robot. The weapon will be a vertical spinning disk at the front. It will probably be a commercial one from the hardware store or I could maybe get one made.
I have 2 cordless drill motors to drive my wheels, so I should be OK there, but I am still lost when it comes to what motor I should get for my weapon. I am now inclined to think it should be brushless, although I am still open to other opinions.
Can anyone please recommend a good motor (in-line brush-less) or brushed motor that will give me the speed and strength I need for the weapon?
|
I want to analyze a traffic scene. My source data is a point cloud like this one (see images at the bottom of that post). I want to be able to detect objects that are on the road (cars, cyclists, etc.). So first of all I need to know where the road surface is, so that I can remove or ignore those points, or simply run detection only above the surface level.
What are the ways to detect such a road surface? The easiest scenario is a straight and flat road - I guess I could try to fit a simple plane at the approximate position of the surface (I know quite surely that it begins just in front of the car), and because the road surface is not a perfect plane I would have to allow some tolerance around the plane.
More difficult scenario would be a curvy and wavy (undulated?) road surface that would form some kind of a 3D curve... I will appreciate any inputs.
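The plane-with-tolerance idea described above is essentially RANSAC plane fitting. A minimal pure-Python sketch (the tolerance value is a placeholder for the expected road roughness):

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three points: unit normal (a, b, c) and offset d
    with a*x + b*y + c*z + d = 0, or None if the points are collinear."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = (a * a + b * b + c * c) ** 0.5
    if norm < 1e-12:
        return None
    a, b, c = a / norm, b / norm, c / norm
    return a, b, c, -(a * p1[0] + b * p1[1] + c * p1[2])

def ransac_plane(points, tol=0.15, iters=200, seed=0):
    """RANSAC: fit planes to random triples and keep the one with the
    most points within `tol` of it; `tol` absorbs road roughness."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue
        a, b, c, d = model
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Points with positive signed distance above the found plane are then object candidates; for the curvy/wavy case the same idea is usually applied per patch, or the plane is replaced by a low-order surface model.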
|
I am building an autonomous robot using a PID control algorithm. So far, I have implemented PID using online resources/references. I am testing it by stabilizing one axis of the quadcopter. However, I have not been successful in stabilizing even one axis.
Description: My input for the PID is an angle value, i.e. the orientation of the quadcopter measured by an AHRS (a gyroscope that measures angles), and the motors take integer values as speeds. What I am doing is,
motor_right_speed = base_speed + adjusted_value;
motor_left_speed = base_speed - adjusted_value;
adjusted_value += PID_output;
Where adjusted_value is a buffer that accumulates or subtracts the PID output value based on whether the PID output is +ve or -ve.
I also tried,
motor_right_speed = base_speed + PID_output;
motor_left_speed = base_speed - PID_output;
Both don't seem to be working.
I have tested using a wide range of P gain values (from very small to very large), but the quadcopter only oscillates; it does not self-correct. Your help and suggestions would be greatly appreciated. Thanks!
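For comparison, a conventional PID loop feeds the controller output directly into the motor differential each cycle rather than accumulating it; accumulating the output adds an extra integration, which can easily produce the kind of oscillation described. A minimal sketch with placeholder gains and limits:

```python
class PID:
    """Textbook PID. The output is applied directly as the motor
    differential every cycle and recomputed from scratch -- it is the
    integral term inside the controller that does any accumulating."""

    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        # clamp so a large error cannot command an impossible motor speed
        return max(-self.out_limit, min(self.out_limit, out))
```

Used as: out = pid.update(0.0, roll_angle, dt); motor_right = base_speed + out; motor_left = base_speed - out.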
|
I am looking for some figures surrounding the specs of brushless motors and their relative efficiency (in power usage terms) for multi-copter use.
There are 4 basic specs for motors themselves:
- Motor width (EG 28mm)
- Motor height (EG 30mm)
- "KV" - RPM per volt supplied (EG 800KV)
- wattage (eg 300w)
This would then be a 28-30 800kv 300w motor.
What i am looking for is a chart containing:
- Motor spec
- pack voltage (eg 14.8v)
- Amps drawn @ various % throttle (10% to 100% say)
- static thrust from various propellers (11x5, 12x6 etc etc)
Does such information exist?
I know it's a bit subjective, as prop and motor designs vary slightly, but a baseline would be a start.
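While such a chart is missing, momentum theory at least gives an upper bound on static thrust from power and prop disk area, which can be derated by an assumed overall efficiency to get ballpark numbers. A sketch (the 0.7 efficiency is an assumption, not measured data):

```python
import math

def ideal_static_thrust_n(power_w, prop_diameter_m, rho=1.225, efficiency=0.7):
    """Momentum-theory static thrust T = (2*rho*A*P^2)^(1/3), with
    disk area A and effective power P, derated by an assumed overall
    efficiency. Real motor/prop data will differ; baseline only."""
    area = math.pi * (prop_diameter_m / 2.0) ** 2
    return (2.0 * rho * area * (power_w * efficiency) ** 2) ** (1.0 / 3.0)
```

For example, 300 W through an 11-inch (0.2794 m) prop comes out around 18-19 N (~1.9 kg of static thrust) under these assumptions, which is the kind of baseline the chart would then refine per motor and prop.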
|
I've got my hands on this laser range scanner but seem to have some problem receiving any output from it.
I can't find any guide on how to set it up on the internet, so I was wondering whether it is even possible to set it up on a Mac, or should I do it using Linux and ROS?
|
I am working on a project but I lack advanced programming knowledge, especially about genetic algorithms. I am developing a prototype using WEBOTS 7.4.3 for the simulation. The project is to use genetic algorithms to evolve the gait of a biped robot. I have developed a physical model, but I am still uncertain about the motor choice. For the algorithm part, I find it hard to understand how to set the algorithm parameters and how to determine the fitness function. Could you please suggest a fitness function?
Thank you for your help and efforts.
|
I have been going through a code base for multi agent motion planning. And I came across a recursive tree building algorithm for the agents. I haven't been able to figure out the algorithm. Does anyone know what it is called? Or any other similar kinds of algorithms so I could read more about it?
Here is what I got from the code:
The node of the tree is as follows -
struct AgentTreeNode {
    int begin;
    int end;
    float minX, maxX, minY, maxY;
    int left;
    int right;
};
Each node has a max and min value for x and y. And also a begin, end, left and right value.
Then the tree is split either horizontally or vertically based on which limits are longer (x or y). And an optimal value is found and agents are split.
Then these split agents are recursively built again.
Thank you very much.
|
I have the following code for the ros turtlesim:
#include <ros/ros.h>
#include <std_msgs/String.h>
#include "geometry_msgs/Twist.h"
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
void disruptcb(geometry_msgs::Twist msg) {
ros::NodeHandle pubHandle;
ros::Publisher publisher = pubHandle.advertise<geometry_msgs::Twist>("turtle1/cmd_vel", 1000);
ros::Rate loop_rate(2);
double dist1=(rand()%100);
double dist2=(rand()%100);
std::cout<<dist1<<std::endl;
dist1=dist1;
dist2=dist2;
msg.linear.x+=dist1;
msg.angular.z+=dist2;
std::cout<<msg<<std::endl;
ROS_INFO("hello" );
publisher.publish(msg);
}
int main(int argc,char** argv){
srand(time(NULL));
ros::init(argc,argv, "things_going_wrong");
ros::NodeHandle nh;
ros::Subscriber sub = nh.subscribe("/ros_1/cmd_vel",1000,&disruptcb);
ros::spin();
}
The idea behind this code is to introduce a random error so I can then practice error recovery in my code, but this node does not appear to do anything at all. I know that my other nodes are running, but this one doesn't exit; it just hangs. Does anybody know how to fix this?
|
It would be easy to understand if you imagine a robotic vacuum cleaner. (For some models) It goes back to a specific place automatically to recharge. Like this, I want to make a robot which automatically goes to the place where a specific signal(like infrared ray) is emitting.
Following is the scenario that I've imagined.
1. Set the IR emitter in a specific place in a room. It always emits infrared light.
2. I connect 4 IR receivers to my 4WD robot car - front, left, right, and back side.
3. They receive IR from the emitter. I estimate the distance from the emitter to each receiver from the intensity of the received IR.
4. With these values, the Arduino finds out which receiver is closest to the emitter and chooses the direction to go.
But I don't know whether this is possible, because IR is a kind of light, so I can't get the distance from the difference in arrival time (as with ultrasonic sensors). I searched several kinds of IR sensors, but they were only for sensing the possibility of collision.
So my questions are these:
Can I get the distance and the direction from the IR emitter to my Arduino device with an IR receiver?
If I can, then how many IR receivers do I need? And if I can't, what can I use to substitute for IR emitters and receivers?
I guess IR can be disturbed by sunlight or other light sources, so I guess I need some daylight filter. Do you think it's essential?
|
I am simulating a wheeled robot with six wheels that can be independently steered, like MER Opportunity.
The wheeled robot can perform throttling forward,
||---|| <--wheel orientation
|| ||
||---||
crab-motion,
//---// <--wheel orientation when heading is 45
// //
//---//
and turning on the spot.
//---\\ <--wheel orientation
|| ||
\\---//
My question is: Is it correct to say that I have 2 motion primitives? Throttling forward is basically crab-motion with heading zero.
|
I am writing a kinematics library in Go as part of my final year project. I am working with Product of Exponentials method and have successfully implemented the Forward Kinematics part of this. I need help with the Inverse Kinematics.
I understand the theoretical aspect of it. I would like a numerical example where actual numbers are used for the Paden-Kahan subproblems, like the ones dealt with in "A Mathematical Introduction to Robotic Manipulation" by Murray, Li and Sastry [freely available online as a PDF].
I specifically need help with knowing what p and q should be when trying to solve the inverse kinematics. The book just says: given points p, q relative to the axis of rotation of the joint. But how do you know these points in practice? When the robot is actually moving, how do you keep track of these points? For these reasons I need a numerical example to understand it.
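To make this concrete, here is a small numerical instance of subproblem 1 for an axis along z through the origin. My reading of the setup (not a quote from the book): p is a point rigidly attached to the distal link at the zero configuration, typically the next joint's centre or the tool point, and q is its image under the desired end-effector transform, so neither point is tracked on the moving robot; both come from the reference geometry and the goal pose.

```python
import math

def subproblem1_z(p, q):
    """Paden-Kahan subproblem 1, specialised to a joint axis along z
    through the origin: find theta such that Rot(z, theta) * p = q.
    Only the components in the plane normal to the axis matter."""
    cross_z = p[0] * q[1] - p[1] * q[0]   # z component of p x q
    dot = p[0] * q[0] + p[1] * q[1]       # planar dot product
    return math.atan2(cross_z, dot)

# Numerical example: rotating p = (1, 0, 0.5) about z so it lands on
# q = (0, 1, 0.5) requires theta = pi/2 (the z components must match,
# since rotation about z cannot change them).
theta = subproblem1_z((1.0, 0.0, 0.5), (0.0, 1.0, 0.5))
```

For a general axis the same computation is done after translating a point on the axis to the origin and projecting p and q onto the plane normal to the axis direction.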
|
I've just started taking a robotics course and I am having a little problem.
I need to rotate the $O_{i-1}$ coordinate system into a position where $X_{i-1}$ will be parallel with $X_i$.
The transformation matrix is given, but I have no idea how I can figure out this transformation matrix from the picture that can be found below.
Actually, I know why the last vector is [0 0 0 1] and the previous vector is [0 0 1 0], but I can't figure out why the first vector is [$\cos q_i$ $\sin q_i$ 0 0] and the second [$-\sin q_i$ $\cos q_i$ 0 0].
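One way to see it: column k of a rotation matrix is the image of the k-th basis vector. So the first column $(\cos q_i, \sin q_i, 0, 0)$ is simply where the old x-axis lands after rotating by $q_i$ about z, and the second column is the image of the old y-axis. A quick numerical check of that reading:

```python
import math

def rot_z(q):
    """Homogeneous rotation about z. Column k is the image of the
    k-th basis vector: column 0 = (cos q, sin q, 0, 0) is the rotated
    x-axis, column 1 = (-sin q, cos q, 0, 0) is the rotated y-axis."""
    c, s = math.cos(q), math.sin(q)
    return [[c, -s, 0, 0],
            [s, c, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def apply(matrix, vec):
    return [sum(matrix[i][j] * vec[j] for j in range(4)) for i in range(4)]
```

Rotating the x-axis by 90 degrees should give the y-axis, which picks out exactly the first column of the matrix.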
|
I was wondering if there was a good book or paper that surveys current techniques in local navigation? The earliest one I could find was from 2005 and I was hoping to find something more recent.
I have worked with certain approaches such as the dynamic window approach and the velocity obstacles approach. I'm hoping for some book or paper to give me a broader perspective to the problem of local navigation which I believe has been fairly robustly solved by a number of autonomous driving companies.
Thank you.
|
I'm currently designing a robot for my undergraduate project. One of the tasks of this robot is to follow a wall. For this purpose I'm using a PID control system, where the reference is given by an ultrasonic sensor. My problem is that I'm having a hard time tuning the PID. I know I can find the P coefficient fairly easily by plotting the desired set-point range against the desired motor output speed. Even then the robot is not so stable, so I thought of adding the D and I parts of the PID. But how do I find rough values for these coefficients without just trying out random values (manual tuning)? Thank you so much. Much appreciated.
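One systematic alternative to random manual tuning is the classic Ziegler-Nichols procedure: with I and D off, raise Kp until the wall-distance error oscillates steadily, record that gain (Ku) and the oscillation period (Tu), then compute all three gains from the standard table. A sketch:

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols closed-loop tuning. Ku is the ultimate
    gain (P-only gain at which the system oscillates steadily) and Tu
    the oscillation period. Standard table: Kp = 0.6*Ku, Ti = Tu/2,
    Td = Tu/8, expressed here as parallel-form Kp, Ki, Kd."""
    kp = 0.6 * ku
    ki = kp / (tu / 2.0)   # = 1.2 * Ku / Tu
    kd = kp * (tu / 8.0)   # = 0.075 * Ku * Tu
    return kp, ki, kd
```

The result is only a starting point (Ziegler-Nichols tends toward aggressive, oscillatory responses), but it replaces blind guessing with two measurable quantities.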
|
I've built a simple wheeled robot based on two continuous-rotation servos, controlled by a Raspberry Pi running ROS Groovy, with a smartphone mounted on top to provide additional sensors. I'd like to situate the bot in a room and have it move to various points on command. I don't have a laser range finder, but I do have a good ultrasonic range finder and Kinect sensors.
What are the typical ROS setup for this?
The idea I'm thinking of is to personally (i.e. manually) map my room using the Kinect, and then use this map with only the ultrasonic range finder and IMU on the lightweight robot. Would this be possible?
|
By watching this video which explains how to calculate the classic Denavit–Hartenberg parameters of a kinematic chain, I was left with the impression that the parameter $r_i$ (or $a_i$) will always be positive.
Is this true? If not, could you give examples where it could be negative?
|
So, I am designing a rover that will navigate to a rock, and then calculate the height of the rock. Currently, my team's design involves using an ultrasonic rangefinder and lots of math. I was interested in what sensors you would use to solve this problem, or how you would go about it? Assume that the rover has already located the rock.
Additional Info: We are using an Arduino Uno to control our rover. It is completely autonomous.
|
I'm building camera device which is able to take pictures of paragliders in mid air.
To let the camera know where the glider is I thought about using GPS data from the pilot's smartphone.
My question is: What are possible ways to transmit the GPS data to the groundstation and which can be considered a good solution?
I thought about sending the data to a server via some mobile network, but a direct communication solution would be preferable.
The pilot has pretty good mobile reception mid-air, and the maximum distance between pilot and ground station is around 3 km.
|
I am a web developer. I am fascinated by quadrocopters and am trying to learn how to build one; basically, I am trying to jump into the robotics field. I don't have much electric circuit and electronics knowledge, so I did some research on how to build one and what type of knowledge is required to develop such a flying machine, and I started learning the basics of electronics from Lessons In Electric Circuits by Tony R. Kuphaldt.
The books are very interesting, but I could not find a way to practise what I learn from them; basically I am just going through the material and understanding it little by little. What I want to know, from your experience, is the right and effective way to learn electronics and electric circuits, and what I should do now to increase my learning speed so that I can achieve my goal.
While I was researching, I came across topics such as mathematical modelling, i.e. modelling the quadrocopter first and then implementing it for real. How can I gain the knowledge to model something mathematically and implement it in real life? How much math, and what areas of mathematics, do I need to learn, and how can I learn them?
Now you have an idea of what I want to learn and achieve. Can you please suggest a road map or the steps I need to take to gain the knowledge and skill to develop myself, so that in the near future I will be able to build such flying machines on my own?
|
Instantaneous rate of change of displacement is given by,
v(t) = (s(t + dt) - s(t))/dt, where dt tends to 0
while average rate of change of displacement is given by,
v(t) = (s(t[n]) - s(t[n-1]))/(t[n] - t[n-1])
The first one gives the slope or derivative of a displacement function at a particular instant of time and thus varies with time. I was wondering how it is going to help me calculate the velocity of my robot's end effector, which is the foot of the leg of the robot (bipedal). If I make a reading of the position of the robot every 1 ms to keep the approximation as accurate as possible, my instantaneous velocity would be zero, wouldn't it? Since my robot wouldn't have moved anywhere in 1 ms of time. Agreed, 't' would increment as t+dt, dt == 0.001 s. Then v(t) would be v(0.001) = s(0.002) - s(0.001), which is zero, because there is no displacement in that small time frame, right? Am I doing something wrong here? Or, on the other hand, do I just use the average rate of change?
I have this question since, if there is a manipulator, in my case the foot of my robot, and its trajectory is given by a 3x3 homogeneous matrix,
[{c(t),-s(t), 0},
{s(t), c(t), 0},
{ 0, 0, 1}], where c, s are cos(theta) and sin(theta) respectively,
if on paper, this is differentiated, this would give me a spatial/body velocity matrix as
[{0, -d(theta)/dt, 0},
{d(theta)/dt, 0, 0},
{ 0, 0, 0}]
So how do I compute this differentiation in code? I just need some sort of pseudocode.
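A numerical way to do exactly this (my sketch, in Python for brevity): finite-difference the transform between two consecutive samples, then multiply by the transpose of the current rotation to recover the skew-symmetric velocity matrix. Note that the displacement over 1 ms is small but not zero, and dividing by dt turns it back into a finite velocity:

```python
import math

def rot2(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s, c, 0.0],
            [0.0, 0.0, 1.0]]

def matrix_derivative(theta_prev, theta_now, dt):
    """Element-wise finite difference (T(t) - T(t - dt)) / dt: each
    entry changed by a tiny amount, and dividing by the tiny dt
    recovers a finite derivative."""
    a, b = rot2(theta_prev), rot2(theta_now)
    return [[(b[i][j] - a[i][j]) / dt for j in range(3)] for i in range(3)]

def spatial_velocity(theta_prev, theta_now, dt):
    """omega_hat = Tdot * T^T: this is the skew-symmetric matrix
    [[0, -w, 0], [w, 0, 0], [0, 0, 0]] with w ~ d(theta)/dt."""
    td = matrix_derivative(theta_prev, theta_now, dt)
    t = rot2(theta_now)
    return [[sum(td[i][k] * t[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

With theta moving at 1 rad/s and 1 ms sampling, the recovered off-diagonal entries come out very close to +/-1 rad/s, not zero.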
|
I'm building a robot that uses a BeagleBone Black; however, I have several different USB devices that I want to connect to it (microphone, USB sound device and some other things). Now, I have heard that the USB output of the BeagleBone doesn't supply more than 0.1 A, and the combined draw of these USB devices is likely to exceed this by a fair margin. So I started looking for powered USB hubs to use instead. However, these tend to be powered by 220 V, and my robot currently only has a 12 V power supply plus a converter to 5 V for the BeagleBone. Given the size, expense and inefficiency of converting power from 12 V up to 220 V and then back down again, that doesn't seem like a good approach. Is there a good way to fix this?
|
I'm trying to track a simple robot (e.g. Arduino, Raspberry Pi, even toys) in a room using fixed-location Kinect sensor(s) and cameras at different parts of the room. How might one usually do this?
Edit 1: More specifically, I want to know the position (and if possible, orientation) of an moving object in the room using one or more cameras or depth sensors. I'm new to the area, but one idea might be to use blob or haar to detect the moving object and get its location from kinect depth-map, and I'm trying to find what package I can use for that end. But for navigation to work I'd have to pre-map the room manually or with kinect. I can put some sensors on this tracked moving object, e.g. IMU, sonar, but not a kinect. I am allowed full PCs running ROS/opencv/kinect sdk in the environment, and I can wirelessly communicate with the tracked object (which is presently a raspberry pi running ROS groovy on wheels)
|
We are working on a project which requires us to detect and hit a ball. We are trying to accomplish the task by detecting the position of ball by processing the input from a camera. The problem is that we are required to do this in very bright lights. The bright lights are making it difficult to detect the white colored ball.
Is there a way we can write the code such that it automatically reduces the intensity of lights in the image?
Is there an efficient way to extract only the V component from the HSV image?
We have very limited experience with image processing, so any alternative approach to detecting the object would also be helpful.
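On the second question: the HSV value channel is just the per-pixel maximum of R, G and B, so it can be extracted without a full HSV conversion (in OpenCV, cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HSV))[2] would give the same channel). A minimal pure-Python sketch of that equivalence:

```python
def value_channel(rgb_image):
    """HSV's V (value) channel equals the per-pixel maximum of the
    R, G and B components, so it can be pulled out directly.
    rgb_image: rows of (r, g, b) tuples with components in 0-255."""
    return [[max(pixel) for pixel in row] for row in rgb_image]
```

Thresholding on V alone tends to confuse "white ball" with "bright light", so combining a high-V condition with a low-saturation condition (S is low for white and for glare alike, but glare usually also saturates V at 255) is a common refinement.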
|
I am constructing a 5.44Kg Hobby-weight battle robot and one of the safety rules is that the robot must have a power switch that turns off power to the motors (and weapon).
The robot has three sub-systems; the drive motors (one battery), the weapon (another battery) and some lighting (a small 9 volt battery).
I have read that since all these will be connected to the same receiver it is important to have all the electronics sharing a common ground for everything to work properly.
Now I know that usually it is the "live" wire that is connected to the switch, but I was thinking of hitting two birds with one stone and connecting all the ground wires (rather than the live wires) to the switch. In this way I still turn off power and also have a common ground. In terms of safety (shorts) etc I am not too concerned because I am using XT 60 connectors and have been careful to use only female plugs for the power leads (so no prongs are visible).
It seems to me that it should work and still be safe enough especially since I am not dealing with mains voltage levels here, but on the other hand I don't want to look stupid.
Does this way of connecting to the switch make sense, or am I violating some unwritten law? Is this normal practice? Would it affect the circuitry in any way to have the grounds connected together?
I was also thinking of using a switch from a PC power supply; as far as I know this is rated for reasonably high currents. In my case I will have 3 cordless motors, each of which might be drawing up to 5 amps when under load, so say 15 amps in all. Has anyone out there ever used such switches or did you buy high current ones? In that case what should I ask for?
Thanks.
|
I'm trying to import the tutorial robot given at this link
However this gives the following error:
Error [Param.cc:181] Unable to set value [1,0471975511965976] for key[horizontal_fov]
Error [Param.cc:181] Unable to set value [0,100000001] for key[near]
Error [parser_urdf.cc:2635] Unable to call parseURDF on robot model
Error [parser.cc:278] parse as old deprecated model file failed.
Error [parser_urdf.cc:2635] Unable to call parseURDF on robot model
Error [parser.cc:278] parse as old deprecated model file failed.
Error [parser.cc:278] parse as old deprecated model file failed.
This suggests something is wrong with the parsing, but it does not actually point to any line of my code (the example is only 103 lines long).
<link name='chassis'>
<pose>0 0 .1 0 0 0</pose>
<collision name='collision'>
<geometry>
<box>
<size>.4 .2 .1</size>
</box>
</geometry>
</collision>
<visual name='visual'>
<geometry>
<box>
<size>.4 .2 .1</size>
</box>
</geometry>
</visual>
<collision name='caster_collision'>
<pose>-0.15 0 -0.05 0 0 0</pose>
<geometry>
<sphere>
<radius>.05</radius>
</sphere>
</geometry>
<surface>
<friction>
<ode>
<mu>0</mu>
<mu2>0</mu2>
<slip1>1.0</slip1>
<slip2>1.0</slip2>
</ode>
</friction>
</surface>
</collision>
<visual name='caster_visual'>
<pose>-0.15 0 -0.05 0 0 0</pose>
<geometry>
<sphere>
<radius>.05</radius>
</sphere>
</geometry>
</visual>
</link>
<link name="left_wheel">
<pose>0.1 0.13 0.1 0 1.5707 1.5707</pose>
<collision name="collision">
<geometry>
<cylinder>
<radius>.1</radius>
<length>.05</length>
</cylinder>
</geometry>
</collision>
<visual name="visual">
<geometry>
<cylinder>
<radius>.1</radius>
<length>.05</length>
</cylinder>
</geometry>
</visual>
</link>
<link name="right_wheel">
<pose>0.1 -0.13 0.1 0 1.5707 1.5707</pose>
<collision name="collision">
<geometry>
<cylinder>
<radius>.1</radius>
<length>.05</length>
</cylinder>
</geometry>
</collision>
<visual name="visual">
<geometry>
<cylinder>
<radius>.1</radius>
<length>.05</length>
</cylinder>
</geometry>
</visual>
</link>
<joint type="revolute" name="left_wheel_hinge">
<pose>0 0 -0.03 0 0 0</pose>
<child>left_wheel</child>
<parent>chassis</parent>
<axis>
<xyz>0 1 0</xyz>
</axis>
</joint>
<joint type="revolute" name="right_wheel_hinge">
<pose>0 0 0.03 0 0 0</pose>
<child>right_wheel</child>
<parent>chassis</parent>
<axis>
<xyz>0 1 0</xyz>
</axis>
</joint>
</model>
</sdf>
This is on Ubuntu 14.04. Is there any hint at what I'm doing wrong, or what information can I provide to help find a solution?
|
I'm currently developing SLAM software on a robot, and I tried the scan matching algorithm to solve the odometry problem.
I read this article :
Metric-Based Iterative Closest Point Scan Matching
for Sensor Displacement Estimation
I found it really well explained, and I strictly followed the formulas given in the article to implement the algorithm.
You can see my Python implementation here:
ScanMatching.py
The problem I have is that, during my tests, the right rotation was found, but the translation was totally false. The values of translation are extremely high.
Do you guys have any idea what the problem in my code could be?
Otherwise, should I post my question on Stack Overflow or on Mathematics Stack Exchange?
The ICP part should be correct, as I tested it many times, but the Least Square Minimization doesn't seem to give good results.
The parts that might be problematic are the function getAXX() to getBX() (starting at line 91).
As you may notice, I used many decimal.Decimal values, because sometimes a plain float was not big enough to hold some intermediate values.
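For what it's worth, here is the closed-form (SVD-based, Kabsch-style) solution of the 2D rigid least-squares step that can serve as a cross-check against the getAXX()..getBX() functions. It's a generic NumPy sketch of rigid point-set alignment, not taken from the article:

```python
import numpy as np

def rigid_transform_2d(A, B):
    """Least-squares R (2x2) and t (2x1) such that R @ A + t ~= B.

    A and B are 2xN arrays of corresponding points. This is the standard
    SVD/Kabsch solution: center both sets, take the SVD of the cross
    covariance, and recover the translation from the centroids.
    """
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```

If the rotation comes out right but the recovered translation explodes on noise-free synthetic data, the bug is almost certainly in the translation terms of the minimization rather than in ICP itself.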
|
I have been working with the Velocity Obstacles concept. Recently, I came across a probabilistic extension of this and couldn't understand the inner workings.
Source: Recursive Probabilistic Velocity Obstacles for Reflective Navigation http://www.morpha.de/download/publications/FAW_ASER03_Kluge.pdf
What do the equations at the bottom and the top mean? Vij is the relative velocity of agent i with respect to agent j; ri & ci and rj & cj are their respective radii and centers.
Update:
What does inf(ri + rj) and sup(ri + rj) mean? Does it mean that I should define a function that goes from 1 to 0 from inf to sup? And if not, then how do I calculate the value of PCC at any given point?
|
I'm reading this PDF. The dynamic equation of a single arm is provided, which is
$$
l\ddot{\theta} + d\dot{\theta} + mgL\sin(\theta) = \tau
$$
where
$\theta$ : joint variable.
$\tau$ : joint torque
$m$ : mass
$L$ : distance between the centre of mass and the joint.
$d$ : viscous friction coefficient
$l$ : inertia seen at the rotation axis.
I would like to use P (proportional) controller for now.
$$
\tau = -K_{p} (\theta - \theta_{d})
$$
My Matlab code is
clear all
clc
t = 0:0.1:5;
x0 = [0; 0];
[t, x] = ode45(@ODESolver, t, x0);
e = x(:,1) - (pi/2); % Error theta1
plot(t, e);
title('Error of \theta');
xlabel('time');
ylabel('\theta(t)');
grid on
For solving the differential equation
function dx = ODESolver(t, x)
dx = zeros(2,1);
%Parameters:
m = 2;
d = 0.001;
L = 1;
I = 0.0023;
g = 9.81;
Kp = 5;                      % proportional gain (value chosen for illustration)
tau = -Kp*(x(1) - (pi/2));   % P control law: tau = -Kp*(theta - theta_d)
dx(1) = x(2);
dx(2) = (tau - d*x(2) - m*g*L*sin(x(1)))/I;
end
The resulting error plot (figure not shown) does not settle at zero.
My question is: why does the error not approach zero as time goes on? This is a regulation task, so the error should approach zero.
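To check this, note that at the closed-loop equilibrium the P term must balance gravity: $K_p(\theta_d - \theta) = mgL\sin(\theta)$. A quick bisection with an illustrative gain ($K_p = 30$, my choice, not from the original code) shows the steady state is well short of $\theta_d$:

```python
import math

m, L, g = 2.0, 1.0, 9.81
Kp, theta_d = 30.0, math.pi / 2          # Kp is an illustrative value

# At equilibrium the P term balances gravity: Kp*(theta_d - th) = m*g*L*sin(th)
f = lambda th: Kp * (theta_d - th) - m * g * L * math.sin(th)

lo, hi = 0.0, theta_d                    # f(lo) > 0, f(hi) < 0
for _ in range(60):                      # bisection on the equilibrium condition
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

steady_state_error = theta_d - 0.5 * (lo + hi)   # nonzero: gravity wins
```

A pure P controller on this plant always leaves such a gravity-induced steady-state error; an integral term (or gravity compensation) is needed to drive it to zero.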
|
I bought 2 brushed motor controllers from China to use with my hobby-weight battle robot (http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html).
These are intended for use with my 2 cordless drill motors which will be driving the left and right wheel respectively. The robot will therefore be steered in "tank mode" by varying the speed and direction of rotation of the 2 motors using the two joysticks on my Turnigy 9x transmitter.
My question is: I have seen videos on youtube where people calibrate brushless motor controllers (ESCs) using some system of pushing the joystick on a standard transmitter forward and listening to tones and then doing the same for reverse and so on.
However when I asked the suppliers about a similar procedure for these brushed controllers, all they could say is that they did not need calibration. The exact words were "It seems that you're talking about transmitter for copters,but this ESC is for RC car or boat. You pull the trigger, it goes forward, you push the trigger, it reverse. And you don't need to calibrate it, just plug it, then it can work."
My transmitter is not one of those gun shaped ones used for cars. So am I in trouble with these controllers or should they work correctly out of the box as the supplier seems to be implying?
You may fairly ask why have I not just tried this out and the simple answer is that my LIPO charger has not yet arrived and I therefore cannot power anything up as yet.
|
I bought 2 brushed motor controllers from China to use within my hobby-weight battle robot (http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html).
These are intended for use with my 2 cordless drill motors, which will be driving the left and right wheel respectively. The robot will therefore be steered in "tank mode" by varying the speed and direction of rotation of the 2 motors using the two joysticks on my Turnigy 9x transmitter.
I am seeking to refine the robot and make it easier to operate, so does anyone know of a way to mix the two motor channels so that I get a single-joystick steering system? My transmitter has 9 available channels, so if this is part of a solution then I am fine with it. I also have an Arduino available if need be.
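For reference, the standard "arcade mix" combines one stick's forward/back axis with its left/right axis into left and right motor commands. A minimal sketch in Python (the real thing would run on the Arduino, but the arithmetic is the same):

```python
def arcade_mix(throttle, steering):
    """Mix one joystick (throttle: fwd/back, steering: left/right, both in
    [-1, 1]) into left/right tank-drive motor commands in [-1, 1]."""
    left = throttle + steering
    right = throttle - steering
    # Rescale instead of clipping so the left/right ratio (turn radius)
    # is preserved when the sum exceeds full scale.
    scale = max(1.0, abs(left), abs(right))
    return left / scale, right / scale

# Full forward with no steering drives both wheels forward equally;
# zero throttle with full steering spins the robot in place.
```

Many transmitters (including the Turnigy 9x) can also do this mixing internally via an elevon/V-tail mix, with no Arduino needed.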
|
I bought 2 brushed motor controllers from China for my hobby-weight battle robot (http://www.banggood.com/ESC-Brushed-Speed-Controller-For-RC-Car-Truck-Boat-320A-7_2V-16V-p-915276.html).
These are intended for use with my 2 cordless drill 14.4v motors which will be driving the left and right wheel respectively.
I will be using 4S LIPOs which (when fully charged) have a voltage of 16.8V. Can someone put my mind at rest that the .8 volt excess is unlikely to damage the controller (which is rated for 7.2v - 16v)?
Also is the fact that the motor controllers are rated for 320Amp likely to damage my motors?
To be honest, I am not very clear on current and how it is drawn from a LIPO battery. For instance, would connecting a LIPO directly to my motor result in a massive discharge, or does the motor just "take what it needs" in terms of current? Can someone kindly point me to an article which casts some light on the subject, or even more kindly explain it to me here?
|
I am building a drone using the Raspberry Pi, and I am using 6 PID controllers to control the speed and the value of each angle. Can I use a recurrent neural network (RNN) or another type of neural network to stabilize the angles? If so, what could the training data be? What type of neural network (NN) is best suited for this kind of application?
|
As far as I understand, an AHRS uses orientation reference vectors to detect orientation error, and we can use the magnetometer to correct yaw drift. But I see from my ak895 magnetometer that the data is not very stable; it fluctuates continuously.
How can we use this data in an AHRS algorithm?
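A common first step is to low-pass filter the raw magnetometer readings before they feed the yaw correction. A minimal exponential-moving-average sketch (alpha is a tuning choice, not from any datasheet):

```python
def ema_filter(samples, alpha=0.1):
    """Exponential moving average: y[k] = (1-alpha)*y[k-1] + alpha*x[k].

    Smooths sensor jitter; alpha trades responsiveness (high alpha)
    against noise rejection (low alpha).
    """
    out, y = [], None
    for x in samples:
        y = x if y is None else (1 - alpha) * y + alpha * x
        out.append(y)
    return out
```

In a full AHRS (e.g. Mahony/Madgwick style), the filter's feedback gain on the magnetometer term plays a similar role, so heavy pre-filtering is often unnecessary; hard/soft-iron calibration usually matters more than the jitter itself.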
|
Based on the Wikipedia page on ESCs, an ESC generally accepts a nominal 50 Hz PWM servo input signal whose pulse width varies from 1 ms to 2 ms:
http://en.wikipedia.org/wiki/Electronic_speed_control
For our project, we integrate a flight controller, a Naza-M Lite, into our UAV, and we want to implement position control. We already have localization, and we can control the quadrotor by applying servo pulse widths to the roll, pitch, yaw and throttle channels. Since the ESC only accepts 50 Hz, will the PID controller work at only 50 Hz?
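For context, running the PID at the same 50 Hz as the servo signal just means the loop period in the integral and derivative terms is fixed at dt = 1/50 s. A minimal sketch (gains are placeholders):

```python
class PID:
    """Discrete PID evaluated once per servo frame (50 Hz -> dt = 0.02 s)."""

    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i, self.prev_e = 0.0, None

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.i += e * self.dt                      # rectangular integration
        d = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d
```

Whether 50 Hz is fast enough depends on the closed-loop bandwidth you need; for an outer position loop around an inner attitude controller it is commonly sufficient.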
|
The BeagleBone Black which I work on has only 2 I2C buses. Say, for some crazy reason, I need to use 8 I2C buses. I don't want to chain any of the devices. The idea is to keep every device's SDA line separate and use a shared SCL line, so I can use the same clock and have as many SDA lines as I want. Since the SCL clock is hardware controlled, there shouldn't be any major issues here. The GPIO can do around 2.5 MHz of switching, so I am happy with that.
If that works out, I can spawn 8 threads to talk on 8 I2C lines, making my solution faster!
Do you think it's doable? I would like to hear from you, as this idea of using one SCL and GPIOs as SDA lines just popped into my head and I thought of sharing it.
Cheers!
|
I have just sized the DC motors I want to use (matching my robot and its intended applications; my figures include a 50% margin to account for friction in the reducers and other losses). Now I need to actually choose the exact motors to buy from the manufacturer (I am targeting Maxon motors, as I am not an expert and want no problems). I have a few down-to-earth questions about linking the mechanical needs to the electrical characteristics:
1. Maxon states a "nominal voltage" in the characteristic sheets. Is that the voltage you should apply to the motor? This may be a dumb question, but I have followed the full Maxon e-learning course and read other tutorials on the web, and I could not find this information anywhere. Can anyone who knows about motors confirm?
2. As far as I understand, the nominal torque corresponds to the maximum torque the motor can sustain continuously. So I guess, as a rule of thumb, I should find a motor with a nominal torque equal (or close) to my max torque (after reduction). Right?
3. I chose a motor reference (310005, found here) which has a stated power of 60 W; as the nominal voltage is 12 V, I was expecting a nominal current of 5 A, but it states 4 A. Where am I wrong?
4. The motor I chose has nominal speed = 7630 rpm and nominal torque = 51.6 mNm. My needs are max speed = 50.42 rpm / max torque = 10620 mNm. This means a reduction factor of 151 for speed and 206 for torque. Should I choose a gear closer to 151 or 206?
5. What is the "rated torque" mentioned when choosing a gear? I know my input torque (torque on the motor side) and my output torque (torque on the system side); does it correspond to either of these two?
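For question 4, the two ratios come from simple division. Here is the arithmetic, plus a mechanical-power sanity check (my own back-of-envelope, using only the figures quoted above):

```python
import math

nominal_speed_rpm, nominal_torque_mNm = 7630.0, 51.6   # motor datasheet values
need_speed_rpm, need_torque_mNm = 50.42, 10620.0       # my load requirements

ratio_speed = nominal_speed_rpm / need_speed_rpm       # ~151: ratio matching speeds
ratio_torque = need_torque_mNm / nominal_torque_mNm    # ~206: ratio matching torques

# Mechanical power P = torque * angular_velocity. A gearbox can only lose
# power, so the required output power must not exceed the motor's power.
p_motor = nominal_torque_mNm / 1000 * nominal_speed_rpm * 2 * math.pi / 60  # watts
p_load = need_torque_mNm / 1000 * need_speed_rpm * 2 * math.pi / 60         # watts
```

If p_load comes out above p_motor (as it does with these numbers), no single reduction ratio can satisfy both the speed and torque requirements at once, which is exactly why the two ratios disagree.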
I have followed some theoretical and practical courses on the web but I find it hard to find answers to my down to earth question...
Thanks,
Antoine.
|
We are using a Naza-M Lite for our flight controller, without GPS. Localization is obtained through our RGB-D camera sensor. We are able to teleoperate and even implement PID controllers for the roll, pitch, yaw and throttle channels of our quadrotor. However, we do not know the plant model, because what we are inputting from the Arduino to the Naza-M Lite are servo PWM values ranging from 1000 to 2000.
For throttle: 1500 altitude hold, 2000 maximum throttle, 1000
minimum throttle
For Pitch, Roll, Yaw: 1500 maintain 0 angle, 2000 and 1000 moves the
quadrotor towards its respective axes.
However, even at 1500 on every channel, the quadrotor drifts, maybe because indoor air currents push the quadrotor; once it gains momentum, it keeps drifting. We are having trouble tuning this because we do not know the relationship between the output and the position. If the output were velocity, it would have been easier, but in our case it is not. Is there a way to find the plant model of the Naza-M Lite, and how can we tune this?
|
I am trying to create a model for the NAO robot's motors. The figure below shows the step response of the knee motor. As far as I know, the NAO internally uses a PID controller to control the motor. I have no control over the PID or its parameters, so I would like to treat the motor including the PID as a black box. Theoretically it should be possible to model PID + motor as a $pt_2$ system, i.e. a second-order LTI system.
A $pt_2$ system is defined by the following differential equation:
$$T^2\ddot{y}(t) + 2dT\dot{y}(t)+y(t) = Ku(t)$$.
I tried fitting a $pt_2$ model but was unable to find good parameters.
Any idea what model to use for this kind of step response?
edit:
I tried modifying the equation to add a maximum joint velocity like this:
$$T^2\ddot{y}(t) + (\frac{2dT\dot{y}(t) + m - |2dT\dot{y}(t) - m|}{2})+y(t) = Ku(t)$$
where $m$ is the maximum velocity. The fraction should be equivalent to $min(2dT\dot{y}(t), m)$.
However, I am not sure if this is the correct way to introduce a maximum joint velocity. The optimizer is unable to find good parameters for the limited-velocity formula. I am guessing that is because the min() introduces a region where parameter changes do not cause any change in the optimization error.
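To sanity-check the modified equation, here is a small forward-Euler simulation of it (parameters are illustrative, not fitted to the NAO data), which confirms the min() term only saturates the damping during fast motion and still settles at $Ku$:

```python
# Simulate T^2*ydd + min(2*d*T*yd, m) + y = K*u with semi-implicit Euler.
# Parameters below are illustrative placeholders, not fitted values.
T, d, K, m = 0.1, 0.7, 1.0, 0.5
dt, u = 1e-3, 1.0

y, yd = 0.0, 0.0
ys = []
for _ in range(int(5.0 / dt)):
    damping = min(2 * d * T * yd, m)      # equals the (a + m - |a - m|)/2 term
    ydd = (K * u - y - damping) / T**2
    yd += ydd * dt                        # update velocity first (stable)
    y += yd * dt
    ys.append(y)
# At rest yd = 0, the min() is inactive, and y settles at K*u as in a plain pt2.
```

Note the min() only clips the damping force, so it is not literally a velocity limit; a hard limit would clamp $\dot y$ itself after each integration step, which is also easier for an optimizer since it has no flat region in the parameters.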
|
I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problem). I have a few down to earth questions about linking the mechanical needs to the electrical characteristics, among them:
Question #1:
Maxon (or the other manufacturers) states a "nominal voltage" in the characteristic sheets. Is that the voltage you should apply to the motor? This may be a dumb question but I have followed the full maxon e-learning course and read about other tutorials on the web and I could not find this information anywhere. Can anyone who knows about motors confirm?
I have followed some theoretical and practical courses on the web but I find it hard to find answers to my down to earth question...
|
I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problem). I have a few down to earth questions about linking the mechanical needs to the electrical characteristics, among them:
Question #2:
As far as I understand, the nominal torque corresponds to the maximum torque the motor can sustain continuously. So I guess, as a rule of thumb, I should find a motor with a nominal torque = my max needed torque (after reduction), or around. Right?
|
I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problem). I have a few down to earth questions about linking the mechanical needs to the electrical characteristics, among them:
Question #3:
I chose a motor reference (310005 maxon reference found here) which has a stated power of 60W, as the nominal voltage is 12V, I was expecting to have a nominal current of 5A, but it states 4A. Where am I wrong?
|
I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problem). I have a few down to earth questions about linking the mechanical needs to the electrical characteristics, among them:
Question #4:
The motor I chose (maxon brushed DC: 310005 found here) has nominal speed = 7630rpm - nominal torque = 51.6mNm. My needs are max speed = 50.42rpm / max torque = 10620 mNm. This means a reduction factor of 151 for speed and 206 for torque. Should I choose a gear closer to 151 or 206?
|
I have just sized the DC motors I want to use (corresponding to my robot and its intended applications - my figures include a 50% uncertainty factor to account for friction in reducers and other losses). Now I need to actually choose the exact motors I want to buy from the manufacturer (I am targeting maxon motors as I am not an expert and want no problem). I have a few down to earth questions about linking the mechanical needs to the electrical characteristics, among them:
Question #5:
What is the "rated torque" mentioned when choosing a gear? I guess it is related to the maximum torque the gear can support... But now, I know my input torque (torque on the motor side) and my output torque (torque on the system side), does that correspond to any of these two?
|
My yaw angle varies from -180 degree to 180 degree.
(Compass diagram: the heading scale is labeled with the pairs -170/170, -135/135, -90/90, 45/45 and 10/-10, showing where the angle wraps around.)
If my current heading is about 170 degrees and the wind rotates the craft to the left, to about -170 degrees, how can a PID controller make it rotate back to the right, to 170 degrees?
Since, for PID, ERROR = SETPOINT - INPUT,
in my case SETPOINT = 170 and INPUT = -170, so ERROR = 170 - (-170) = 340.
So instead of moving to the right with PWM = 20, it rotates all the way around to the left with PWM = 340 to come back to the desired position of 170 degrees?
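The usual fix is to wrap the error into [-180, 180) before it enters the PID, so the controller always takes the short way around the seam. A minimal sketch:

```python
def heading_error(setpoint_deg, measured_deg):
    """Shortest signed angular difference, wrapped into [-180, 180)."""
    return (setpoint_deg - measured_deg + 180.0) % 360.0 - 180.0

# Setpoint 170, measured -170: the raw error would be 340, but wrapped
# it is -20, i.e. turn 20 degrees the short way across the seam.
```

With this wrapping applied to the error (not to the raw heading), the rest of the PID stays unchanged.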
|
I'm working on a robotics platform and we need an on-board Ubuntu machine to run ROS image recognition.
Does anyone know of a good set of computer hardware that has
NO screen
NO keyboard
Built-in battery (for charging separate from the robot)
Quite a bit of compute power (i5+, 4+ GB RAM)
I thought about using a laptop, but the keyboard and screen are a lot of extra weight/volume I don't want to carry around. Something like an Intel NUC is appealing, but has no battery.
|
I am trying to localize an object in a point cloud using ROS and PCL. For that I capture the scene and the model using an Asus Xtion Pro sensor. I use RGBDSLAMv2 for capturing the model.
Then I use ICP (nonlinear version) to find the transform from the model to each cluster of the cloud. The cluster with the lowest score is chosen as the best matching cluster.
Pseudocode:
Segment the point cloud into different clusters (using Euclidean clustering).
for each cluster i
    Source: 3D model. Target: current cluster.
    Perform ICP (nonlinear version).
    score[i] = icp.getFitnessScore()
    T[i] = icp.getFinalTransformation()
end for
matchingCluster = cluster with minimum score
finalT = T[matchingCluster]
However, I am not able to find the correct transformation.
Here are the screenshots of the results I got:
The red colored object is the transformed model overlaid onto the scene. The yellow object represents the original model in the coordinate system of the scene.
Now, my concern is: why is there no proper transformation? Am I missing something?
Second, I see that the object model and the scene are in different coordinate systems, so the model appears inverted when presented in the scene's coordinate system. Is there a way to transform the model upright before running ICP?
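For the second point, one common trick is to compute a coarse initial guess from the centroids and principal axes before running ICP, since ICP only converges locally. A generic NumPy sketch of the math (not a PCL API call):

```python
import numpy as np

def coarse_align(model, scene):
    """Rough initial transform: match centroids and principal (PCA) axes.

    model, scene: (N, 3) point arrays. Returns R (3x3), t (3,) to use as
    an ICP seed. PCA axis signs are ambiguous, so in practice one would
    try the sign flips and keep the seed with the lowest ICP fitness.
    """
    cm, cs = model.mean(axis=0), scene.mean(axis=0)
    # Eigenvectors of the covariance matrices give the principal axes.
    _, Vm = np.linalg.eigh(np.cov((model - cm).T))
    _, Vs = np.linalg.eigh(np.cov((scene - cs).T))
    R = Vs @ Vm.T
    if np.linalg.det(R) < 0:              # force a proper rotation
        Vs[:, 0] *= -1
        R = Vs @ Vm.T
    t = cs - R @ cm
    return R, t
```

Seeding ICP with (R, t) from such a pre-alignment (or at least with the centroid offset) usually fixes exactly the "model shows up inverted / far away" failure mode.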
Thanks :)
|
I'm really keen to understand and implement such a controller (sliding mode) for a quadrotor.
I've found this interesting document explaining the topic.
If you scroll down to page 381 (don't be scared, the document is just 6-7 pages) you can find the following height control law (equation 19):
$$
U_1 = \frac{m}{\cos{\phi}\cos{\theta}}\left[c_1(\dot z_r - \dot z) + \ddot z_r + \epsilon_1 \operatorname{sgn}(s_1) + k_1 s_1 + g\right]
$$
The explanation of most of the terms should be quite easy, but let's focus on the variable z, the height (or altitude, if absolute) of the quadrotor. The control law demands not only the goal height (through $s_{1}$) but also the vertical speed $\dot z_{r}$ and vertical acceleration $\ddot z_{r}$ (the subscript r means reference).
Now, it is not clear to me whether those variables are setpoints that must be tracked as the quadrotor approaches its predefined height, or whether they are just an abstract mathematical formalism and will be zero most of the time (because I want to reach the target height with $z = z_{r}$ but $\dot z_{r} = \ddot z_{r} = 0$).
I hope my question is clear. Even though I put "sliding control" in the title, I think this may be relevant for other types of controllers too.
Regards
|
I want to build a robotic vacuum. I have a 400 W 24 V vacuum motor that I want to switch on automatically at a set time every night. The batteries will be two 12 V 80 Ah deep-cycle gel batteries connected in series. I want the Arduino to switch the motor on and off. So my first real question is: will the 5 V supplied by the Arduino be able to switch on a motor that big? The second question: is a MOSFET the answer? My apologies, I'm pretty new to all this, but I love it.
Can I control a 400 W motor from 24 V batteries with an Arduino board and a MOSFET? What type of MOSFET would I use?
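Back-of-envelope current figures (my own assumptions: full rated load, steady state, ignoring stall and inrush currents, which are much higher):

```python
motor_power_w, battery_v = 400.0, 24.0

# At full rated load the motor draws roughly P / V of current.
i_nominal = motor_power_w / battery_v          # amps, continuous

# A MOSFET is usually chosen with a comfortable margin over this figure
# (and a low Rds_on so it stays cool); the factor 2 here is my own rule
# of thumb, not a standard.
margin = 2.0
i_mosfet_min = i_nominal * margin              # minimum continuous rating
```

The Arduino's 5 V pin only drives the MOSFET gate (a few mA), never the motor current itself, which is why this arrangement works; a logic-level MOSFET plus a flyback diode across the motor is the typical circuit.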
|
I am trying to implement Monte Carlo localization (a particle filter) with a simple range sensor. The range sensor only sees in the direction the robot is heading and returns the distance to any obstacle in its line of sight. If there is no obstacle, the sensor returns the distance to the boundary wall; i.e., there is no maximum range.
But the problem is that I am not able to locate the robot's position. Now I am wondering whether the sensor is simply not powerful enough. Is it feasible to do localization with such a sensor, or should I change the sensor type?
Please tell me what you think.
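To reason about feasibility, here is a toy 1D version of the MCL measurement update, assuming a known corridor map (this is a sketch for intuition, not real robot code):

```python
import numpy as np

def mcl_update(particles, measured_range, expected_range_fn, sigma=0.1):
    """One measurement update of a particle filter.

    particles: (N,) candidate positions. expected_range_fn maps a position
    to the range the sensor *would* read there, given the known map.
    """
    expected = expected_range_fn(particles)
    # Gaussian measurement likelihood around the actual reading.
    w = np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    w /= w.sum()
    # Resample particles in proportion to their weights.
    idx = np.random.default_rng(0).choice(len(particles), len(particles), p=w)
    return particles[idx]

# In a 10 m corridor with the sensor facing the far wall, the expected
# range at position x is 10 - x, so a 4 m reading concentrates the
# particles near x = 6.
particles = np.linspace(0.0, 10.0, 1000)
posterior = mcl_update(particles, 4.0, lambda x: 10.0 - x)
```

A single forward-facing beam is informative, but in symmetric environments several poses produce identical readings, so the posterior stays multi-modal until the robot moves and accumulates updates; that, rather than the sensor being "weak", is often why a single estimate never emerges.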
|
A motor needs to spin n*360 degrees. On top of the motor there is a distance sensor which scans the room (i.e. a lidar). What options do I have for implementing continuous rotation while having cables in the way?
|
I have a robot vision system which consists of a conveyor with an encoder, two cameras (Gigabit Ethernet and USB) and a simple illuminator.
I need to trigger the cameras and the illuminator when the encoder reaches a position interval.
I'm considering using a real-time operating system for this task:
the encoder, illuminator and cameras are connected to a PC, with the vision-system application running on it.
Which real-time solution can you recommend for this problem?
I'm considering Beckhoff TwinCAT software, which turns a normal operating system into a real-time one.
|
OK, apologies to those who think my questions are not direct enough, as I got warned about this. I am really new to this and I will try to keep this one direct enough for the forum.
For obvious reasons I cannot test this out without damaging something so would prefer to learn from the experience of others.
I have a "Turnigy Trackstar 1/10 17.0T 2400KV Brushless" motor which I will be using for my weapon (spinning disk).
Relevant specs of the motor are:
Kv: 2400
Max voltage: 21 V
Max current: 24 A
Watts: 550
Resistance: 0.0442 Ohm
Max RPM: 50000
I will use this with an ESC with the following specs:
Constant Current: 30A
Burst Current: 40A
Battery: 2-4S Lipoly / 5-12s NiXX
BEC: 5v / 3A
Motor Type: Sensorless Brushless
Size: 54 x 26 11mm
Weight: 32g
Programming Functions:
Battery Type: Lipo /NiXX
Brake: On / Off
Voltage Protection: Low / Mid / High
Protection mode: Reduce power / Cut off power
Timing: Auto / High / Low
Startup: Fast / Normal / Soft
PWM Frequency: 8k / 16k
Helicopter mode: Off / 5sec / 15sec (Start up delay)
If the motor stalls, I know the current draw will increase drastically. So my questions are:
In the case that the motor stalls (my disk gets stuck in the opponent, etc.), what gets damaged: the motor, the ESC, or both? And how long before this happens?
Would I have time to turn the R/C switch off before irrevocable damage occurs (assuming I am watching the action)? Notes: I will be using an on/off switch on the R/C just to turn the motor on and off (so no proportional speed control), and I will be using an 11.1 volt battery even though the motor is rated for a 21 volt maximum.
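For scale, the stall current implied by the winding resistance is roughly V/R (my own back-of-envelope numbers from the specs above, ignoring ESC and wiring resistance, which would reduce it somewhat):

```python
v_batt = 11.1          # volts (3S LIPO, from the plan above)
r_winding = 0.0442     # ohms (from the motor spec)

# At stall there is no back-EMF, so the winding resistance alone limits
# the current: I_stall = V / R.
i_stall = v_batt / r_winding          # ~250 A, far above the ESC's 40 A burst
p_dissipated = v_batt * i_stall       # kilowatts of heat during a stall
```

With numbers like these, the ESC's protection (or its failure) decides the outcome within a second or two; a manual switch is far too slow, so a fuse or an ESC with reliable over-current cutoff is the usual safeguard.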
Thanks.
|
My problem is that when I hold my sensor (MPU9150) so that the +y axis points downward (with the x axis in the horizontal plane), I expect pitch = 90 degrees and roll = 0 degrees, but I actually get pitch = 90 degrees and roll = 160 degrees. However, in the corresponding orientation about the other axis I get roll = 90 degrees and pitch = 0 degrees, which is what I expect. Do you know what causes my problem?
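For reference, here is the usual way roll and pitch are computed from a static accelerometer reading (one common aerospace convention; sign and axis choices vary between libraries). It also shows why roll becomes ill-defined when gravity aligns with the pitch axis, which may explain the strange 160-degree roll reading:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Roll and pitch (degrees) from a static accelerometer reading.

    One common convention; others swap signs/axes. Near pitch = +/-90 deg
    the gravity vector lies along one axis, the other two components go
    to ~0, and roll = atan2(ay, az) is dominated by noise -- which can
    produce arbitrary roll values even though the sensor hasn't rolled.
    """
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

# Flat (gravity on +z): roll = 0, pitch = 0.
# Gravity along x: pitch = +/-90 and roll is computed from atan2(~0, ~0).
```

Euler angles always have this singularity at 90 degrees of tilt (gimbal lock); quaternion-based orientation output avoids it entirely.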
Thanks
|
Which of the following simulators is the best choice for simulating a swarm of AUVs working together to perform a mission? Please give your reasons, and if you know of a better choice I would greatly appreciate you suggesting it. Please consider the need for Hardware-In-The-Loop (HIL) simulation.
Webots
V-REP
AUV Workbench
Gazebo
UWSim
SwarmSimX
In addition, note that the capability to connect to middleware like ROS is really important.
The other option is using a game engine like Blender, but I think that needs a lot of development effort and is time-consuming! Would you recommend this approach? If not, why not, and what would you recommend instead?
|