I'm stuck on equation 4.30 of page 176 in
http://www.cds.caltech.edu/~murray/books/MLS/pdf/mls94-complete.pdf
This equation:
$\frac {\partial M_{ij}} {\partial \theta_k} = \sum_{l=\max(i,j)}^n \Bigl( [A_{ki} \xi_i, \xi_k]^T A_{lk}^T {\cal M}_l' A_{lj} \xi_j + \xi_i^T A_{li}^T {\cal M}_l' A_{lk} [A_{kj} \xi_j, \xi_k] \Bigr)$
seems impossible to evaluate because it appears to require adding a 2x1 matrix to a 1x2 matrix (going by ROWSxCOLUMNS notation). The matrices ${\cal M}$ and $A$ are 6x6 and $\xi$ is 6x1, so how does this addition fit the rules of matrix addition? This must be my mistake, I just don't see how.
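For what it's worth, a term-by-term dimension check (assuming, as the question states, that $A$ and ${\cal M}_l'$ are 6x6, $\xi$ is 6x1, and that the bracket of two twists $[\cdot,\cdot]$ is again a 6x1 twist) suggests each summand is a scalar, so no matrix-addition rule is violated:
$$\underbrace{[A_{ki}\xi_i,\xi_k]^T}_{1\times 6}\;\underbrace{A_{lk}^T}_{6\times 6}\;\underbrace{{\cal M}_l'}_{6\times 6}\;\underbrace{A_{lj}\xi_j}_{6\times 1}\in\mathbb{R}, \qquad \underbrace{\xi_i^T A_{li}^T {\cal M}_l' A_{lk}}_{1\times 6}\;\underbrace{[A_{kj}\xi_j,\xi_k]}_{6\times 1}\in\mathbb{R}$$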
|
I want to know if there is any equation that calculates the maximum force a robot joint can take, i.e. the force that we should not exceed.
For example, in a human leg: if we apply a large external force to the knee, it will break. How can I find the force that will just make the leg move without breaking the knee?
I have a program that generates robot morphologies randomly with different sizes, so I have to know the force not to exceed for each joint. I think this depends on the weight, mass and inertia of each robot part.
I cannot do this by trial and error because I have hundreds of different morphologies.
This video shows the behaviour of the robot when I apply a big force. It is in the Gazebo robotics simulator.
Thanks in advance!
|
People have recommended that I implement an analytic inverse-Jacobian solver, so that I am not restricted to the least-squares solution but instead get a local family of solutions near the one I desire.
I can't seem to implement it correctly. How much does it differ from the least-squares inverse kinematics which I have implemented here?
Eigen::MatrixXd jq(device_.get()->baseJend(state).e().cols(),device_.get()->baseJend(state).e().rows());
jq = device_.get()->baseJend(state).e(); //Extract J(q) directly from robot
//Least square solver (right pseudoinverse): dq = Jᵀ(JJᵀ)⁻¹ du
Eigen::MatrixXd A (6,6);
A = jq.transpose()*(jq*jq.transpose()).inverse();
Eigen::VectorXd du(6);
du(0) = 0.1 - t_tool_base.P().e()[0];
du(1) = 0 - t_tool_base.P().e()[1];
du(2) = 0 - t_tool_base.P().e()[2];
du(3) = 0; // Should these be set to something if I don't want the tool orientation to change?
du(4) = 0;
du(5) = 0;
ROS_ERROR("What you want!");
Eigen::VectorXd q(6);
q = A*du;
cout << q << endl; // Least-squares solution - I want a vector of solutions.
I want a vector of solutions - how do I get that?
the Q is related to this https://robotics.stackexchange.com/questions/9672/how-do-i-construct-i-a-transformation-matrix-given-only-x-y-z-of-tool-position
The robot being used is a UR5 - https://smartech.gatech.edu/bitstream/handle/1853/50782/ur_kin_tech_report_1.pdf
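For reference, a minimal plain-Eigen sketch (not RobWork or any particular robot API, just an illustration assuming a 6xN Jacobian J) of how the family of differential-IK solutions is usually parameterised, dq = J⁺du + (I - J⁺J)z:
#include <Eigen/Dense>

// All joint-velocity solutions of J*dq = du have the form
// dq = Jpinv*du + (I - Jpinv*J)*z for an arbitrary vector z.
// For a non-singular square 6x6 Jacobian (e.g. a UR5 away from singularities)
// the nullspace projector is zero, so the differential solution is unique;
// genuinely different configurations then only come from the analytic
// (closed-form) inverse kinematics, not from this least-squares step.
Eigen::VectorXd dqFamily(const Eigen::MatrixXd& J,
                         const Eigen::VectorXd& du,
                         const Eigen::VectorXd& z)
{
    Eigen::MatrixXd Jpinv = J.transpose() * (J * J.transpose()).inverse();          // right pseudoinverse
    Eigen::MatrixXd N = Eigen::MatrixXd::Identity(J.cols(), J.cols()) - Jpinv * J;  // nullspace projector
    return Jpinv * du + N * z;                                                      // sweep z to explore nearby solutions
}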
|
I am working with SICK lidars and have to mount/unmount them quite often on my robot. The mounting process is very tedious, especially when it comes to making sure that the lidars are horizontal. I thought about using IR goggles (like night-vision ones) and a fog machine (like the ones in nightclubs) in order to see the surface swept by the lidar's rotating laser ray. As a result I would expect to see something like this, but planar.
Before thinking about trying to get my hands on such hardware I wanted to ask:
Do SICK lasers have enough intensity to be observed with such goggles?
Has anybody tried such an approach?
|
I need to find out if there is a way to get at least 60 Hz of linear motion with at least 5 mm of stroke. I intend to make a linear persistence-of-vision device (not a rotating one). It must be as small and light as possible (maybe 50 mm long and 10-15 mm in diameter, and less than 500 grams). The load will be around 50 grams. There are voice coils, but they are very expensive; can I use solenoids instead, or what do you recommend?
Thanks
|
The diagram below shows an old BEAMBot strategy:
Is there code or an example using this method? I would rather avoid OpenCV, ultrasonic sensors, GPS, etc. I just want the Roomba wheels to react as I go straight or turn left or right. Finally, I could add a front wheel on a servo and try having the Roomba turn with me.
Also, has anybody added big all-terrain wheels to a Roomba to replace the originals?
|
I am going to build an autonomous robot that uses a Kalman filter for localization, fusing Lidar, encoder, IMU and GPS data. I will also add obstacle avoidance while moving to the required position.
Is the 8-bit ATmega32 (or an Arduino Mega) suitable for that, or do I have to use an AVR32, ARM, or PIC32, and which is better?
|
The issue concerns simulation of the KUKA KR16L6-2 robot in MATLAB using the Robotics Toolbox by Peter Corke. I wish to simulate the kinematics before passing a command to the real robot for motion.
I have attached the DH parameters. Apart from this I have also tried many other combinations of orientations, but to no useful effect.
The problem is that the robot base rotates counter-clockwise by default for positive increases in Joint 1, while the original robot moves in the opposite direction. Similarly for the wrist roll, i.e. Joint 4, the direction of the simulation is reversed.
In order to confirm that it's not just my mistake, I searched for similar ready-made simulation software. Although none included the same KUKA robot, a similar variant (KUKA_KR_5_sixx_R650) was available. That robot showed one set of motions for the base and wrist in RoKiSim v1.7 for positive increases in joint angle, and the reverse motion in RoboAnalyzer v7.
NOTE: Only the rotations of J1 (base) and J4 (wrist roll) are reversed,
and I want to recreate the results of RoKiSim v1.7 in MATLAB, where the rotations match the real-world robot spec provided by KUKA.
|
I created a package in my catkin workspace and put a publisher.py node inside the src directory of the package, which worked fine. Then I added another node, subscriber.py, and used catkin_make to build. Now when I try to run either of the nodes or find the package, I get the above error. Am I missing any step?
Thanks.
|
I was recently looking into purchasing either a Dynamixel AX-12A or XL-320. The XL seems to use OLLO frames, which only seem to be available in a toy-like set.
I was wondering if there are any other frames available or if I should just get an AX-12?
|
I am part of my college robotics team which is preparing for Robocon 2017.
We used Mecanum wheels in the last Robocon competition, but we faced huge slip and vibration. I have looked at all the kinematic and dynamic formulas and everything else about Mecanum wheels, but still can't reach a conclusion about my problem.
Video of the problem
The robot is around 25 kg and the Mecanum wheel diameter is about 16 cm with 15 rollers (single type). Please help me understand why this happened!
Also, please suggest what I should do now - should I design a new Mecanum wheel or buy one from the market?
If I should design one, what parameters should I consider, how do I design it in CAD software like SolidWorks, and should I then have it 3D printed?
If I should buy directly from the market, where should I buy?
|
I tried to disable sleep by pulsing the BRC pin low for one second every minute as suggested in the OI, but my Create 2 still goes to sleep after 5 minutes.
My firmware is r3_robot/tags/release-3.2.6:4975 CLEAN
The Create 2 is connected to an Arduino, and the BRC is driven by one of the Arduino pins. I verified on a DMM that the voltage is indeed toggling. I am able to both send and receive serial data between the Arduino and Create2.
Pseudo-code:
Initialize roomba. Connect serial at 115200 baud. Toggle BRC: high for 200 ms, low for 200 ms, then high again. Leave it high.
Ask roomba to stream sensor data in passive mode. Wait 1 second after BRC toggle to give some extra time to wake-up. Then send opcode 7 (reset), wait for reset message to complete by looking for the last few characters, then wait another second for good measure. Next, send opcode 128 (start into passive mode), wait 100 ms to let opcode stick, then ask for stream of data (opcode 148 followed by number of packet IDs and the packet IDs themselves).
Main loop: Echo data from Create2 to the serial-USB output of the Arduino so that I can view the Create2 data. The data sent by the Create2 look valid (good checksum) and are sent in the expected time interval of ~15 ms. The main loop also toggles the BRC low for 1 second every minute.
For the full gory details, the complete Arduino sketch is shown below
const uint8_t brcPin = 2; // Must keep this low to keep robot awake
long last_minute = 0;
long minute = 0;
// Initialize roomba
void roomba_init()
{
Serial3.begin(115200); // Default baud rate at power up
while (!Serial3) {} // Wait for serial port to connect
// BRC state change from 1 to 0 = key-wakeup
// keep BRC low to keep roomba awake
pinMode(brcPin, OUTPUT);
Serial.println("BRC HIGH");
digitalWrite(brcPin, HIGH);
delay(200); // 50-500 ms
Serial.println("BRC LOW");
digitalWrite(brcPin, LOW);
delay(200);
Serial.println("BRC HIGH");
digitalWrite(brcPin, HIGH);
last_minute = millis()/60000;
delay(1000); // give some extra time to wake up after BRC toggle.
Serial.println("Opcode 7: reset robot");
Serial3.write(7); // Reset robot
// Discard roomba boot message
// Last part of reset message has "battery-current-zero 257"
char c = 'x';
Serial.println("Gimme a z!");
while (c != 'z') {
if (Serial3.available() > 0) {c = Serial3.read(); Serial.write(c);}
}
Serial.println("Gimme a e!");
while (c != 'e') {
if (Serial3.available() > 0) {c = Serial3.read(); Serial.write(c);}
}
Serial.println("Gimme a r!");
while (c != 'r') {
if (Serial3.available() > 0) {c = Serial3.read(); Serial.write(c);}
}
Serial.println("Gimme a o!");
while (c != 'o') {
if (Serial3.available() > 0) {c = Serial3.read(); Serial.write(c);}
}
// Flush remaining characters: 32 50 53 54 13 10 or " 257\r\n"
Serial.println("Gimme a newline!");
while (c != 10) {
if (Serial3.available() > 0) {c = Serial3.read(); Serial.write(c);}
}
delay(1000); // allow extra time for opcode 7 to stick
Serial.println("\nOpcode 128: start OI in passive mode");
Serial3.write(128); // Start the Open Interface. Passive mode.
delay(100); // Allow some time for opcode 128 to stick (not sure if this is needed)
Serial.println("Opcode 148: stream data packets");
Serial3.write(148); // Stream data packets (every 15 ms)
Serial3.write(16); // Number of packet IDs
Serial3.write(8); // Packet ID 8 = wall 1 byte
Serial3.write(9); // Packet ID 9 = cliff left 1
Serial3.write(10); // Packet ID 10 = cliff front left 1
Serial3.write(11); // Packet ID 11 = cliff front right 1
Serial3.write(12); // Packet ID 12 = cliff right 1
Serial3.write(13); // Packet ID 13 = virtual wall 1
Serial3.write(27); // Packet ID 27 = wall signal 2
Serial3.write(28); // Packet ID 28 = cliff left signal 2
Serial3.write(29); // Packet ID 29 = cliff front left signal 2
Serial3.write(30); // Packet ID 30 = cliff front right signal 2
Serial3.write(31); // Packet ID 31 = cliff right signal 2
Serial3.write(41); // Packet ID 41 = velocity right 2
Serial3.write(42); // Packet ID 42 = velocity left 2
Serial3.write(43); // Packet ID 43 = encoder counts left 2
Serial3.write(44); // Packet ID 44 = encoder counts right 2
Serial3.write(45); // Packet ID 45 = light bumper 1
}
void setup() {
// Open serial communications (through USB interface)
// The serial output of the Create 2 is echoed from Serial3 to Serial
// so that we can observe the Create 2 serial output on a computer.
Serial.begin(115200);
while (!Serial) {} // Wait for serial port to connect
Serial.println(F("Starting roomba test...\n"));
// Roomba serial communications
Serial.println(F("Initializing comm to Roomba\n"));
roomba_init();
}
long low_start_time;
boolean brc_is_low;
void loop() {
// Read from Serial3 and echo results to Serial
if (Serial3.available()) {
uint8_t b = Serial3.read();
uint8_t checksum = 19;
if (b==19) { // First byte of reply stream is 19
Serial.print("\nStart at ");
Serial.println(millis());
Serial.print(b); Serial.print(" ");
while (Serial3.available() < 43) {} // Wait for rest of data (buffer is 64 bytes)
for (int I=0; I<43; I++) {
b = Serial3.read();
Serial.print(b); Serial.print(" ");
checksum += b;
}
Serial.print("Chksum ");
Serial.println(checksum); // 0 is good
} else {
// Probably an ascii message
//Serial.write(b);
Serial.print(b); Serial.print(" ");
}
}
// Pulse BRC low every minute for 1 second
long now = millis();
long minute = now/60000;
if (minute != last_minute) {
Serial.println("\n\nBRC LOW");
Serial.println(millis());
digitalWrite(brcPin, LOW);
last_minute = minute;
low_start_time = now;
brc_is_low = true;
}
// 1 s low pulse width
if ((now > low_start_time + 1000) && brc_is_low) {
Serial.println("\n\nBRC HIGH");
Serial.println(millis());
digitalWrite(brcPin, HIGH);
brc_is_low = false;
}
}
|
I think I have a simple problem, but I can't get my head around how to resolve it...
My setup looks like this:
The grey box on the end effector is supposed to be a camera, which measures dx, dy, dz between the object and the camera. These are used to position the camera such that dz between the object and the camera is equal to 0.5, and dx = dy = 0.
I know that using inverse kinematics I can determine the Q which positions the end effector according to a given rotation and position, but what if I only provide a position?
How do I extract all Q that make dx = dy = 0 and dz = 0.5, while keeping the object in sight at all times?
An example: if an object were placed just above the base (see the second image), it should find all possible configurations, which in this case would consist of the arm rotating around the object while the camera keeps the object in sight...
Update
I just realized a possible solution would be to create a sphere centred on the object with a radius of dz, and then use this sphere to extract all pairs of rotations and positions... But how would one arrive at such a solution?
|
How do I compute all transformation matrices which place a robot end effector on the shell of this sphere, with the end effector pointing toward the object in the centre?
I know at all times how far the object is relative to the end effector, and the radius of the sphere is the desired distance I want between the object and the end effector.
Using inverse kinematics, I want to pan around this object in a sphere-shaped trajectory.
Each transformation matrix should contain a different position on the sphere, and the rotation should be oriented such that the arm looks at the object.
The position should be relatively easy to compute, as I already know the distance to the object and the radius of the sphere.
But the rotation matrix for each position is still a mystery to me.
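For illustration, a plain-Eigen sketch (not tied to any particular robot library) of the usual way such a "look-at" rotation is assembled: point the tool z-axis at the object and complete a right-handed frame from an arbitrary reference direction:
#include <Eigen/Dense>

// Build a rotation whose z-axis points from a point on the sphere towards the object.
// The reference direction is arbitrary; if the z-axis becomes (anti)parallel to it,
// the cross product degenerates and another reference must be chosen.
Eigen::Matrix3d lookAtObject(const Eigen::Vector3d& pointOnSphere,
                             const Eigen::Vector3d& object)
{
    Eigen::Vector3d z = (object - pointOnSphere).normalized(); // tool z-axis towards the object
    Eigen::Vector3d ref(0.0, 0.0, -1.0);                       // arbitrary "down" reference direction
    Eigen::Vector3d x = ref.cross(z).normalized();             // x perpendicular to ref and z
    Eigen::Vector3d y = z.cross(x);                            // completes the right-handed frame
    Eigen::Matrix3d R;
    R.col(0) = x;
    R.col(1) = y;
    R.col(2) = z;
    return R;
}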
|
I have an UAV modeled in three dimensions with let's say position coordinates $p_{uav} = (x_1,y_1,z_1)$ that is moving in a direction $d = (d_x,d_y,d_z)$ and a moving obstacle modeled as a sphere with known centre coordinates $p_{sph}=(x_2,y_2,z_2)$ and radius $ r_{sph}$.
If I have a plane $p$ in the direction of movement of the UAV that intersects the sphere, I want to be able to calculate the angles with respect to the vehicle's movement formed by the tangents to the sphere in the plane $ p$. In the figure, I would like to know how to calculate the angles $α_1$ and $α_2$.
If it helps, what I am looking is an extension in three dimensions for this:
This is a vehicle in two dimensions; it is obviously an easier problem which requires only the centre of the circle. However, I am not really sure how to make it work in 3D, since the plane can intersect the sphere at any two points, not necessarily through the centre.
Thanks in advance for your help.
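As a sanity check for the special case where the plane $p$ passes through the sphere's centre (and contains the UAV), the intersection is a great circle of radius $r_{sph}$, the two tangent directions lie symmetrically about the line from the UAV to the centre, and
$$\alpha_{1,2} = \angle\bigl(d,\; p_{sph}-p_{uav}\bigr) \pm \arcsin\!\left(\frac{r_{sph}}{\|p_{sph}-p_{uav}\|}\right).$$
For a plane that does not contain the centre, the same formula should apply with the in-plane circle of intersection (its centre is the projection of $p_{sph}$ onto $p$, and its radius is $\sqrt{r_{sph}^2-h^2}$ with $h$ the distance from $p_{sph}$ to the plane) substituted for the sphere.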
|
I am having some issue with implementing a least square solution of the inverse kinematics problem.
The q configurations I get are rather large or make no sense, so I was hoping someone here could help me find the error in my program.
rw::math::Q pathPlanning::invKin(double dx, double dy , double dz)
{
rw::kinematics::State state = this->state;
rw::math::Transform3D<> t_tool_base = this->device.get()->baseTend(state);
cout << t_tool_base.R().e() << endl;
cout << endl;
cout << t_tool_base.P().e() << endl;
cout << endl;
Eigen::MatrixXd jq(this->device.get()->baseJend(state).e().cols(), this->device.get()->baseJend(state).e().rows());
jq = this->device.get()->baseJend(state).e();
//Least square solver - dq = [j(q)]T (j(q)[j(q)]T)⁻1 du <=> dq = A*du
Eigen::MatrixXd A (6,6);
//A = jq.transpose()*(jq*jq.transpose()).inverse();
A = (jq*jq.transpose()).inverse()*jq.transpose();
std::vector<rw::math::Transform3D<> > out = sphere(dx,dy,dz);
std::ofstream outfile;
outfile.open("q_conf.txt", std::ios_base::app);
for(unsigned int i = 0; i <= out.size() ; ++i )
{
rw::math::Vector3D<> dif_p = out[i].P()-t_tool_base.P();
Eigen::Matrix3d dif = out[i].R().e()- t_tool_base.R().e();
rw::math::Rotation3D<> dif_r(dif);
rw::math::RPY<> dif_rot(dif_r);
Eigen::VectorXd du(6);
du(0) = dif_p[0];
du(1) = dif_p[1];
du(2) = dif_p[2];
du(3) = dif_rot[0];
du(4) = dif_rot[1];
du(5) = dif_rot[2];
Eigen::VectorXd q(6);
q = A*du;
rw::math::Q q_current;
q_current = this->device->getQ(this->state);
rw::math::Q dq(q);
rw::math::Q q_new = q_current+ dq;
//cout << jq << endl;
//cout << endl;
//std::string text = "setQ{" + to_string(q_new[0]) + ", " + to_string(q_new[1]) + ", " + to_string(q_new[2]) + ", " + to_string(q_new[3]) + ", " + to_string(q_new[4]) + ", " + to_string(q_new[5]) + "}";
//cout << text << endl;
//outfile << text << endl;
}
rw::math::Q bla(6); //Just used the text file for debugging purposes, Which why I just return a random Q config.
return bla;
}
rw::math::Transform3D<> pathPlanning::transform(double obj_x, double obj_y, double obj_z, double sphere_x, double sphere_y ,double sphere_z)
{
// Z-axis should be oriented towards the object.
// Rot consist of 3 direction vector [x,y,z] which describes how the axis should be oriented in the world space.
// Looking at the simulation the z-axis is the camera out. X, and Y describes the orientation of the camera.
// The vector are only for direction purposes, so they have to be normalized....
// TODO: case [0 0 -1]... Why is it happening at what can be done to undo it?
rw::math::Vector3D<> dir_z((obj_x - sphere_x), (obj_y - sphere_y), (obj_z - sphere_z));
dir_z = normalize(dir_z);
rw::math::Vector3D<> downPlane(0.0,0.0,-1.0);
rw::math::Vector3D<> dir_x = cross(downPlane,dir_z);
dir_x = normalize(dir_x);
rw::math::Vector3D<> dir_y = cross(dir_z,dir_x);
dir_y = normalize(dir_y);
rw::math::Rotation3D<> rot_out (dir_x,dir_y,dir_z);
rw::math::Vector3D<> pos_out(sphere_x,sphere_y,sphere_z);
rw::math::Transform3D<> out(pos_out,rot_out);
return out;
}
std::vector<rw::math::Transform3D<>> pathPlanning::sphere(double dx, double dy, double dz)
{
double r = 0.50; // Radius of the sphere - set to 0.50 cm (TODO: has to be checked if that also is accurate)
cout << "Create a sphere" << endl;
double current_x = this->device->baseTend(this->state).P()[0];
double current_y = this->device->baseTend(this->state).P()[1];
double current_z = this->device->baseTend(this->state).P()[2];
rw::math::Vector3D<> center(current_x + dx, current_y + dy , current_z + dz);
// Formula for sphere (x-x0)²+(y-y0)²+(z-z0)²=r²
// x: x = x_0 + rcos(theta)sin(phi)
// y: y = y_0 + rsin(theta)sin(phi)
// z: z = z_0 + rcos(phi)
// Angle range: 0 <= theta <= 2M_PI ; 0 <= phi <= M_PI
double obj_x = current_x + dx;
double obj_y = current_y + dy;
double obj_z = current_z + dz;
ofstream positions;
ofstream rotations_z;
ofstream rotations_y;
ofstream rotations_x;
positions.open ("sphere_positions.csv");
rotations_z.open("z_dir.csv");
rotations_y.open("y_dir.csv");
rotations_x.open("x_dir.csv");
std::vector<rw::math::Transform3D<>> out;
int count = 32;
for(double theta = 0; theta <= 2*M_PI ; theta+=0.1 )
{
for(double phi = 0; phi <= M_PI ; phi+=0.1)
{
double sphere_x = obj_x + r*cos(theta)*sin(phi);
double sphere_y = obj_y + r*sin(theta)*sin(phi);
double sphere_z = obj_z + r*cos(phi);
string text = to_string(sphere_x) + " , " + to_string(sphere_y)+ " , " + to_string(sphere_z);
positions << text << endl;
rw::math::Transform3D<> transformation_matrix = transform(obj_x,obj_y,obj_z,sphere_x,sphere_y,sphere_z);
string text2 = to_string(transformation_matrix.R().e()(0,2)) + " , " + to_string(transformation_matrix.R().e()(1,2)) + " , " + to_string(transformation_matrix.R().e()(2,2));
string text1 = to_string(transformation_matrix.R().e()(0,1)) + " , " + to_string(transformation_matrix.R().e()(1,1)) + " , " + to_string(transformation_matrix.R().e()(2,1));
string text0 = to_string(transformation_matrix.R().e()(0,0)) + " , " + to_string(transformation_matrix.R().e()(1,0)) + " , " + to_string(transformation_matrix.R().e()(2,0));
rotations_z << text2 << endl;
rotations_y << text1 << endl;
rotations_x << text0 << endl;
if(count == 32) //TODO: Why...... is this occuring?
{
//cout << "Theta: " << theta << " Phi: " << phi << endl;
//cout << sphere_x << " , " << sphere_y <<" , "<< sphere_z << endl;
count = 0;
}
else
{
count++;
}
out.push_back(transformation_matrix);
}
}
positions.close();
rotations_z.close();
rotations_y.close();
rotations_x.close();
cout << endl;
cout <<"Object at: " << obj_x << "," << obj_y << "," << obj_z << endl;
cout << "done " << endl;
return out;
}
What am I trying to do? I am trying to orbit a robot end effector around an object in the centre. The trajectory of the end effector is a sphere on which the end effector should always point at the object. The sphere function should compute all transformation matrices which move the robot arm to the different positions on the sphere with a given rotation, and the inverse kinematics should compute all the different Q-states, given an x, y, z which is the actual displacement to the object itself.
I am not quite sure where my error could be, but I think it might either be in the transform function where I generate my desired transformation matrix, or in invKin where I create du; I think I might have made a mistake in creating du(3), du(4), du(5).
The libraries I've been using are Eigen and RobWork (basically everything prefixed rw::), if anyone wants to check the syntax.
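For reference only, one common convention for the task-space error (it may differ from what RobWork expects, so treat it as an assumption): the translational part of du is the plain position difference, while the rotational part is taken as the axis-angle (matrix logarithm) of the relative rotation rather than the RPY of an element-wise difference of rotation matrices:
$$\Delta u = \begin{bmatrix} p_{des}-p_{cur} \\ \theta\,\hat{k}\end{bmatrix},\qquad \theta\,\hat{k} = \log\!\bigl(R_{des}\,R_{cur}^{T}\bigr)^{\vee}$$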
Update
Based on @ghanimmukhtar's suggestion I began checking the Jacobian for singularities. The determinants seem surprisingly low in general; I computed them for a list of random Q configurations, which resulted in this...
Determinant: -0.0577779
Determinant: -0.0582286
Determinant: 0.0051402
Determinant: -0.0498886
Determinant: 0.0209685
Determinant: 0.00372222
Determinant: 0.047645
Determinant: 0.0442362
Determinant: -0.0799746
Determinant: 0.00194714
Determinant: 0.0228195
Determinant: 0.096449
Determinant: -0.0339612
Determinant: -0.00365521
Determinant: -0.030022
Determinant: 0.021347
Determinant: 0.0413364
Determinant: 0.0041136
Determinant: -0.0151192
Determinant: 0.0682926
Determinant: -0.0657176
Determinant: 0.0915473
Determinant: -0.00516008
Determinant: -0.0394664
Determinant: -0.00469664
Determinant: 0.0494431
Determinant: -0.00156804
Determinant: -0.0402393
Determinant: -0.0141511
Determinant: 0.0203508
Determinant: -0.0368337
Determinant: -0.0313431
Determinant: -0.0566811
Determinant: -0.00766113
Determinant: -0.051767
Determinant: -0.00815555
Determinant: 0.0564639
Determinant: 0.0764514
Determinant: -0.0501299
Determinant: -0.00056537
Determinant: -0.0308103
Determinant: -0.0091592
Determinant: 0.0602148
Determinant: -0.0051255
Determinant: 0.0426342
Determinant: -0.0850566
Determinant: -0.0353419
Determinant: 0.0448761
Determinant: -0.0103023
Determinant: -0.0123843
Determinant: -0.00160566
Determinant: 0.00558663
Determinant: 0.0173488
Determinant: 0.0170783
Determinant: 0.0588363
Determinant: -0.000788464
Determinant: 0.052941
Determinant: 0.064341
Determinant: 0.00084967
Determinant: 0.00716674
Determinant: -0.0978426
Determinant: -0.0585773
Determinant: 0.038732
Determinant: -0.00489957
Determinant: -0.0460029
Determinant: 0.00269656
Determinant: 0.000600619
Determinant: -0.0408527
Determinant: -0.00115296
Determinant: 0.013114
Determinant: 0.0366423
Determinant: 0.0495209
Determinant: -0.042201
Determinant: -0.036663
Determinant: -0.103452
Determinant: -0.0119054
Determinant: 0.0692284
Determinant: -0.00717832
Determinant: 0.00729104
Determinant: 0.0126415
Determinant: -0.00515246
Determinant: -0.0556505
Determinant: 0.000670701
Determinant: -0.0545629
Determinant: 0.00251946
Determinant: 0.0405189
Determinant: 0.010928
Determinant: -0.00101032
Determinant: 0.0308612
Determinant: 0.0536183
Determinant: -0.0439223
Determinant: -0.0113453
Determinant: -0.0193872
Determinant: 0.0660165
Determinant: -0.00184695
Determinant: -0.106904
Determinant: 0.01246
Determinant: -0.00883772
Determinant: 0.0601036
Determinant: 0.0468602
Determinant: 0.0513812
Determinant: -0.000663089
Determinant: -0.00392395
Determinant: 0.0710837
Determinant: 0.0629583
Determinant: -0.0464579
Determinant: 0.0257618
Determinant: -0.0193227
Determinant: 0.00388693
Determinant: -0.02003
Determinant: 0.0191158
Determinant: -0.00159198
Determinant: -0.0702308
Determinant: -0.0242876
Determinant: -0.00934638
Determinant: -0.00221986
Determinant: -0.0268925
Determinant: 0.0596055
Determinant: -0.00925273
Determinant: -0.0167357
Determinant: 0.0596476
Determinant: -0.00515798
Determinant: -0.00324081
Determinant: -0.00321565
Determinant: 0.0669645
Determinant: -0.0342913
Determinant: -0.000342155
Determinant: -0.0104422
Determinant: -0.0410489
Determinant: -0.0246036
Determinant: 0.0208562
Determinant: -0.0692963
Determinant: 0.000839091
Determinant: -0.049308
Determinant: -0.0349338
Determinant: 0.0016057
Determinant: -0.00214381
Determinant: -0.0332965
Determinant: 0.0168007
Determinant: -0.0748581
Determinant: -0.00864737
Determinant: -0.0638044
Determinant: -0.00103911
Determinant: -0.00690918
Determinant: 0.000285789
Determinant: 0.0215414
Determinant: 0.0560827
Determinant: -0.0063201
Determinant: -0.00677609
Determinant: -0.00686829
Determinant: 0.0591599
Determinant: 0.0112705
Determinant: 0.0874784
Determinant: -0.0146124
Determinant: -0.0133718
Determinant: -0.0203801
Determinant: -0.0150386
Determinant: -0.102603
Determinant: -0.077111
Determinant: 0.021146
Determinant: 0.089761
Determinant: -0.0532867
Determinant: -0.0620632
Determinant: -0.0165414
Determinant: -0.0461426
Determinant: 0.00144256
Determinant: 0.00844777
Determinant: 0.0893306
Determinant: -0.0814478
Determinant: -0.0890507
Determinant: -0.0472091
Determinant: 0.0186799
Determinant: -0.00224087
Determinant: -0.0242662
Determinant: -0.00195303
Determinant: 0.014432
Determinant: 0.00185717
Determinant: -0.0354357
Determinant: -0.0427957
Determinant: -0.0380409
Determinant: 0.0627548
Determinant: 0.0397546
Determinant: 0.0570439
Determinant: 0.106265
Determinant: 0.0382001
Determinant: -0.0240826
Determinant: -0.0866264
Determinant: 0.024184
Determinant: 0.0841286
Determinant: -0.0303611
Determinant: -0.0337029
Determinant: -0.0202875
Determinant: 0.0643731
Determinant: -0.0475265
Determinant: -0.00928736
Determinant: -0.00373402
Determinant: 0.0636828
Determinant: 0.0122532
Determinant: 0.0398141
Determinant: -0.0563998
Determinant: -0.0778303
Determinant: 0.0164747
Determinant: 0.0314815
Determinant: 0.0744507
Determinant: -0.0897675
Determinant: 0.0260324
Determinant: -0.0734512
Determinant: 0.000234548
Determinant: -0.0238522
Determinant: -0.0849523
Determinant: 0.0204877
Determinant: -0.0715147
Determinant: 0.0703858
Determinant: -0.0142186
Determinant: -0.101503
Determinant: 0.03966
Determinant: 4.69111e-05
Determinant: 0.0394428
Determinant: 0.0409131
Determinant: 8.90995e-05
Determinant: -0.00841189
Determinant: -0.0671323
Determinant: 0.00805167
Determinant: -0.00292435
Determinant: 0.0507716
Determinant: 0.0493995
Determinant: 0.00629414
Determinant: -0.0428982
Determinant: -0.0446924
Determinant: 0.0776236
Determinant: 0.00440478
Determinant: -0.0463321
Determinant: -0.00247224
Determinant: -0.0199861
Determinant: 0.0267022
Determinant: 0.0184179
Determinant: 0.0104588
Determinant: 0.116535
Determinant: -0.0857382
Determinant: -0.0477216
Determinant: 0.0286968
Determinant: 0.0387932
Determinant: 0.042856
Determinant: -0.0964
Determinant: 0.0320456
Determinant: -0.0676327
Determinant: 0.0156632
Determinant: 0.0548582
Determinant: 0.0394791
Determinant: 0.0863353
Determinant: -0.0568753
Determinant: -0.00953039
Determinant: -0.0534666
Determinant: 0.0506779
Determinant: 0.00521034
Determinant: 0.0353338
Determinant: 0.0845463
Determinant: -0.00847695
Determinant: 0.015726
Determinant: -0.0648035
Determinant: 0.0170917
Determinant: 0.0045193
Determinant: -0.0195397
Determinant: 0.00630076
Determinant: -0.0137401
Determinant: 0.0209229
Determinant: 0.00382077
Determinant: -0.0588661
Determinant: -0.0923883
Determinant: -0.00726003
Determinant: -0.0411533
Determinant: 0.00544489
Determinant: 0.0101791
Determinant: 0.0903306
Determinant: -0.0590416
Determinant: -0.0377112
Determinant: -0.0150455
Determinant: 0.0793066
Determinant: 0.0425759
Determinant: -0.040728
Determinant: -0.0376792
Determinant: -0.0387703
Determinant: -0.0232208
Determinant: 0.0506747
Determinant: -0.0284409
Determinant: 0.000536999
Determinant: -0.0289103
Determinant: -0.00586449
Determinant: -0.0805586
Determinant: 0.0133906
Determinant: -0.00311773
Determinant: 0.0184798
Determinant: -0.00981978
Determinant: -0.0491601
Determinant: 0.0452526
Determinant: 0.00411708
Determinant: -0.0515142
Determinant: 0.0121114
Determinant: 0.00636972
Determinant: -0.0126048
Determinant: -0.0412662
Determinant: 0.00195264
Determinant: -0.0726478
Determinant: 0.0692254
Determinant: -0.0256477
Determinant: 0.0702529
Determinant: -0.0052493
Determinant: 0.0625172
Determinant: 0.00282606
Determinant: 0.0229033
Determinant: 0.0558893
Determinant: 0.0766217
Determinant: -0.00388679
Determinant: -0.0193821
Determinant: -0.00718189
Determinant: -0.0864566
Determinant: 0.0809026
Determinant: -0.0398232
Determinant: -0.00224801
Determinant: 0.0333072
Determinant: -0.0212002
Determinant: 0.00371396
Determinant: 0.0162035
Determinant: -0.0811845
Determinant: 0.0148128
Determinant: 0.0372953
Determinant: 0.00351286
Determinant: -0.00103575
Determinant: 0.0384813
Determinant: 0.00752738
Determinant: -0.0248252
Determinant: -0.106768
Determinant: -0.0192333
Determinant: -0.026543
Determinant: -0.0222608
Determinant: -0.0487862
Determinant: 0.00376402
Determinant: -0.0329469
Determinant: 0.00266775
Determinant: 0.0762491
Determinant: 0.0159609
Determinant: -0.0190175
Determinant: -0.0338969
Determinant: -0.0631867
Determinant: -0.0238901
Determinant: 0.107709
Determinant: -7.74935e-05
Determinant: -0.0468996
Determinant: 0.0462787
Determinant: 0.0387825
Determinant: 0.0753388
Determinant: -0.000279933
Determinant: 0.00638663
Determinant: -0.00458034
Determinant: 0.0185849
Determinant: -0.00543503
Determinant: -0.0520309
Determinant: -0.0234638
Determinant: 0.0593986
Determinant: -0.00036774
Determinant: 0.00960819
Determinant: -0.00685314
Determinant: -0.000176925
Determinant: 0.0207583
Determinant: -0.0337003
Determinant: -0.0534818
Determinant: 0.0142158
Determinant: -0.0728077
Determinant: 0.0246877
Determinant: -0.0660952
Determinant: -0.0466
Determinant: 0.0915457
Determinant: -0.00340539
Determinant: 0.00815076
Determinant: -0.0751806
Determinant: -0.00617677
Determinant: 0.0019761
Determinant: -0.0016673
Determinant: 0.0310364
Determinant: 0.0483121
Determinant: -0.00664964
Determinant: 0.0659273
Determinant: -0.019015
Determinant: 0.0087627
Determinant: 0.0267279
Determinant: 0.0253497
Determinant: 0.00246292
Determinant: -0.0684746
Determinant: -0.0234524
Determinant: -0.0197933
Determinant: 0.0120796
Determinant: -0.0192703
Determinant: 0.0853956
Determinant: 0.0388196
Determinant: -0.0599305
Determinant: -0.0626148
Determinant: 0.0258541
Determinant: -0.0341273
Determinant: 0.0972889
Determinant: -0.0306585
Determinant: 0.0188553
Determinant: 0.00247702
Determinant: -0.00368989
Determinant: -0.0951982
Determinant: 0.0113578
Determinant: 0.000762509
Determinant: -0.0225219
Determinant: 0.0414059
Determinant: -0.0244409
Determinant: -0.0425728
Determinant: 0.04275
Determinant: -0.0413427
Determinant: -0.00556264
Determinant: -0.0894398
Determinant: -0.0193197
Determinant: -0.00788038
Determinant: -0.00455421
Determinant: -0.0788177
Determinant: 0.0415381
Determinant: -0.0346766
Determinant: -0.0748027
Determinant: 0.0087688
Determinant: -0.0968796
Determinant: 0.0683526
Determinant: -0.00996678
Determinant: 0.00955922
Determinant: -0.0914706
Determinant: 0.0728304
Determinant: 0.0541784
Determinant: 0.0457072
Determinant: -0.0299529
Determinant: -0.0096473
Determinant: -0.0142643
Determinant: -0.0684794
Determinant: 0.00281004
Determinant: -0.03252
Determinant: -0.0144637
Determinant: 0.0294154
Determinant: 0.00574353
Determinant: -0.019569
Determinant: 0.00492446
Determinant: -0.0526394
Determinant: -0.000870143
Determinant: -0.0180984
Determinant: -0.0144104
Determinant: 0.0456077
Determinant: -0.0113433
Determinant: 0.00377549
Determinant: -0.0775854
Determinant: -0.0336789
Determinant: -0.0744995
Determinant: -0.0427397
Determinant: 0.0300061
Determinant: -0.0326518
Determinant: -0.0333735
Determinant: -0.0284057
Determinant: -0.00999835
Determinant: -0.0380404
Determinant: 0.00648521
Determinant: 0.0449298
Determinant: 0.0120318
Determinant: -0.0230653
Determinant: -0.00934067
Determinant: -0.0175326
Determinant: -0.0799447
Determinant: 0.0679027
Determinant: -0.00670324
Determinant: -0.0841748
Determinant: 0.0236213
Determinant: 0.0386624
Determinant: -0.0239495
Determinant: 0.076976
Determinant: -0.00997484
Determinant: 0.025157
Determinant: -0.0654046
Determinant: 0.0090564
Determinant: 0.00129045
Determinant: -0.105119
Determinant: 0.0976925
Determinant: -0.105149
Determinant: -0.0465851
Determinant: 0.00237453
Determinant: -0.0456927
Determinant: 0.0328236
Determinant: -0.0914691
Determinant: -0.0157904
Determinant: -0.00170804
Determinant: -0.014797
Determinant: 0.00464912
Determinant: -0.035118
Determinant: -0.0242306
Determinant: 0.0081405
Determinant: 0.0733502
Determinant: -0.0860252
Determinant: -0.0511219
Determinant: -0.0925647
Determinant: 0.0495087
Determinant: -0.0515914
Determinant: -0.044318
Determinant: 0.000900043
Determinant: 0.0632521
Determinant: 0.00957955
Determinant: 0.00598059
Determinant: 0.0179513
Determinant: 0.0952263
dx, dy, dz is the displacement between the TCP and an object I want to keep in sight. The sphere is like a safety zone, but is mainly used to compute the orientation of the tool.
|
I am working on a high speed autonomous robot (about 6-7 m/s), which does obstacle detection as well as senses traffic lights (I have used Raspberry Pi 3 and Arduino Uno).
For the steering mechanism, I wanted to implement Ackermann steering. I've read about the principle and have understood its basics. To actually build it, I am currently using switchboards sold here in India; they are surprisingly strong, lightweight, waterproof (they are switchboards) and cheap. I already have the big axle and the small axle cut out, along with the two L-shaped pieces that join the two axles together. I'm now confused about how to connect the wheels to the axle and how to make them rotate along with it. The site won't let me upload any pics right now, I'll try again ASAP.
I have the switchboard, an electric drill and will to do anything to make this happen ( ;P ). I don't have access to a 3D Printer.
Any help would be greatly appreciated...
P.S- And if you have any suggestions of your own, which might be better for my robot, feel free to share them, I'm just looking for a good steering method for my robot.
|
I'm using Stereolabs ZED camera for my computer vision project. I did a small research about several sensors on the market and ultimately we decided to go with the ZED Camera.
However, I'm finding that the precision of the camera isn't that great, and the point cloud takes too much storage space. Has anyone run into the same problems? And if so, how did you manage them?
|
In modeling the dynamics of a robot in which a servo motor is mounted inside the link, there is a need to find the inertia tensor of the motor itself, right?
If so, how can I obtain the inertia tensor of the motor, since I couldn't find a SolidWorks model of it that includes the internal components, i.e. the gears and other parts (with their materials specified)?
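If no detailed CAD model is available, a common rough approximation (an assumption, not manufacturer data) is to treat the motor as a homogeneous solid cylinder of mass $m$, radius $r$ and length $h$, with the $z$ axis along the motor shaft; about its centre of mass
$$I_{zz}=\tfrac{1}{2}m r^{2},\qquad I_{xx}=I_{yy}=\tfrac{1}{12}m\,\bigl(3r^{2}+h^{2}\bigr).$$
The mass is usually on the datasheet and the outer dimensions can be measured, which often gives a usable first estimate for dynamic modelling.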
|
I got a new servo a few days back (RC Servo, Futaba FP-S148). I first tested it out with the Sweep sketch on Arduino, powering it with the Arduino 5v and GND pins only. It was working, just fine.
Today I was trying to use it in my robot and I tried powering it with 2 LiPo batteries (Samsung ICR16850 2200 mAh, from an old laptop battery) connected in series, giving 8.32 V. As soon as I connected my servo, it started rotating randomly, even though I had not connected it to my Arduino yet. I quickly disconnected it.
Next, I used an L7805 to get a regulated 5.13 V supply from the same batteries. When I connected the batteries to the servo and the servo to the Arduino, and uploaded the sketch, the servo started behaving rather strangely: it first did a complete turn and then stopped, and only a humming sound came from it. The strange thing is that whenever I touched one of my multimeter leads to the power cables, the servo immediately turned in the opposite direction, but only as long as a single lead was in contact with either the positive or the negative wire.
Otherwise, the servo just gives a humming sound.
Have I fried my servo? Or is it some other issue?
UPDATE 1
I stripped down the servo and checked the motor. It is working fine, seems like this is a gear problem.
|
I'm looking for a testbed (a simulator or a web-based interface that lets me control a robot) for testing different routing and navigation algorithms. Is there such a system on the web?
|
How can I develop a robot-based system that uses sensors to continuously monitor and check products moving on a conveyor belt for defects, and kicks the defective products out of the queue?
|
I working on dynamic modeling and simulation of a mechanical system (overhead crane), after I obtained the equation of motion, in the form: $$ M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)=Q $$
All the matrices are known: the inertia matrix $ M(q)$, the Coriolis-centrifugal matrix $ C(q,\dot{q})$, and the gravity vector $ G(q)$, as functions of the generalized coordinates $q$ and their derivatives $\dot{q}$.
I want to solve for $q$ using a MATLAB ODE solver (in an m-file). I got the response for some initial conditions and zero input, but I want to find the response for the aforementioned control signal (a bang-bang signal of amplitude 1 N and 1 s width). I'm trying to regenerate some results from the literature, and what the authors of that work say regarding the input signal is the following: "A bang-bang signal of amplitude 1 N and 1 s width is used as an input force, applied at the cart of the gantry crane. A bang-bang force has a positive (acceleration) and negative (deceleration) period allowing the cart to, initially, accelerate and then decelerate and eventually stop at a target location." I didn't grasp what they mean by a bang-bang signal; I know that in MATLAB we can have step inputs, impulses, etc., but I'm not familiar with a bang-bang signal. According to this site and this one, bang-bang rather refers to a type of controller.
Could anyone suggest how I can figure this out and implement this input signal, preferably in an m-file?
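For reference, one reading of "a bang-bang signal of amplitude 1 N and 1 s width" that is consistent with the square() call in the code below is a 1-s pulse whose first half accelerates and whose second half decelerates the cart:
$$f_y(t)=\begin{cases}+1\ \mathrm{N}, & 1\ \mathrm{s}\le t<1.5\ \mathrm{s}\\ -1\ \mathrm{N}, & 1.5\ \mathrm{s}\le t<2\ \mathrm{s}\\ 0, & \text{otherwise}\end{cases}$$
An equally plausible reading is 1 s of +1 N followed by 1 s of -1 N; the quoted description is ambiguous on this point.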
The code I'm using is given below, in two parts:
function xdot = AlFagera(t,x,spec)
% xdot = zeros(8,1);
xdot = zeros(12,1); % to include the input torque
% % Crane Specifications
mp = spec(1);
mc = spec(2);
mr = spec(3);
L = spec(4);
J = spec(5);
g = 9.80; % acceleration of gravity (m/s^2)
% % matix equations
M11 = mr+mc+mp; M12 = 0; M13 = mp*L*cos(x(3))*sin(x(4)); M14 = mp*L*sin(x(3))*cos(x(4));
M21 = 0; M22 = mp+mc; M23 = mp*L*cos(x(3))*cos(x(4)); M24 = -mp*L*sin(x(3))*sin(x(4));
M31 = M13; M32 = M23; M33 = mp*L^2+J; M34 = 0;
M41 = M14; M42 = M24; M43 = 0; M44 = mp*L^2*(sin(x(3)))^2+J;
M = [M11 M12 M13 M14; M21 M22 M23 M24; M31 M32 M33 M34; M41 M42 M43 M44];
C11 = 0; C12 = 0; C13 = -mp*L*sin(x(3))*sin(x(4))*x(7)+mp*L*cos(x(3))*cos(x(4))*x(8);
C14 = mp*L*cos(x(3))*cos(x(4))*x(7)-mp*L*sin(x(3))*sin(x(4))*x(8);
C21 = 0; C22 = 0; C23 = -mp*L*sin(x(3))*cos(x(4))*x(7)-mp*L*cos(x(3))*sin(x(4))*x(8);
C24 = -mp*L*cos(x(3))*sin(x(4))*x(7)-mp*L*sin(x(3))*cos(x(4))*x(8);
C31 = 0; C32 = 0; C33 = 0; C34 = -mp*L^2*sin(x(3))*cos(x(3))*x(8);
C41 = 0; C42 = 0; C43 = -C34; C44 = mp*L^2*sin(x(3))*cos(x(4))*x(7);
C = [C11 C12 C13 C14; C21 C22 C23 C24; C31 C32 C33 C34; C41 C42 C43 C44];
Cf = C*[x(5); x(6); x(7); x(8)];
G = [0; 0; mp*g*L*sin(x(3)); 0];
fx = 0;
if t >= 1 && t <= 2
    fy = 1.*square(t*pi*2); % square wave of period 1 s: +1 N for the first half of the window, -1 N for the second
else
    fy = 0;
end
F =[fx; fy; 0; 0]; % input torque vector,
xdot(1:4,1)= x(5:8);
xdot(5:8,1)= M\(F-G-Cf);
xdot(9:12,1) = F;
And:
clear all; close all; clc;
t0 = 0;tf = 20;
x0 = [0.12 0.5 0 0, 0 0 0 0, 0 0 0 0]; % initial conditions
% % specifications
Mp = [0.1 0.5 1]; % variable mass for the payload
figure
plotStyle = {'b-','k','r'};
for i = 1:3
mp = Mp(i);
mc = 1.06; mr = 6.4; % each mass in kg
L = 0.7; J = 0.005; % m, kg-m^2 respe.
spec = [mp mc mr L J];
% % Call the function
[t,x] = ode45(@(t,x)AlFagera(t,x,spec),[t0 :0.001: tf],x0);
legendInfo{i} = ['mp=',num2str(Mp(i)),'kg'];
fx = diff(x(:,9))./diff(t);
fy = diff(x(:,10))./diff(t);
tt=0:(t(end)/(length(fx)-1)):t(end); % this time vector
% to plot the cart positions in x and y direcitons
subplot(1,2,1)
plot(t,x(:,1),plotStyle{i})
axis([0 20 0 0.18]);
grid
xlabel('time (s)');
ylabel('cart position in x direction (m)');
hold on
legend(legendInfo,'Location','northeast')
subplot(1,2,2)
plot(t,x(:,2),plotStyle{i})
axis([0 20 0 1.1]);
grid
xlabel('time (s)');
ylabel('cart position in y direction (m)');
hold on
legend(legendInfo,'Location','northeast')
end
% to plot the input torque (bang-bang signal), just one sample
figure
plot(tt,fy)
grid
set(gca,'XTick',[0:20])
xlabel('time (s)');
ylabel('input signal, f_y (N)');
Furthermore, the results I'm getting and what I am supposed to get are shown below.
Major difficulties: the initial conditions are not clearly stated in the paper, and it is unclear whether the input force acts only in the y direction (as it should) or in a different direction. I appreciate any help.
the paper I'm trying to recreate is:
R. M. T. Raja Ismail, M. A. Ahmad, M. S. Ramli, and F. R. M. Rashidi, “Nonlinear Dynamic Modelling and Analysis of a 3-D Overhead Gantry Crane System with System Parameters Variation.,” International Journal of Simulation–Systems, Science & Technology, vol. 11, no. 2, 2010.
http://ijssst.info/Vol-11/No-2/paper2.pdf
|
Consider this map
The Contest arena shown in figure 1 consists of two sub arenas, both the sides are identical to each other and their scientists and safe zone locations are similar.
Each sub arena has 3 different colored rooms and a fourth shared room. Each robot will be placed at identical start locations, respective to their arena. These locations will be random and anywhere on the map.
Each room (other than the shared room) will have two entry and exit gates. Both of these gates will be open at all times. The robot can enter and exit from any gate it chooses.
|
I have the mBot robot and I want to program it to follow the line. So far it can pass any kind of line that is >90°.
I want it to be able to pass 90°-ish angles as well. Like this one:
The problem is that my mBot robot has only 2 line-following sensors (they are 5 mm apart and the line is 2 cm wide), so I can't rely on the sensors alone.
Most of the time it just follows the line, and when it's supposed to turn it misses the line (ends up on the white) and goes back to get back on track. Once it's back on the black line it once again tries to go forward, but ends up on the white instead of taking the turn. This happens endlessly.
Sometimes it passes the angle by going back and forth and accidentally turning, but that's not even a workaround, let alone a solution.
Here's a test course of the first round of the competition.
My robot can pass this without a problem, but it gets stuck on this (poorly edited, sorry) course:
It can't pass the 20 block if the robot enters it from a 15 or 20 block (so basically it gets stuck if it's coming from an angle and hits a 90 degree turn).
The sensor value could be read as either 0, 1, 2 or 3 depending on what the robot currently sees:
0 - on the line
1 - on the right of the line
2 - on the left of the line
3 - not on the line
Pseudo code of my current program:
loop forever:
if (on the right of the line):
turn_left()
if (on the left of the line):
turn_right()
if (on the line):
go_forward()
if (not on the line):
go_backwards()
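(For reference, here is the pseudo-code above rendered as a minimal Arduino-style sketch; the helper functions are hypothetical stand-ins for the mBot API, not real library calls.)
// Hypothetical stand-ins for the mBot line-follower API; replace with the real calls.
int readLineSensor() { return 0; }   // 0 = on line, 1 = right of line, 2 = left of line, 3 = off line
void goForward()   { /* drive both motors forward */ }
void goBackwards() { /* drive both motors backward */ }
void turnLeft()    { /* pivot left */ }
void turnRight()   { /* pivot right */ }

void setup() {}

void loop() {
  switch (readLineSensor()) {
    case 0: goForward();   break;  // centred on the line
    case 1: turnLeft();    break;  // drifted to the right of the line
    case 2: turnRight();   break;  // drifted to the left of the line
    case 3: goBackwards(); break;  // lost the line entirely
  }
}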
So how would I go about taking such sharp turns?
|
What is the difference between degrees of freedom (DOF) and degrees of motion (DOM)? I know that DOF is the number of independent movements a manipulator arm can make, and that a robot system can have at most 6 independent DOF but an unlimited number of DOM, yet I still cannot tell them apart.
|
I need to get my drone flying still enough that I can rest a glass of water on it.
I've tried a few KK boards and an APM 2.6 (with 3.1 software). I've balanced the props, set the PID gains, and run auto-trim/autotune, yet the drone still tends to drift inconsistently one way or another.
What is a plausible way to completely isolate drift?
|
I have succeeded in making my first quadcopter from scratch with a ready-made frame. I designed the flight controller myself with help from the YMFC-3D YouTube series of videos: https://www.youtube.com/watch?v=2pHdO8m6T7c
But in the process, I discovered that using the Euler angles (the 'ypr' values) from the MPU6050 as the feedback to the PID loop makes the quadcopter super difficult to tune, and even then it doesn't fly great.
Whereas, although not intuitive to me, using the gyroscope values with a complementary filter instantly made the quad respond much better, and the tuning was also not too difficult.
Let me clearly define the response in both cases.
Using ypr values:-
+Always keeps overshooting or 'underreaching'
+Very small range of values that can let the quad fly stable
+Drastic Reactions to extreme values of P (Kp)values
Using gyro values:-
+Reaction is much more stable
+Tuning the PID was also simple
+ Even under high values of P(Kp) the quad might crash due to oscillations but not flip or react extremely
Below is a portion of the PID loop:
//gyrox_temp is the current gyroscope output
gyro_x_input=(gyro_x_input*.8)+(gyrox_temp*0.2);//complementary filter
pidrate_error_temp =gyro_x_input - setpoint;//error value for PID loop
pidrate_i_mem_roll += pidrate_i_gain_roll * pidrate_error_temp;
//integral portion
pidrate_output_roll = pidrate_p_gain_roll * pidrate_error_temp + pidrate_i_mem_roll + pidrate_d_gain_roll * (pidrate_error_temp - pidrate_last_roll_d_error);
//output of the pid loop
//pidrate_p_gain_roll - Kp
//pidrate_i_gain_roll - Ki
//pidrate_d_gain_roll - Kd
//this output is given as the pwm signal to the quad plus throttle
|
I'm an Electronics student taking a module in Robotics.
From the example,
I understand line 1 as follows: the Jacobian is found from the time derivative of the kinematics equation and thus relates joint velocities to end-effector velocity.
I do not understand why the transpose has been taken on line 3 and how line 4 is produced.
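For context (this may or may not be the step taken in the example, which is not reproduced here): the Jacobian maps joint velocities to end-effector velocity, and by the principle of virtual work its transpose maps an end-effector force/torque to joint torques, which is the most common reason a transpose appears:
$$\dot{x}=J(q)\,\dot{q},\qquad \tau = J^{T}(q)\,F$$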
|
I want to measure the acceleration (forward and lateral separately) using an android smartphone device in order to be able to analyse the driving behavior.
My approach would be as follows:
1. Aligning coordinate systems
Calibration (no motion / first motion):
While the car is stationary, I would measure gravity using Sensor.TYPE_GRAVITY and rotate it onto the z-axis (pointing downwards, assuming a flat surface). That way, the pitch and roll angles should be near zero and equal to the angles of the car relative to the world.
After this, I would start moving straight forward with the car to get a first motion indication using Sensor.TYPE_ACCELEROMETER and rotate this measurement onto the x-axis (pointing forward). This way, the yaw angle should be equal to the vehicle's heading relative to the world.
Update Orientation (while driving):
To be able to keep the coordinate systems aligned while driving I am going to use Sensor.TYPE_GRAVITY to maintain the roll and pitch of the system via
where A_x,y,z is the acceleration of gravity.
Usually, the yaw angle would be maintained via Sensor.ROTATION_VECTOR or Sensor.MAGNETIC_FIELD. However, the reason for not using them is that I am also going to use the application in electric vehicles. The high currents and voltages produced by the drivetrain would presumably degrade the accuracy of those sensor values. Hence, the best alternative I know of (although not optimal) is using the GPS course to maintain the yaw angle.
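For reference, one common convention (axis conventions differ between devices, so treat this only as an assumed form) for recovering roll and pitch from the measured gravity vector $A_{x,y,z}$ is
$$\phi_{roll}=\operatorname{atan2}(A_y,\,A_z),\qquad \theta_{pitch}=\operatorname{atan2}\!\bigl(-A_x,\ \sqrt{A_y^{2}+A_z^{2}}\bigr).$$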
2. Getting measurements
By applying all the aforementioned rotations it should be possible to maintain alignment between the smartphone's and the vehicle's coordinate systems, hence giving me the pure forward and lateral acceleration values on the x-axis and y-axis.
Questions:
Is this approach applicable or did I miss something crucial?
Is there an easier/alternative approach to this?
|
I want to install ROS on Xubuntu 16.04 (Xenial Xerus). I have followed the instructions on the ROS site, http://wiki.ros.org/jade/Installation/Ubuntu, and did the following. First, I set up my sources.list:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Second, set up keys:
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0xB01FA116
Then, make sure my package is up-to-date:
sudo apt-get update
Last, try to install ROS jade:
sudo apt-get install ros-jade-desktop-full
And get this error:
E: Unable to locate package ros-jade-desktop-full
Where did I go wrong, and how can I get ROS (any version is ok) running on my Xubuntu 16.04?
|
I'm about to start a project, where I'm sniffing data between remote controls and flight controllers on RC copters and doing stuff with that information. Do all (or most) flight controllers use the same protocol to communicate with the remote controls, or does it vary based on which one you buy? I would be testing on drones (DJI phantom and the like).
So, my real question is:
If I want to write something to read the data, will I need to buy a different flight controller for each protocol used, or do they all use the same protocol, and I can just buy one flight controller, and the info I can get out will be the same for all types of flight controllers?
Also, are the protocols only spoken by the ground remote control and the flight controller? Does the receiver care what protocol is being used, or is it just a middle man?
|
Good day,
I have been reading papers about position integration from accelerometer readings.
I have consulted this paper from freescale on how that is achievable and this article regarding leaky integrators to help in preventing accumulation of errors from integration.
I was testing this algorithm by moving the IMU by approximately 0.1 m. The algorithm does get it right at the instant it arrives at approximately 0.1 m; however, when the IMU is left still at that position, the integrated position drifts back to zero.
It turns out the velocity readings become negative for a certain period after reaching 0.1 m.
Does anyone have any suggestions in dealing with this error?
Plots (Red is the position, Blue is the velocity.)
The IMU (accelerometer) was moved alternately between positions of 0 m and 0.1 m, with a stop of approximately 3-5 seconds before moving to the next position.
Actual Data
Desired Data output (Green - Desired position integration)
Code:
// Get acceleration per axis
float AccX = accelmagAngleArray.AccX;
float AccY = accelmagAngleArray.AccY;
float AccZ = accelmagAngleArray.AccZ;
AccX -= dc_offsetX;
AccY -= dc_offsetY;
AccZ -= dc_offsetZ;
//Calculate Current Velocity (m/s)
float leakRateAcc = 0.99000;
velCurrX = velCurrX*leakRateAcc + ( prevAccX + (AccX-prevAccX)/2 ) * deltaTime2;
velCurrY = velCurrY*leakRateAcc + ( prevAccY + (AccY-prevAccY)/2 ) * deltaTime2;
velCurrZ = velCurrZ*0.99000 + ( prevAccZ + (AccZ-prevAccZ)/2 ) * deltaTime2;
prevAccX = AccX;
prevAccY = AccY;
prevAccZ = AccZ;
//Discrimination window for Acceleration
if ((0.12 > AccX) && (AccX > -0.12)){
AccX = 0;
}
if ((0.12 > AccY) && (AccY > -0.12)){
AccY = 0;
}
//Count number of times acceleration is equal to zero to drive velocity to zero when acceleration is "zero"
//X-axis---------------
if (AccX == 0){ //Increment no of times AccX is = to 0
counterAccX++;
}
else{ //Reset counter
counterAccX = 0;
}
if (counterAccX>25){ //Drive Velocity to Zero
velCurrX = 0;
prevVelX = 0;
counterAccX = 0;
}
//Y-axis--------------
if (AccY == 0){ //Increment no of times AccY is = to 0
counterAccY++;
}
else{ //Reset counter
counterAccY = 0;
}
if (counterAccY>25){ //Drive Velocity to Zero
velCurrY = 0;
prevVelY = 0;
counterAccY = 0;
}
//Print Acceleration and Velocity
cout << " AccX = " << AccX ;// << endl;
cout << " AccY = " << AccY ;// << endl;
cout << " AccZ = " << AccZ << endl;
cout << " velCurrX = " << velCurrX ;// << endl;
cout << " velCurrY = " << velCurrY ;// << endl;
cout << " velCurrZ = " << velCurrZ << endl;
//Calculate Current Position in Meters
float leakRateVel = 0.99000;
posCurrX = posCurrX*leakRateVel + ( prevVelX + (velCurrX-prevVelX)/2 ) * deltaTime2;
posCurrY = posCurrY*leakRateVel + ( prevVelY + (velCurrY-prevVelY)/2 ) * deltaTime2;
posCurrZ = posCurrZ*0.99000 + ( prevVelZ + (velCurrZ-prevVelZ)/2 ) * deltaTime2;
prevVelX = velCurrX;
prevVelY = velCurrY;
prevVelZ = velCurrZ;
//Print X and Y position in meters
cout << " posCurrX = " << posCurrX ;// << endl;
cout << " posCurrY = " << posCurrY ;// << endl;
cout << " posCurrZ = " << posCurrZ << endl;
|
I get position information and a corresponding timestamp from a motion tracking system (for a rigid body) at 120 Hz. The position is in sub-millimeter precision, but I'm not too sure about the time stamp, I can get it as floating point number in seconds from the motion tracking software. To get the velocity, I use the difference between two samples divided by the $\Delta t$ of the two samples:
$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-1]}{t[k]-t[k-1]}$.
The result looks fine, but a bit noisy at times. I realized that I get much smoother results when I choose a larger differentiation step $h$, e.g. $h=10$:
$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-h]}{t[k]-t[k-h]}$.
On the other hand, peaks in the velocity signal begin to fade if I choose $h$ too large. Unfortunately, I haven't figured out why I get a smoother signal with a bigger step $h$. Does someone have a hint? Is there a general rule for which differentiation step size is optimal with respect to smoothness vs. accuracy?
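One way to see the trade-off (under the assumption that each position sample carries roughly independent noise of standard deviation $\sigma$): the finite-difference estimate has noise
$$\sigma_{\dot{x}}\approx\frac{\sqrt{2}\,\sigma}{h\,\Delta t},$$
so increasing $h$ reduces the noise roughly as $1/h$, but the estimate then behaves like an average of the true velocity over a window of length $h\,\Delta t$, which is why short peaks are attenuated for large $h$.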
This is a sample plot of one velocity component (blue: step size 1, red: step size 10):
|
I am working on reproducing a robotics paper, first simulating it in MATLAB in order to implement it on a real robot afterwards. The robot's model is:
$$\dot{x}=V(t)cos\theta $$
$$\dot{y}=V(t)sin\theta$$
$$\dot{\theta}=u$$
The idea is to apply an algorithm to avoid obstacles and reach a determined target. This algorithm uses a vision cone to measure the obstacle's properties. The information required by this system is:
1) The minimum distance $ d(t) $ between the robot and the obstacle (this obstacle is modelled as a circle of known radius $ R $).
2) The obstacle's speed $ v_{obs}(t) $
3) The angles $ \alpha_{1}(t)$ and $ \alpha_{2}(t)$ that form the robot's vision cone, and
4) the heading $ H(t) $ from the robot to the target
First a safe distance $ d_{safe}$ between the robot and the obstacle is defined. The robot has to reach the target without being closer than $ d_{safe}$ to the obstacle.
An extended angle $ \alpha_{0} \ge arccos\left(\frac{R}{R+d_{safe}} \right) $ is defined, where $ 0 \le \alpha_{0} \le \pi $
Then the following auxiliary angles are calculated:
$ \beta_{1}(t)=\alpha_{1}(t)-\alpha_{0}(t)$
$ \beta_{2}=\alpha_{2}(t)+\alpha_{0}(t)$
Then the following vectors are defined:
$ l_{1}=(V_{max}-V)[cos(\beta_{1}(t)),sin(\beta_{1}(t))]$
$ l_{2}=(V_{max}-V)[cos(\beta_{2}(t)),sin(\beta_{2}(t))]$
here $ V_{max}$ is the maximum robot's speed and $ V $ a constant that fulfills $ \|v_{obs}(t)\| \le V \le V_{max} $
These vectors represent the boundaries of the vehicle's vision cone.
Given the vectors $ l_{1} $ and $ l_{2}$ , the angle $ \alpha(l_1,l_2)$ is the angle between $ l_{1}$ and $ l_{2} $ measured in counterclockwise direction, with $ \alpha \in (-\pi,\pi) $ . Then the function $f$ is
The evasion maneuver starts at time $t_0$. For that, the robot finds the index $h$:
$h = \arg\min_{j\in\{1,2\}}|\alpha(v_{obs}(t_0)+l_j(t_0),v_R(t_0))|$
where $v_R(t)$ is the robot's velocity vector.
That is, from the two vectors $v_{obs}(t_0)+l_j(t_0)$ we choose the one that forms the smallest angle with the robot's velocity vector. Once $h$ is determined, the control law is applied:
$u(t)=-U_{max}f(v_{obs}(t)+l_h(t),v_R(t))$
$V(t)=\|v_{obs}(t)+l_h(t)\| \quad \quad (1)$
This is a sliding-mode type control law that steers the robot's velocity $v_R(t)$ towards a switching surface equal to the vector $v_{obs}(t)+l_h(t)$. Ideally the robot avoids the obstacle by moving around it at the safe distance.
While the robot is not avoiding an obstacle it follows a control law:
$u(t)=0$
$V(t)=V_{max} \quad \quad (2) $
Hence the rules to switch between the two laws are:
R10: Switching from (2) to (1) occurs when the distance to the obstacle is equal to a constant $C$, i.e. $d(t_0)=C$, and this distance is decreasing in time, i.e. $\dot{d}(t)<0$.
R11: Switching from (1) to (2) occurs when $d(t_*)<1.1a_*$ and the vehicle is pointing towards the target, i.e. $\theta(t_*)=H(t_*)$,
where $a_*=\frac{R}{cos\alpha_0}-R $
Ideally the result should be similar to this
But I'm getting this instead
While I understand the theory, there is obviously a flaw in my implementation that I haven't been able to solve. In my opinion the robot manages to avoid the obstacle, but at a certain point (in the red circle) it turns to the wrong side, making it impossible for the condition $H(t) = \theta(t)$ to be achieved.
I feel that I am not measuring the angle $\alpha$ between $v_{obs}(t)+l_h(t)$ and $v_{R}(t)$ properly, because while debugging I can see that at a certain point it stops switching between negative and positive values and becomes only positive, steering the robot to the wrong side. It also seems to be related to my problem here: Angle to a circle tangent line
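For reference, the signed angle between two planar vectors is usually computed with atan2 of the 2-D cross and dot products, which keeps the result in $(-\pi,\pi]$ with the correct sign and avoids the ambiguity of acos-based formulas. A minimal C++ sketch of that check (the vector names are illustrative, not from the paper):
#include <cmath>
// Signed angle from vector a = (ax, ay) to vector b = (bx, by),
// measured counterclockwise, result in (-pi, pi].
double signedAngle(double ax, double ay, double bx, double by)
{
    double cross = ax * by - ay * bx;  // z-component of the 2-D cross product
    double dot   = ax * bx + ay * by;
    return std::atan2(cross, dot);
}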
|
I am currently trying to compute the Q configuration that moves my robot from its current state, described by the transformation matrix
with rotation
0.00549713 0.842013 -0.539429
0.999983 -0.00362229 0.00453632
0.00186567 -0.539445 -0.842019
and position as this:
-0.0882761
-0.255069
0.183645
To this rotation
0 0.942755 -0.333487
1 0 0
0 -0.333487 -0.942755
and this position
8.66654
19.809
115.771
Due to the drastic change in the Z direction, I thought I could split the path between start and end into small chunks by interpolating data in between, and compute the inverse kinematics for each of these small positions. The problem is that the output I am getting is pretty large, which makes me suspect that some of it might be wrong. The simulation I am using constrains the rotation to 360 degrees, so I think something is going wrong.
The only reason I can think of that would cause this is if the Jacobian I was using had singularities, which is why I assumed that I was running into a singularity issue.
setQ{22.395444, 319.402231, 90.548314, -228.989836, -295.921218, -336.808799}
setQ{8.209388, 362.472468, 108.618073, -232.346755, -299.935844, -334.929518}
setQ{8.479842, 399.892521, 127.432982, -234.017882, -303.852583, -335.063821}
setQ{8.224516, 362.232497, 108.666778, -232.319554, -299.899932, -334.928688}
setQ{7.718908, 286.832458, 71.150606, -228.913831, -291.982659, -334.658147}
setQ{7.468625, 249.092444, 52.400638, -227.206436, -288.018036, -334.522738}
setQ{7.220023, 211.325766, 33.656081, -225.496018, -284.049424, -334.387237}
setQ{-6.134091, -2538.260148, -1283.375216, -96.331289, 7.920957, -324.531125}
setQ{-6.261661, -2577.946595, -1301.730132, -94.403263, 12.176863, -324.388990}
setQ{-6.634286, -2697.165915, -1356.762411, -88.601053, 24.968521, -323.962029}
setQ{-6.991781, -2816.625206, -1411.745985, -82.771641, 37.796090, -323.534239}
setQ{-7.334148, -2936.324468, -1466.680853, -76.915029, 50.659572, -323.105620}
setQ{-7.661386, -3056.263702, -1521.567017, -71.031215, 63.558965, -322.676171}
setQ{-8.642914, -3457.794271, -1704.169136, -51.222052, 106.816303, -321.238686}
setQ{-8.988457, -3619.153075, -1777.058457, -43.213761, 124.230964, -320.661112}
setQ{-9.382564, -3821.451508, -1868.048346, -33.135395, 146.089069, -319.937071}
setQ{-9.528439, -3902.557525, -1904.406419, -29.082892, 154.860242, -319.646810}
setQ{-9.667591, -3983.770196, -1940.742846, -25.018300, 163.647376, -319.356179}
setQ{-9.734645, -4024.416527, -1958.902942, -22.981471, 168.046928, -319.210726}
setQ{-9.986053, -4187.268484, -2031.489209, -14.803929, 185.685040, -318.627992}
setQ{-10.210564, -4350.547057, -2103.988889, -6.578030, 203.386994, -318.043783}
setQ{-10.312734, -4432.346324, -2140.206259, -2.446947, 212.261912, -317.751125}
setQ{-10.453381, -4555.245201, -2194.491727, 3.772345, 225.604215, -317.311448}
setQ{-10.496902, -4596.264820, -2212.576060, 5.851488, 230.059630, -317.164705}
setQ{-10.538741, -4637.311102, -2230.654980, 7.933652, 234.519035, -317.017869}
setQ{-10.617377, -4719.483658, -2266.796587, 12.107048, 243.449816, -316.723922}
setQ{-10.812941, -4966.641247, -2375.091527, 24.699772, 270.337923, -315.839868}
setQ{-10.839651, -5007.927501, -2393.121742, 26.809138, 274.833240, -315.692203}
setQ{-10.888029, -5090.579998, -2429.165939, 31.036936, 283.835844, -315.396596}
setQ is just a function for my simulation; the numbers are the actual Q values for joints 0-5 (I am using a 6-jointed robot, a UR5).
Update
I am using a sphere to compute my desired transformation matrix. The idea is that I want my arm to be on this sphere, pointing inward towards its center.
std::vector<Transform3D<>> pathPlanning::sphere(double dx, double dy, double dz)
{
double r = 5.0; // Radius of the sphere - set to 5.0 cm (TODO: has to be checked if that also is accurate)
cout << "Create a sphere" << endl;
double current_x = this->device->baseTframe(this->toolFrame,this->state).P()[0];
double current_y = this->device->baseTframe(this->toolFrame,this->state).P()[1];
double current_z = this->device->baseTframe(this->toolFrame,this->state).P()[2];
// Formula for sphere (x-x0)²+(y-y0)²+(z-z0)²=r²
// x: x = x_0 + rcos(theta)sin(phi)
// y: y = y_0 + rsin(theta)sin(phi)
// z: z = z_0 + rcos(phi)
// Angle range: 0 <= theta <= 2M_PI ; 0 <= phi <= M_PI
double obj_x = current_x + dx;
double obj_y = current_y + dy;
double obj_z = current_z + dz;
std::vector<Transform3D<>> out;
int count = 32;
for(double azimuthal = 0; azimuthal <= M_PI ; azimuthal+=0.01 )
{
for(double polar = 0.35; polar <= M_PI-0.35 ; polar+=0.01 )
{
double sphere_x = obj_x + r*cos(azimuthal)*sin(polar);
double sphere_y = obj_y + r*sin(azimuthal)*sin(polar);
double sphere_z = obj_z + r*cos(polar);
//string text = to_string(sphere_x) + " , " + to_string(sphere_y)+ " , " + to_string(sphere_z);
//positions << text << endl;
Transform3D<> transformation_matrix = transform(obj_x,obj_y,obj_z,sphere_x,sphere_y,sphere_z);
if(0.1<(transformation_matrix.P()[0] - current_x) || 0.1<(transformation_matrix.P()[1] - current_y) || 0.1<(transformation_matrix.P()[2] - current_z))
{
cout << "Interpolate: " << endl;
std::vector<Transform3D<>> transformation_i = invKin_LargeDisplacement(transformation_matrix);
out.insert(out.end(),transformation_i.begin(),transformation_i.end());
cout << out.size() << endl;
cout << "only returning one interpolation onto the sphere!" << endl;
return transformation_i;
}
else
{
cout << "OK" << endl;
out.push_back(transformation_matrix);
}
if(count == 32) //TODO: Why...... is this occuring?
{
//cout << "Theta: " << theta << " Phi: " << phi << endl;
//cout << sphere_x << " , " << sphere_y <<" , "<< sphere_z << endl;
count = 0;
}
else
{
count++;
}
}
}
return out;
}
This function provides me with the points on the sphere, which are used to create my rotation matrix using transform.
Transform3D<> pathPlanning::transform(double obj_x, double obj_y, double obj_z, double sphere_x, double sphere_y ,double sphere_z)
{
// Z-axis should be oriented towards the object.
// Rot consist of 3 direction vector [x,y,z] which describes how the axis should be oriented in the world space.
// Looking at the simulation the z-axis is the camera out. X, and Y describes the orientation of the camera.
// The vector are only for direction purposes, so they have to be normalized....
// TODO: case [0 0 -1]... Why is it happening at what can be done to undo it?
cout << "inside Transform" << endl;
cout << obj_x << "," << sphere_x << " ; " << obj_y << " , " << sphere_y <<" ; "<< obj_z << " , " << sphere_z << endl;
Vector3D<> dir_z((obj_x - sphere_x), (obj_y - sphere_y), (obj_z - sphere_z));
//Vector3D<> dir_z((sphere_x-obj_x), (sphere_y - obj_y), (sphere_z-obj_z));
dir_z = normalize(dir_z);
Vector3D<> downPlane(0.0,0.0,-1.0);
Vector3D<> dir_x = cross(downPlane,dir_z);
dir_x = normalize(dir_x);
Vector3D<> dir_y = cross(dir_z,dir_x);
dir_y = normalize(dir_y);
Rotation3D<> rot_out (dir_x,dir_y,dir_z); // [x y z]
Vector3D<> pos_out(sphere_x,sphere_y,sphere_z);
Transform3D<> out(pos_out,rot_out);
cout << "desired: " << out << endl;
return out;
}
The transform function basically computes the rotation matrix. The math is based on this post by @Ben, which is an answer to a similar problem I am having.
Update
The error with the rotation matrix was due to the polar coordinate being 0, so that sin(0) = 0.
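That is exactly the degenerate case of the construction above: when dir_z is (anti)parallel to the fixed down vector (0,0,-1), the cross product is the zero vector and normalizing it produces NaNs. A common workaround is to switch to a fallback reference axis in that case; a minimal sketch of the guard, written with a small stand-in vector type rather than the rw::math classes (this is my suggestion, not part of the original code):
#include <cmath>
// Stand-in 3-vector; in the real code these would be rw::math::Vector3D operations.
struct Vec3 { double x, y, z; };
Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
double norm(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
// Pick a reference vector that is guaranteed not to be parallel to dir_z,
// so that cross(reference, dir_z) never degenerates to the zero vector.
Vec3 pickReference(const Vec3& dir_z)
{
    const Vec3 down{0.0, 0.0, -1.0};
    if (norm(cross(down, dir_z)) < 1e-6)   // dir_z is (almost) vertical
        return Vec3{1.0, 0.0, 0.0};        // fall back to the world x-axis
    return down;
}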
I made this plot displaying the determinant of the Jacobian while I compute the inverse kinematics for the large displacement. For each inverse kinematics iteration, I set the robot to the new q_i, use that as the current state, and continue computing until I reach the end configuration.
It seems that a lot of them go toward a singularity, or in general a pretty low number.
Update
Again, I think the singularities might be the culprit here.
determinant: 0.0424284
Q{13.0099, -46.6613, -18.9411, 2.38865, 5.39454, -4.53456}
determinant: -0.0150253
Q{47.1089, -0.790356, 6.89939, -2.725, -1.66168, 11.2271}
determinant: -0.0368926
Q{15.7475, 8.89658, 7.78122, -2.74134, -5.32446, 1.11023}
determinant: -0.0596228
Q{180.884, 66.3786, 17.5729, 9.21228, -14.9721, -12.9577}
determinant: -0.000910399
Q{5426.74, 5568.04, -524.078, 283.581, -316.499, -67.3459}
determinant: -0.0897656
Q{16.6649, -37.4239, -34.0747, -16.5337, -3.95636, -7.31064}
determinant: -0.00719097
Q{-1377.14, 167.281, -125.883, -10.4689, 179.78, 56.3877}
determinant: 0.0432689
Q{22.2983, -10.1491, -15.0894, -4.41318, -2.07675, -3.48763}
determinant: -0.0430843
Q{82.6984, -39.02, -24.5518, 13.6317, 4.17851, -14.0956}
determinant: -0.0137243
Q{425.189, -9.65443, 20.9752, 7.63067, 25.4944, -52.4964}
Every time I compute a new Q, I set the robot to that state and perform inverse kinematics from there. Q is the vector of joint angles for the 6 joints.
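One standard way to keep the update bounded when the determinant gets small is damped least squares (Levenberg-Marquardt style): instead of the plain pseudo-inverse, use $dq = J^T(JJ^T + \lambda^2 I)^{-1}du$, which trades a little tracking accuracy for bounded joint steps near singularities. A minimal Eigen sketch, assuming jq and du are computed as in the code above (the damping value is a tuning guess):
#include <Eigen/Dense>
// Damped least squares step: dq = J^T (J J^T + lambda^2 I)^(-1) du.
// Near a singularity the damping keeps (J J^T + lambda^2 I) well conditioned,
// so dq stays bounded instead of blowing up like the plain pseudo-inverse.
Eigen::VectorXd dampedLeastSquares(const Eigen::MatrixXd& jq,
                                   const Eigen::VectorXd& du,
                                   double lambda = 0.05)
{
    Eigen::MatrixXd JJt = jq * jq.transpose();
    Eigen::MatrixXd damped = JJt + lambda * lambda *
                             Eigen::MatrixXd::Identity(JJt.rows(), JJt.cols());
    return jq.transpose() * damped.ldlt().solve(du);
}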
Update
Interpolation is done by linearly dividing the path from start to end into a specified number of data points.
This plot shows the transformation matrices generated from the interpolation, with their position parts plotted. The red dots are the path (every 1000th position). The blue ball is the object I want to track, and the green dots represent the sphere. As I am only doing this for the first point on the sphere, it only hits one point on the sphere, the top point, which the plot also shows.
The rotation doesn't show much change, which also makes sense based on the difference between the current and desired rotations.
Update
My InvKin Implementation for LargeDisplacements:
std::vector<Q> pathPlanning::invKin_largeDisplacement(std::vector<Transform3D<>> t_tool_base_desired_i)
{
Device::Ptr device_backup = this->device; //Read in device parameter
WorkCell::Ptr workcell_backup = this->workcell; //Read in workcell parameter
State state_backup = this->state;
std::vector<Q> output;
for(unsigned int i = 0; i<t_tool_base_desired_i.size(); ++i)
{
Transform3D<> T_tool_base_current_i = device_backup->baseTframe(this->toolFrame,state_backup); //Read in Current transformation matrix
Eigen::MatrixXd jq(device_backup->baseJframe(this->toolFrame,state_backup).e().cols(), this->device.get()->baseJframe(this->toolFrame,state_backup).e().rows());
jq = this->device.get()->baseJframe(this->toolFrame,state_backup).e(); // Get the jacobian for current_configuration
//Least square solver - dq = [j(q)]T (j(q)[j(q)]T)⁻1 du <=> dq = A*du
Eigen::MatrixXd A (6,6);
//A = jq.transpose()*(jq*jq.transpose()).inverse();
A = (jq*jq.transpose()).inverse()*jq.transpose();
Vector3D<> dif_p = t_tool_base_desired_i[i].P()-T_tool_base_current_i.P(); //Difference in position
Eigen::Matrix3d dif = t_tool_base_desired_i[i].R().e()- T_tool_base_current_i.R().e(); //Differene in rotation
Rotation3D<> dif_r(dif); //Making a rotation matrix the the difference of rotation
RPY<> dif_rot(dif_r); //RPY of the rotation matrix.
Eigen::VectorXd du(6); //Creating du
du(0) = dif_p[0];
du(1) = dif_p[1];
du(2) = dif_p[2];
du(3) = dif_rot[0];
du(4) = dif_rot[1];
du(5) = dif_rot[2];
Eigen::VectorXd q(6);
q = A*du; // computing dq
Q q_current;
q_current = this->device->getQ(this->state);
Q dq(q);
Q q_new = q_current+ dq; // computing the new Q angles
output.push_back(q_new); // store it in the output vector
device_backup->setQ(q_new,state_backup); //Set the robot to the calculated state.
}
return output;
}
I am pretty sure that my interpolation works, as the plot shows. My inverse kinematics, on the other hand, I am not so sure about.
Update
@Chuck mentions in his answer that it would be a good idea to check the core functionality, which might shed some light on what could be going wrong.
I tried it with an inverse kinematics function I know works, which didn't return any result, which makes me doubt whether the transformation matrix I create is accurate.
The robot simulation is the one shown above. The transform function shown above is the one I use to compute my desired pose and feed my inverse kinematics. Is something set up incorrectly?
Update
@Chuck came up with a different approach to my problem, which only has 3 DOF (the position). I chose to change track and perform a simple inverse kinematics step given a displacement dx, dy, dz, which for some reason isn't working well for me, even for small differences.
Here is my code:
std::vector<Q>pathPlanning::invKin(double dx, double dy , double dz)
{
kinematics::State state = this->state;
Transform3D<> t_tool_base_current = this->device.get()->baseTframe(this->toolFrame,state);
cout <<"Current: "<< t_tool_base_current.P().e()<< endl;
Vector3D<> P_desired(0.000001+t_tool_base_current.P().e()[0],t_tool_base_current.P().e()[1],t_tool_base_current.P().e()[2]);
cout <<"Desired: " <<P_desired << endl;
Transform3D<> t_tool_base_desired(P_desired,t_tool_base_current.R());
Eigen::MatrixXd jq(this->device.get()->baseJframe(this->toolFrame,state).e().cols(), this->device.get()->baseJframe(this->toolFrame,state).e().rows());
jq = this->device.get()->baseJframe(this->toolFrame,state).e();
//Least square solver - dq = [j(q)]T (j(q)[j(q)]T)⁻1 du <=> dq = A*du
Eigen::MatrixXd A (6,6);
//A = jq.transpose()*(jq*jq.transpose()).inverse();
A = (jq*jq.transpose()).inverse()*jq.transpose();
Vector3D<> dif_p = t_tool_base_desired.P()-t_tool_base_current.P();
cout <<"difference: " <<dif_p << endl;
Eigen::VectorXd du(6);
du(0) = dif_p[0];
du(1) = dif_p[1];
du(2) = dif_p[2];
du(3) = 0;
du(4) = 0;
du(5) = 0;
Eigen::VectorXd q(6);
q = A*du;
Q q_current;
q_current = this->device->getQ(this->state);
Q dq(q);
Q q_new = q_current+ dq;
std::vector<rw::math::Q> output;
if(!collision(q_new))
{
output.push_back(q_new);
}
else
{
cout << endl;
cout << q_new << endl;
}
return output;
}
which outputs this
Current: -0.000799058
-0.282
0.99963
Desired: Vector3D(-0.000789058, -0.282, 0.99963)
difference: Vector3D(1e-05, 0, 0)
setQ{1.559142, 110474925659325248.000000, -1834.776226, 55426871347211368.000000, 0.068436, 88275880260745.328125}
setQ is the state which moves the robot to the desired state.
Either something is wrong with my implementation, or it is a singularity.
This is especially puzzling because I am not moving it that much (0.00001)!
Update
I think I have solved the mystery: it must be the sphere function, which creates points that are outside the reach of the robot!
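That diagnosis is easy to check numerically: the UR5 has a nominal reach of roughly 0.85 m from the base, so any sphere point farther from the base origin than that (minus some margin) cannot have an inverse kinematics solution. A minimal sketch of such a filter (the margin is my guess, and the units must match whatever the sphere function produces):
#include <cmath>
// Reject candidate tool positions outside the (approximate) reachable workspace of a UR5.
// x, y, z are expressed in the robot base frame, in meters.
bool isRoughlyReachable(double x, double y, double z,
                        double maxReach = 0.85, double margin = 0.05)
{
    double distanceFromBase = std::sqrt(x * x + y * y + z * z);
    return distanceFromBase < (maxReach - margin);
}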
|
I am at the moment trying to implement an inverse kinematics function whose purpose is to take a desired transformation matrix and the current transformation matrix, and compute the Q states needed to move my robot arm from the current state to the end state.
I have already written the code, but my simulation isn't showing the right path, or what I would expect it to be, which makes me unsure whether my implementation is correct. Could someone comment on my implementation and maybe spot an error?
std::vector<Q> pathPlanning::invKin_largeDisplacement(std::vector<Transform3D<>> t_tool_base_desired_i)
{
for(unsigned int i = 0; i<t_tool_base_desired_i.size(); ++i)
{
Transform3D<> T_tool_base_current_i = device_backup->baseTframe(this->toolFrame,state_backup);
Eigen::MatrixXd jq(device_backup->baseJframe(this->toolFrame,state_backup).e().cols(), this->device.get()->baseJframe(this->toolFrame,state_backup).e().rows());
jq = this->device.get()->baseJframe(this->toolFrame,state_backup).e();
//Least square solver - dq = [j(q)]T (j(q)[j(q)]T)⁻1 du <=> dq = A*du
Eigen::MatrixXd A (6,6);
//A = jq.transpose()*(jq*jq.transpose()).inverse();
A = (jq*jq.transpose()).inverse()*jq.transpose();
Vector3D<> dif_p = t_tool_base_desired_i[i].P()-T_tool_base_current_i.P(); // Difference in position between current_i and desired_i
Eigen::Matrix3d dif = t_tool_base_desired_i[i].R().e()- T_tool_base_current_i.R().e(); // Difference in rotation between current_i and desired_i
Rotation3D<> dif_r(dif); //Construct rotation matrix
RPY<> dif_rot(dif_r); // compute RPY from rotation matrix
//Jq*dq = du
Eigen::VectorXd du(6);
du(0) = dif_p[0];
du(1) = dif_p[1];
du(2) = dif_p[2];
du(3) = dif_rot[0];
du(4) = dif_rot[1];
du(5) = dif_rot[2];
Eigen::VectorXd q(6);
q = A*du; // Compute change dq
Q q_current;
q_current = this->device->getQ(this->state); // Get Current Q
Q dq(q);
Q q_new = q_current+ dq; // compute new Q by adding dq
output.push_back(q_new); // Pushback to output vector
device_backup->setQ(q_new,state_backup); //set current state to newly calculated Q.
}
return output;
}
Example of output:
Q{-1.994910, -94.421754, -123.448429, 15.218864, 6.602184, -13.742988}
Q{2627.867315, -2048.863588, -51.340574, 287.654959, 270.187026, 258.581800}
Q{12941.812459, -536.870516, -294.362593, -2145.963577, -31133.660814, -4742.343433}
Q{32.044799, -14.220020, -14.312226, -12.444921, 12.269179, -24.393637}
Q{125.537278, 28.626924, -55.646716, -20.945348, 17.536762, -2.656717}
Q{9.514525, -107.455064, -17.009190, -15.245588, -0.960273, -2.010570}
Q{8.255582, -3.010934, -4.882207, -1.369533, 0.848644, 1.175172}
Q{208.655993, -28.443465, -64.413952, -3.129896, 13.063806, -6.042187}
Q{-73.706483, -20.381540, -5.306434, -1.204419, -4.035149, 21.806934}
Q{10.003481, 10.867394, 13.256192, -6.491445, -1.711469, 2.896646}
Q{24.890626, -72.265307, -94.886507, 12.327304, -4.425786, 4.188531}
Q{7.111258, 31.500732, -0.111033, -20.434697, 5.302118, 1.781690}
Q{477.993581, 659.221820, 19.819916, -88.627757, 65.850191, -77.267367}
Q{-30.672145, -53.496243, -18.170871, 83.648574, 48.311796, -28.015005}
Q{-36.677982, -15.908633, 17.751008, 0.995766, -0.500259, 9.409435}
Q{114246.358249, -10664.813432, -75.904830, 462.907904, 7992.514723, -18484.319327}
Q{83.827086, -75.899321, -38.576446, 37.266068, 47.843725, 39.096061}
Q{-119.682661, -774.773093, -251.969174, 23.212110, -42.662580, 53.247454}
Q{98.608881, -28.013383, 132.896921, 17.121488, 36.916894, -14.627180}
Q{-11519.051453, 5761.564318, -364.916044, -1188.567128, -2582.813750, -462.784007}
Q{54802.605226, 40971.776641, 10204.739981, -654.963987, -244.277958, -8618.970216}
Q{-21.334047, -14.314134, 17.714174, 2.463993, 0.963385, 5.304530}
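One thing that usually tames outputs like these is limiting the size of each step: the Jacobian is only a local linearization, so du (and the resulting dq) must stay small and the update must be repeated until convergence. A minimal sketch of clamping the norm of the task-space step before applying it (the maximum step size is an arbitrary tuning value):
#include <Eigen/Dense>
// Scale the task-space step so its norm never exceeds maxStep.
// The Jacobian is only valid locally, so large du values produce meaningless dq;
// taking many small, clamped steps behaves much better.
Eigen::VectorXd clampStep(const Eigen::VectorXd& du, double maxStep = 0.01)
{
    double n = du.norm();
    if (n > maxStep)
        return du * (maxStep / n);
    return du;
}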
|
I need to calculate the pose of a camera using an image of an artificial landmark. For this purpose I am trying to use the Perspective-n-Point approach, so I can calculate it using the intrinsic camera matrix, the world coordinates of the landmark (I am using 4 points) and its projection in the image.
There are some algorithms to solve this (PnP, EPnP, RPnP, etc) and I am trying to use the RPnP. I have found an implementation of this here:
http://xuchi.weebly.com/rpnp.html
I used this code but I am having some problems because I can't obtain the correct pose.
I am using P. Corke's Robotics Toolbox for MATLAB to create a CentralCamera with a known pose and to calculate the projection of the landmark in this camera, but the rotation and translation that RPnP returns are not the same as the ones I defined.
Has anyone used this RPnP algorithm to solve this kind of problem?
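As a sanity check it can help to run the same four correspondences through another solver, for example OpenCV's solvePnP, and compare against the pose defined in the toolbox; if both solvers disagree with it in the same way, the mismatch is usually a frame convention (world-to-camera versus camera-to-world) rather than the algorithm itself. A minimal C++ sketch with placeholder values for the points and intrinsics:
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>
int main()
{
    // Four known 3-D landmark corners in the world frame (placeholder values).
    std::vector<cv::Point3f> objectPoints = {
        {0.f, 0.f, 0.f}, {0.1f, 0.f, 0.f}, {0.1f, 0.1f, 0.f}, {0.f, 0.1f, 0.f}};
    // Their measured projections in the image (placeholder values).
    std::vector<cv::Point2f> imagePoints = {
        {320.f, 240.f}, {400.f, 240.f}, {400.f, 320.f}, {320.f, 320.f}};
    // Intrinsics and distortion coefficients from calibration (placeholders).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);
    cv::Mat rvec, tvec;  // pose of the world frame expressed in the camera frame
    cv::solvePnP(objectPoints, imagePoints, K, dist, rvec, tvec);
    cv::Mat R;
    cv::Rodrigues(rvec, R);  // convert the Rodrigues vector to a 3x3 rotation matrix
    return 0;
}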
|
Question: A PID controller has three parameters Kp, Ki and Kd which could affect the output performance. A differential driving robot is controlled by a PID controller. The heading information is sensed by a compass sensor. The moving forward speed is kept constant. The PID controller is able to control the heading information to follow a given direction. Explain the outcome on the differential driving robot performance when the three parameters are increased individually.
This is a question that has come up in a past paper but most likely won't show up this year; it still worries me, though. It's the only question that has had me thinking for quite some time.
I'd love an answer in simple terms. Most of what I've read on the internet doesn't make much sense to me, as it goes heavily into detail and off topic for my case.
My take on this:
I know that the proportional term, Kp, is entirely based on the error and that, let's say, double the error would mean doubling Kp (applying proportional force). This therefore implies that increasing Kp is a result of the robot heading in the wrong direction so Kp is increased to ensure the robot goes on the right direction or at least tries to reduce the error as time passes so an increase in Kp would affect the robot in such a way to adjust the heading of the robot so it stays on the right path.
The derivative term, Kd, is based on the rate of change of the error so an increase in Kd implies that the rate of change of error has increased over time so double the error would result in double the force. An increase by double the change in the robot's heading would take place if the robot's heading is doubled in error from the previous feedback result. Kd causes the robot to react faster as the error increases.
An increase in the integral term, Ki, means that the error is increased over time. The integral accounts for the sum of error over time. Even a small increase in the error would increase the integral so the robot would have to head in the right direction for an equal amount of time for the integral to balance to zero.
I would appreciate a much better answer and it would be great to be confident for a similar upcoming question in the finals.
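For concreteness, the controller being described is typically implemented as a discrete PID on the heading error, whose output is added to one wheel speed and subtracted from the other while the forward speed stays constant. A minimal C++ sketch (the structure is generic; the gains are whatever you tune):
// Discrete PID on the heading error of a differential-drive robot.
// error = desiredHeading - measuredHeading, wrapped to [-pi, pi].
struct HeadingPid {
    double kp, ki, kd;
    double integral = 0.0;
    double prevError = 0.0;
    // Returns the turn command: add it to one wheel speed, subtract it from the other.
    double update(double error, double dt)
    {
        integral += error * dt;                        // Ki acts on the accumulated error
        double derivative = (error - prevError) / dt;  // Kd acts on the error rate
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};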
|
What is the wheel base distance that should be used for the create2 to calculate angle? I have seen 230.8mm in code samples but the manual seems to indicate 235.0 mm.
|
I am working on a quadcopter project based on Arduino board, my system is powered by a 4S LiPo battery (14.8V) but the motors behave differently as the battery voltage drops, when discharging.
Is there any way that I can make the motors behave the same until a minimum value of, say, 5 volts?
My current system works fine at the range from 14.8 to 10 volts, but below that I can't even hover.
|
Lets say that I needed to send sensor readings in increments of 100 bytes from a micro controller to a laptop with sub 2 ms latencies in real time (the data needs to be processed and acted upon immediately (to control a robot)). What interfaces would one use?
FTDI usb-serial converters aren't an option because they introduce 5-10 ms latencies both ways. PCI cards are an option though.
|
Is there something like an all-in-one satellite based localization solution that would contain both hardware and software to do GNSS localization for robotics? I mean a package that would also contain an IMU, would fuse it with GPS and filter the result accordingly and then provide a software API to query for location/speed etc.
I am interested rather in some affordable solution but is there some professional hardware too?
I am trying to implement this for my mobile robot, and I realize that a smartphone-grade GPS (Samsung J5) gives me better preliminary results than a u-blox eval board (this NEO-M8T with integrated antenna and ground plane). I wonder why; I guess Android may fuse the IMU and get better readings even with a worse antenna?
|
I'm working on a robot that is controlled by an Xbox controller connected to a Windows computer, with commands sent to a PCDuino through a TCP connection. I have it working by sending a string of 1's and 0's to tell the PCDuino which motors to turn on. I'm trying to optimize it by just sending an int and using bit masks to make the decisions on the PCDuino, but I can't get the PCDuino to receive the int correctly. I tested the Windows function sending the command with sokit and it's sending the correct values, but the PCDuino is receiving the same number even when the commands change.
This is what its doing:
Windows -> PCDuino
command = 1 -> sendBuff = 73932
cmdstring = 1 -> n = 1
command = 1025 -> sendBuff = 73932
cmdstring = 1025 -> n = 4
My windows functions are:
bool Client::Send(char * smsg)
{
int iResult = send(ConnectSocket, smsg, strlen(smsg), 0);
if (iResult == SOCKET_ERROR)
{
std::cout << "Sending Message has failed: " << WSAGetLastError() << "\n";
Stop();
return false;
}
return true;
}
bool sendCommand()
{
cmdbuffer << command;
cmdstring = cmdbuffer.str();
if (!client->Send((char *)cmdstring.c_str()))
{
std::cout << "Disconnected from Server. Press Enter to Exit";
std::cin.ignore();
std::cin.get();
return false;
}
return true;
}
PCDuino Loop Function
void loop()
{
recBuff = 0;
deviceFlag = 0;
//Read Socket
/******************************************************************************/
read(connfd, sendBuff, strlen(sendBuff));
recBuff = atoi(sendBuff);
/******************************************************************************/
//Set Current Device to Receive Instructions From
checkAuto(recBuff, currDevice);
//Find Current Device of Command
deviceFlag = checkDevice(recBuff);
//If Current Device and Set Device are the Same Parse Command
if (deviceFlag == currDevice)
{
parseHex(recBuff);
}
usleep(50000);
}
I have a printf after the read call and that's where I am getting the 73932 number. I think I have included everything you need, but if there's anything else I should add, let me know. I'm stumped; I don't know if it's just a casting problem or what.
Update 1
What I have before the setup and loop functions on the PCDuino is:
int listenfd = 0, connfd = 0;
int n;
struct sockaddr_in serv_addr;
char sendBuff[1025];
time_t ticks;
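One detail worth double-checking on the receiving side is how many bytes read() is asked for and whether the buffer is null-terminated before atoi(): strlen() of an uninitialized or stale buffer does not reflect the length of the incoming message. A minimal sketch of a bounded, terminated read (a generic POSIX pattern, not a claim about where the bug is):
#include <unistd.h>
#include <cstdlib>
// Read at most sizeof(buf)-1 bytes, null-terminate, then convert.
// n is the number of bytes actually received on this call.
int readCommand(int connfd)
{
    char buf[1025];
    ssize_t n = read(connfd, buf, sizeof(buf) - 1);
    if (n <= 0)
        return -1;       // connection closed or error
    buf[n] = '\0';       // terminate before parsing
    return atoi(buf);
}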
|
I've built a quadcopter using the Dji Wookong-M. As of a couple of weeks ago I have been able to get everything to work except for one small thing. When I throttle up the Drone tends to flip to the side. I have tested all the motors over and over again and I know that they are spinning the right direction and that I have the right props on the right motors. I tested on both grass and concrete but both times it flipped. It starts to flip once the throttle is past 50%. I don't know if it is catching or if something is off balance although I don't think this is the problem since the quadcopter tips different directions almost every time. If any one could tell me what is wrong I would appreciate it a lot since my project is due in 2 1/2 weeks.
Thanks in Advance
|
A reviewer of the last paper I submitted replied that it is very dangerous to update a PID with the following kind of formula (the paper is about quadrotor control):
$$
K_p (t + 1) = K_p (t)+e(t) (μ_1 (Pe(t)) + μ_4 (Pe(t)))
$$
$Pe(t)$ is the percentage relationship between the desired angles and the real angles, and $e(t)$ is the difference between those angles. $μ_1$ and $μ_4$ are membership functions of a fuzzy system. I think the reviewer is talking about the time-increment update rather than the fuzzy usage and the specific formula.
How can the stability of this update law be tested?
EDIT:
The membership functions are represented in the following graph:
$e(t)$ is not the absolute difference between the angles, just the difference; it can be negative.
|
In Tad McGeer's 1990 work, Passive Dynamic Walking, he mentions the rimless wheel model, which is used to approximate bipedal locomotion. I can't understand why the angular momentum is written as follows.
$H^-=(\cos 2\alpha_0+r^2_{gyr})ml^2\Omega^-$
I have the following questions:
Shouldn't the angular momentum be $I\omega$, i.e. $ml^2\Omega$ in the paper's notation?
If $\alpha_0$ is $\frac{\pi}{2}$ and $r_{gyr}$ approaches 0, shouldn't the angular momentum before impact, $H^-$, be negative? Then how does the conservation work?
|
I recently bought a RC car kit and after 10 minutes it stopped going.
When I throttle, I can see the motor trying to spin but it will just grind and get hot quite fast.
The motor does move if I disconnect it from the big gear, but not as fast as it did when new and it will still get very hot. Also, I can stop it with my fingers with a very slight touch.
I don't know anything about motors or ESCs, so I'm not sure if my problem is the motor or the ESC. Did I burn it out?
|
When looking at the robotic hands made by researchers that are said to be rather close to a real human hand, they can easily cost tens of thousands of dollars.
What makes them so expensive? Sure, there are lots of joints where parts must move, but it's still hard to see how they can cost so much, even with highly precise servomotors.
What is so expensive when trying to build a humanoid hand? How can we make it less expensive? What can these expensive hands do that a cheap DIY hand project can't?
Thank you.
|
I'm trying to build a test-automation robotic arm which can repeatedly present an ID-tag (such as RFID or NFC card or fob) to a card reader.
I suspect our reader fails either (a) after hundreds of presentations or due to fast presentations or (b) at a specific moment in the reader duty cycle.
The tag needs to move in a well-controlled manner:
Quickly present the card,
Pause (mark)
Quickly remove the card,
Pause (space)
Repeat at 1.
I'm calling the present/remove sequence the mark-space ratio for simplicity.
The tests I want to perform involve varying (a) the frequency and (b) the mark-space ratio, to (a) stress-test and (b) boundary-test the re-presentation guard times built into the reader to debounce presentations.
The guard times are around 400ms, response around 100ms, so I need something that can move in and out of a 5-10cm range quickly and repeat within those sorts of timescales.
The distance the card needs to move depends on the reader model, as they have different field ranges. I want to get through the edge of the field quickly to avoid any inconsistencies in testing.
I'm able to do any programming (professional) and simple electromechanical design and build (ex-professional, now hobbyist). I only need to build one, it doesn't have to be particularly robust, but it does need to be fairly accurate with regard to the timings to do the second test.
What I've done so far:
I've built one version already using a Raspberry Pi, GPIO, a stepper motor with an aluminium arm screwed to a wheel. It works, but it's a bit jerky and too slow, even with a 30cm arm to amplify the motion. It will probably do for the repeat test, but it's not time-accurate enough to do the timing tests.
My other design ideas were:
Servo (are these also slow?)
Solenoid (fast, but too limited range? and might cause EM?)
Motor (too uncontrollable, and will require too much mechanical work for me)
Rotating drum (fast, stable, but cannot control mark-space ratio)
I'm not a electro-mechanical design expert, so I'm wondering if I'm missing an electrical device or mechanical design which can do this more easily.
|
I am presently doing a robotics project. I am using USARSIM (Urban Search and Rescue Simulation) to spawn a robot. I am trying to create different behaviors, like:
goal following behavior;
obstacle avoidance behavior, and;
wall following behavior for my robot.
I first generate the robots in USARSIM. Then I specify a goal location to the robot and provide it with a speed. The robot then moves to the goal location at the specified speed. USARSIM provides me the (x, y, z) coordinates of the vehicle at every time stamp. Based on the coordinates received, I am trying to calculate the instantaneous speed of the robot at every time stamp. The instantaneous speed graph is fluctuating a lot.
In a specific case, I am providing the robot with 0.2 m/s. The velocity profile is shown below. I am unable to understand the reason behind it.
Here are some observations that I have made.
As I increase the speed of the robot, the variations decrease.
If I provide a straight trajectory to the robot, it doesn't follow the straight trajectory exactly. Does that explain why my velocity profile fluctuates so much?
Please let me know if anyone can provide a possible explanation for the variance in my velocity profile.
|
I need to compute the Voronoi diagram for a map with some obstacles but I can't find any pseudo-code or example in MATLAB.
The "voronoi" function in MATLAB works with points, but in this case the obstacles are polygons (convex and non-convex). You can see the map in the attached image.
Because the obstacles are polygons I found that the Voronoi algorithm needed is the GVD (Generalized Voronoi Diagram).
Can anyone help with code or examples on the internet explaining how to compute this GVD?
|
I am currently trying to convince myself that what I need is a simple path planning algorithm, instead of linearly interpolating between a current and a desired state.
I am working with a robot arm (UR) with a camera mounted on its TCP. The application I am trying to create is a simple ball-tracking application which tracks the movement of the ball while always keeping the object within sight.
This meant that I needed some form of path planning algorithm which plans the path between my current state and the desired state. The path should be such that the ball is always kept in focus while the arm moves to the desired state.
But then I began to ask myself whether that was overkill, and whether a simple straight-line interpolation wouldn't suffice. I am actually not sure what benefit I would gain from choosing a path planner over a simple interpolation.
Interpolation would also generate the path I desire, so why choose a path planner at all?
Would someone elaborate?
It should be noted that obstacle avoidance is also part of the task, which could cause trouble for a straight-line interpolation.
|
I am currently trying to implement a visual servoing application.
The robot I am using is a UR5, and its TCP has a stereo camera mounted on it. The idea is to move the end effector according to the object being tracked.
The path-planning algorithm for this system should comply with some rules.
The path it creates should be collision-free and always keep the tracked object in sight at all times.
Keeping the object in sight has been a bit of a problem. Sometimes the end effector rotates around itself, messing up the measurements taken and thus the tracking itself.
It should be able to maneuver away from static obstacles.
A Possible solution?
I thought of a possible solution. Since my current state and desired state are defined by two different spheres, a possible solution would be to create a straight line between the centers of the spheres, and between the current position and the desired position, so that a straight path in between could be computed easily and the end effector would always stay oriented toward the object. The problem is that I am not sure how I should handle collisions here.
Update
Or should I use it as a heuristic for a heuristic-based path planner?
|
Good day. I would just like to ask if a fixed-wing aircraft such as a glider (which has no thrust capability and therefore needs external forces such as airflow to move, constraining its motion) can be considered a non-holonomic system, given that it cannot move freely compared to a quadcopter, which is holonomic.
I found this information from: What's the difference between a holonomic and a nonholonomic system?
Mathematically:
Holonomic system are systems for which all constraints are integrable into positional constraints.
Nonholonomic systems are systems which have constraints that are nonintegrable into positional constraints.
Intuitively:
Holonomic system where a robot can move in any direction in the configuration space.
Nonholonomic systems are systems where the velocities (magnitude and or direction) and other derivatives of the position are constraint.
|
I would like to build a visual SLAM robot (just for self-learning purposes), but I am frustrated about how to decide which processor and camera should be used for visual SLAM.
First, for the processor, I have seen three articles, which shows different systems are used for implementing their SLAM algorithm:
Implementing a SLAM algorithm (although it uses an ultrasonic sensor rather than a visual sensor) on a Raspberry Pi (processing power is only 700 MHz), in Implementing Odometry and SLAM Algorithms on a Raspberry Pi to Drive a Rover
I have also seen that Boston Dynamics use Pentium CPU, PC104 stack and QNX OS for their Big Dog project, BigDog Overview
November 22, 2008
Then, I also found a project that uses a modern XILINX Zynq-7020 System-on-Chip (a device that combines FPGA resources with a dual ARM Cortex-A9 on a single chip) for a synchronized visual-inertial sensor system, in A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM
But after reading those, I have no clue how they ended up with those decisions to use those kinds of processors, stacks or even OSes for their projects. Is there a mathematical way, or a general practice, to evaluate the minimum system requirements (as cheap and as power-efficient as possible) for an algorithm to run?
If not, how could I know what processor or system I have to prepare for a visual SLAM robot? If there is no simple answer, it is also cool if you can recommend something I could read to have a good start.
Secondly, I also cannot find clear information about which camera I should use for a visual SLAM robot, and I have no idea how to evaluate the minimum requirements of the camera. I found a lot of papers saying they use RGB-D cameras, but when I Google for one, there are very few commercially available. The one I found is the Xtion Pro Live from ASUS (for $170). Is there any practice by which I can choose a suitable camera system for visual SLAM too?
|
I have a 6-DOF robotic arm which I am using to throw a ball. Each joint can achieve a maximum velocity of 30 RPM (180 deg/s). Until now I have been generating joint angles manually and feeding them in to see how far I can throw the ball; this has shown me that the distance is less than about 2 meters.
But I feel that I may not be combining the motions of the various motors well enough to get a better throwing distance. I wanted to know if there is a simple way of theoretically determining the maximum distance I can throw. I have read a few papers that appear very complicated; I do not need a very accurate value, just an estimate so that I can decide whether I should move to a different arm.
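A back-of-the-envelope estimate that is often good enough: treat the throw as a projectile released at the tip speed of the arm. If the joints all rotate in the same direction at release, the tip speed is roughly the sum of (joint rate x distance from that joint axis to the release point), and the flat-ground range of a projectile released at speed v at 45 degrees is v^2/g. A minimal sketch under those assumptions (link lengths are placeholders; air drag and release height are ignored):
#include <vector>
// Rough upper bound on throw distance: sum the tangential speeds each joint can
// contribute at the release point, then use the ideal 45-degree projectile range.
double estimateMaxThrow(const std::vector<double>& leverArms,       // meters, joint axis to release point
                        double jointRate = 3.14159265358979)       // 180 deg/s expressed in rad/s
{
    double tipSpeed = 0.0;
    for (double r : leverArms)
        tipSpeed += jointRate * r;   // best case: v = omega * r from every joint adds up
    const double g = 9.81;
    return tipSpeed * tipSpeed / g;  // range = v^2 * sin(2 * 45 deg) / g = v^2 / g
}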
|
Is it possible to perform cosine interpolation between two transformation matrices?
It makes sense for the translation part, but what about the rotational part?
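A common approach is to interpolate the two parts separately: apply the cosine easing to the interpolation parameter, blend the translation linearly with that eased parameter, and blend the rotation with quaternion slerp on the same parameter, then reassemble the homogeneous matrix. A minimal Eigen sketch, assuming the inputs are rigid transforms:
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <cmath>
// Interpolate between two rigid transforms T0 and T1 for s in [0, 1].
// Cosine easing is applied to s; the translation is blended linearly with the
// eased parameter and the rotation is blended with quaternion slerp.
Eigen::Isometry3d cosineInterpolate(const Eigen::Isometry3d& T0,
                                    const Eigen::Isometry3d& T1,
                                    double s)
{
    const double kPi = 3.14159265358979323846;
    double t = (1.0 - std::cos(s * kPi)) * 0.5;           // cosine easing of s
    Eigen::Quaterniond q0(T0.rotation()), q1(T1.rotation());
    Eigen::Quaterniond q = q0.slerp(t, q1);               // rotation part
    Eigen::Vector3d p = (1.0 - t) * T0.translation() + t * T1.translation();
    Eigen::Isometry3d out = Eigen::Isometry3d::Identity();
    out.linear() = q.toRotationMatrix();
    out.translation() = p;
    return out;
}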
|
I need to calculate the configuration space obstacle to plan a path with a mobile robot. The idea is to divide the obstacles and the robot into triangles and test whether any triangle from the robot collides with any triangle from the obstacles.
The approach is to test two triangles at a time, so I need to check whether any of the 6 edges (3 for each triangle) separates the triangles, i.e. whether the 3 vertices of one triangle lie on one side of the line and the 3 vertices of the other lie on the other side.
I wrote some code to calculate the line equation (y = m*x + b) and I think it is correct, but I am having problems when the line is vertical (this means that m = -Inf) because MATLAB gives me a NaN when I calculate the equation for it. I am not sure how to handle this.
Here you can see a snippet from the code where I test the 3 edges from the
robot triangle:
for i = 1:1:3
vertex_x = P1(edge(i,:),1);
vertex_y = P1(edge(i,:),2);
m = (vertex_y(2) - vertex_y(1))/(vertex_x(2) - vertex_x(1));
b = -m*vertex_x(1) + vertex_y(1);
for j = 1:1:6 % For each vertex...
pto = listaVertices(j,:);
if (m*pto(1) + b > pto(2))
% Vertex lies below the edge...
cont1 = cont1 + 1;
elseif (m*pto(1) + b < pto(2))
% Vertex lies above the edge...
cont2 = cont2 + 1;
else
% Vertex lie inside the edge...
% Do nothing
end
end
% 3 vertex on one side and 1 on the others side means they do not
% collide. Two of the vertex always lie inside the line (the two vertex
% of each edge).
if (cont1 == 1 && cont2 == 3) || (cont1 == 3 && cont2 == 1)
flag_aux = false; % Do not collide...
end
% Reset the counters for the 3 next edges...
cont1 = 0;
cont2 = 0;
end
Could anyone help with this issue?
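One way to sidestep the infinite-slope problem entirely is to use the implicit form of the edge: for an edge from P1 to P2, the sign of the cross product (P2-P1) x (Q-P1) tells on which side a point Q lies, and it is well defined for vertical edges too. A minimal C++ sketch of that test (the same idea ports directly to the MATLAB code above):
// Returns +1, -1 or 0 depending on which side of the directed edge P1->P2
// the query point Q lies; works for vertical edges, unlike y = m*x + b.
int sideOfEdge(double p1x, double p1y, double p2x, double p2y,
               double qx, double qy)
{
    double cross = (p2x - p1x) * (qy - p1y) - (p2y - p1y) * (qx - p1x);
    if (cross > 0.0) return  1;   // left of the edge
    if (cross < 0.0) return -1;   // right of the edge
    return 0;                     // exactly on the edge line
}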
|
I am not sure how I should explain this: I am looking for a way to plot the trajectory of a robot arm. An object is seen from the toolFrame frame, but how do I plot the position of each joint such that the frames they use are the same?
One way would be to use the world frame as the reference, but how would I plot the position of the object relative to the world frame?
|
I tried to use Microsoft Robotics Dev Studio (sample 4) to write code that makes the robot drive a square path with a single click. However, there is one problem.
When I try to put DriveDistanceRequest and RotateDegreesRequest in a loop, it only executes the last request. The problem is that the Arbiter.Choice within the DriveDistance is activated as soon as the drive operation starts. Has anyone had this kind of problem before, and if so, how do I solve it? Thank you very much.
//-----------------------------------------------------------------------
// This file is part of Microsoft Robotics Developer Studio Code Samples.
//
// Copyright (C) Microsoft Corporation. All rights reserved.
//
// $File: RoboticsTutorial4.cs $ $Revision: 22 $
//-----------------------------------------------------------------------
using Microsoft.Ccr.Core;
using Microsoft.Ccr.Adapters.WinForms;
using Microsoft.Dss.Core;
using Microsoft.Dss.Core.Attributes;
using Microsoft.Dss.ServiceModel.Dssp;
using Microsoft.Dss.ServiceModel.DsspServiceBase;
using System;
using System.Collections.Generic;
using System.Security.Permissions;
using xml = System.Xml;
using drive = Microsoft.Robotics.Services.Drive.Proxy;
using W3C.Soap;
using Microsoft.Robotics.Services.RoboticsTutorial4.Properties;
using Microsoft.Robotics.Services.Drive.Proxy;
using System.ComponentModel;
namespace Microsoft.Robotics.Services.RoboticsTutorial4
{
[DisplayName("(User) Robotics Tutorial 4 (C#): Drive-By-Wire")]
[Description("This tutorial demonstrates how to create a service that partners with abstract, base definitions of hardware services.")]
[DssServiceDescription("http://msdn.microsoft.com/library/bb483053.aspx")]
[Contract(Contract.Identifier)]
public class RoboticsTutorial4 : DsspServiceBase
{
[ServiceState]
private RoboticsTutorial4State _state = new RoboticsTutorial4State();
[ServicePort("/RoboticsTutorial4", AllowMultipleInstances=false)]
private RoboticsTutorial4Operations _mainPort = new RoboticsTutorial4Operations();
[Partner("Drive", Contract = drive.Contract.Identifier, CreationPolicy = PartnerCreationPolicy.UseExisting)]
private drive.DriveOperations _drivePort = new drive.DriveOperations();
private drive.DriveOperations _driveNotify = new drive.DriveOperations();
public RoboticsTutorial4(DsspServiceCreationPort creationPort) :
base(creationPort)
{
}
#region CODECLIP 02-1
protected override void Start()
{
base.Start();
WinFormsServicePort.Post(new RunForm(StartForm));
#region CODECLIP 01-5
_drivePort.Subscribe(_driveNotify);
Activate(Arbiter.Receive<drive.Update>(true, _driveNotify, NotifyDriveUpdate));
#endregion
}
#endregion
#region CODECLIP 02-2
private System.Windows.Forms.Form StartForm()
{
RoboticsTutorial4Form form = new RoboticsTutorial4Form(_mainPort);
Invoke(delegate()
{
PartnerType partner = FindPartner("Drive");
Uri uri = new Uri(partner.Service);
form.Text = string.Format(
Resources.Culture,
Resources.Title,
uri.AbsolutePath
);
}
);
return form;
}
#endregion
#region CODECLIP 02-3
private void Invoke(System.Windows.Forms.MethodInvoker mi)
{
WinFormsServicePort.Post(new FormInvoke(mi));
}
#endregion
/// <summary>
/// Replace Handler
/// </summary>
[ServiceHandler(ServiceHandlerBehavior.Exclusive)]
public virtual IEnumerator<ITask> ReplaceHandler(Replace replace)
{
_state = replace.Body;
replace.ResponsePort.Post(DefaultReplaceResponseType.Instance);
yield break;
}
[ServiceHandler(ServiceHandlerBehavior.Concurrent)]
//stop
public virtual IEnumerator<ITask> StopHandler(Stop stop)
{
drive.SetDrivePowerRequest request = new drive.SetDrivePowerRequest();
request.LeftWheelPower = 0;
request.RightWheelPower = 0;
yield return Arbiter.Choice(
_drivePort.SetDrivePower(request),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to stop", fault);
}
);
}
//forward
#region CODECLIP 01-3
[ServiceHandler(ServiceHandlerBehavior.Concurrent)]
//forward
public virtual IEnumerator<ITask> ForwardHandler(Forward forward)
{
if (!_state.MotorEnabled)
{
yield return EnableMotor();
}
// movement speed
// This sample sets the power to 75%.
// Depending on your robotic hardware,
// you may wish to change these values.
drive.SetDrivePowerRequest request = new drive.SetDrivePowerRequest();
request.LeftWheelPower = 0.5;
request.RightWheelPower = 0.5;
yield return Arbiter.Choice(
_drivePort.SetDrivePower(request),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to drive forwards", fault);
}
);
}
#endregion
[ServiceHandler(ServiceHandlerBehavior.Concurrent)]
// backup speed
public virtual IEnumerator<ITask> BackwardHandler(Backward backward)
{
if (!_state.MotorEnabled)
{
yield return EnableMotor();
}
drive.SetDrivePowerRequest request = new drive.SetDrivePowerRequest();
request.LeftWheelPower = -0.6;
request.RightWheelPower = -0.6;
yield return Arbiter.Choice(
_drivePort.SetDrivePower(request),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to drive backwards", fault);
}
);
}
[ServiceHandler(ServiceHandlerBehavior.Concurrent)]
// left turn speed
public virtual IEnumerator<ITask> TurnLeftHandler(TurnLeft turnLeft)
{
if (!_state.MotorEnabled)
{
yield return EnableMotor();
}
drive.SetDrivePowerRequest request = new drive.SetDrivePowerRequest();
request.LeftWheelPower = -0.5;
request.RightWheelPower = 0.5;
yield return Arbiter.Choice(
_drivePort.SetDrivePower(request),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to turn left", fault);
}
);
}
[ServiceHandler(ServiceHandlerBehavior.Concurrent)]
// right turn speed
public virtual IEnumerator<ITask> TurnRightHandler(TurnRight forward)
{
if (!_state.MotorEnabled)
{
yield return EnableMotor();
}
drive.SetDrivePowerRequest request = new drive.SetDrivePowerRequest();
request.LeftWheelPower = 0.5;
request.RightWheelPower = -0.5;
yield return Arbiter.Choice(
_drivePort.SetDrivePower(request),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to turn right", fault);
}
);
}
#region CODECLIP 01-4
private Choice EnableMotor()
{
drive.EnableDriveRequest request = new drive.EnableDriveRequest();
request.Enable = true;
return Arbiter.Choice(
_drivePort.EnableDrive(request),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to enable motor", fault);
}
);
}
#endregion
#region CODECLIP 01-6
private void NotifyDriveUpdate(drive.Update update)
{
RoboticsTutorial4State state = new RoboticsTutorial4State();
state.MotorEnabled = update.Body.IsEnabled;
_mainPort.Post(new Replace(state));
}
#endregion
// Here is where I changed the code.
#region Test Code (Creating Path)
[ServiceHandler(ServiceHandlerBehavior.Concurrent)]
public virtual IEnumerator<ITask> PathHandler(StartPath path)
{
if (!_state.MotorEnabled)
{
yield return EnableMotor();
}
for(int i=1; i<3; i++)
{
if(i == 1)
{
drive.DriveDistanceRequest distance = new drive.DriveDistanceRequest();
distance.Power = 1;
distance.Distance = 1;
yield return Arbiter.Choice(
_drivePort.DriveDistance(distance),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to turn left", fault);
}
);
}
else if(i == 2)
{
drive.RotateDegreesRequest rotate = new drive.RotateDegreesRequest();
rotate.Power = 1;
rotate.Degrees = 90;
yield return Arbiter.Choice(
_drivePort.RotateDegrees(rotate),
delegate(DefaultUpdateResponseType response) { },
delegate(Fault fault)
{
LogError(null, "Unable to turn left", fault);
}
);
}
}
}
#endregion
}
}
|
I am learning about robot kinematics and the Jacobian matrix, and I'm trying to understand how to compute the Jacobian matrix given a kinematic chain, such as a robot arm. I understand the theory behind the Jacobian matrix, but I'm not sure actually how it would be calculated in practice.
So, let's say that I have a 7 DoF robot arm, with 7 joints and 6 links between the joints. I know how to compute the transformation matrix between each joint, and by applying forward kinematics, I know the pose of the end effector for any configuration of joint angles. To calculate this, I have written some code which stores each transformation matrix, and then multiplies them in series to create the transformation matrix between the first joint and the end effector.
However, how do I now go about computing the Jacobian matrix? My solution so far is to write down each transformation matrix by hand, then multiply them all by hand to yield the overall transformation matrix with respect to the joint angles. I could then differentiate this to create the Jacobian matrix. The problem with this, though, is that the maths becomes very complicated as I move along the kinematic chain. By the end, there are so many terms resulting from the repeated matrix multiplications that doing it by hand becomes extremely tedious.
Is there a better way to do this? In the case of calculating the forward kinematics, I didn't have to do it by hand, I just wrote some code to multiply the individual matrices. But when I want the Jacobian matrix, it seems like I need to compute the derivative of the overall transformation matrix after it has been computed, and so I need to do this by hand. What's the standard solution to this? Is it something to do with the chain rule for differentiation...? I'm not sure exactly how this applies here though...
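One practical way around the hand algebra, if symbolic differentiation gets unwieldy, is to build the Jacobian numerically: perturb each joint angle by a small delta, re-run the forward kinematics you already have, and use the finite difference of the end-effector position (and, analogously, orientation) as the corresponding column. A minimal sketch, assuming a forward-kinematics function you already wrote (the name and the position-only scope are placeholders):
#include <Eigen/Dense>
#include <functional>
// Numerical position Jacobian by central differences.
// fk maps a joint vector q to the 3-D end-effector position;
// column j approximates d(position)/d(q_j).
Eigen::MatrixXd numericalJacobian(
    const std::function<Eigen::Vector3d(const Eigen::VectorXd&)>& fk,
    const Eigen::VectorXd& q,
    double delta = 1e-6)
{
    Eigen::MatrixXd J(3, q.size());
    for (int j = 0; j < q.size(); ++j) {
        Eigen::VectorXd qPlus = q, qMinus = q;
        qPlus[j]  += delta;
        qMinus[j] -= delta;
        J.col(j) = (fk(qPlus) - fk(qMinus)) / (2.0 * delta);
    }
    return J;
}
The analytic alternative, which avoids differentiating the full matrix product symbolically, is the geometric Jacobian: each column is built directly from the corresponding joint axis and origin taken from the intermediate transforms you already compute for forward kinematics.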
Thank you!
|
Problem
I am currently working on reverse engineering this application: zepp.com/baseball. This is a wearable device that can track a user's:
speed
positional tracking
when the object makes contact with another one
3-D Rendering
I am currently using an accelerometer and gyroscope to get the yaw, pitch, and roll (orientation) of the device, but I do not know how to use that information to calculate speed, or to detect whether the device has collided with another object.
|
Assuming a drone operates in two dimensions, it has to predict its future position by calculating its future displacement:
For a real quad-rotor, why should we not only estimate the displacement of a robot in three dimensions but also the change of orientation of the robot, its linear velocity and its angular velocity?
|
Good day
I am currently implementing the VFH algorithm.
Is it possible to configure the algorithm such that a reactive motion is generated in the presence of an obstacle?
I have been able to generate the obstacle map, primary polar histogram and the binary polar histogram.
How does one prioritize a sector to pass through?
I have seen an implementation in LabVIEW wherein it is possible to implement simple vector field histogram path planning without any goal points here.
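In VFH+-style implementations, candidate openings in the binary histogram are usually ranked with a weighted cost: one term for deviation from the goal direction, one for deviation from the current heading, and one for deviation from the previously chosen direction, with the goal weight dominating so the robot keeps making progress. A minimal C++ sketch of such a ranking (the weights are illustrative):
#include <cmath>
#include <limits>
#include <vector>
// Absolute angular difference between two directions, wrapped to [0, pi].
double angleDiff(double a, double b)
{
    const double kPi = 3.14159265358979323846;
    double d = std::fabs(a - b);
    return d > kPi ? 2.0 * kPi - d : d;
}
// Pick the free sector direction with the lowest VFH+-style cost.
// mu1 (goal term) should outweigh mu2 (current heading) and mu3 (previous choice).
double pickSector(const std::vector<double>& freeSectorDirs,
                  double goalDir, double currentDir, double previousDir,
                  double mu1 = 5.0, double mu2 = 2.0, double mu3 = 2.0)
{
    double best = currentDir;
    double bestCost = std::numeric_limits<double>::max();
    for (double dir : freeSectorDirs) {
        double cost = mu1 * angleDiff(dir, goalDir)
                    + mu2 * angleDiff(dir, currentDir)
                    + mu3 * angleDiff(dir, previousDir);
        if (cost < bestCost) { bestCost = cost; best = dir; }
    }
    return best;
}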
|
I am trying to implement my own inverse kinematics solver for a robot arm. My solution is a standard iterative one, where at each step, I compute the Jacobian and the pseudo-inverse Jacobian, then compute the Euclidean distance between the end effector and the target, and from these I then compute the next joint angles by following the gradient with respect to the end effector distance.
This achieves a reasonable, smooth path towards the solution. However, during my reading, I have learned that typically, there are in fact multiple solutions, particularly when there are many degrees of freedom. But the gradient descent solution I have implemented only reaches one solution.
So my questions are as follows:
How can I compute all the solutions (see the sketch after this list)? Can I write down the full forward kinematics equation, set it equal to the desired end effector position, and then solve the resulting equations? Or is there a better way?
Is there anything of interest about the particular solution that is achieved by using my gradient descent method? For example, is it guaranteed to be the solution that can be reached the fastest by the robot?
Are there cases when the gradient descent method will fail? For example, is it possible that it could fall into a local minimum? Or is the function convex, and hence has a single global minimum?
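On the first question: with an iterative solver the usual pragmatic approach is to restart it from many different initial joint configurations and keep the distinct converged results, since gradient descent only finds the solution in the basin of attraction of its starting point; note that for a redundant arm (e.g. 7 DoF reaching a full 6-DoF pose) the solution set is generally a continuum, so it can only be sampled, not listed exhaustively. A minimal sketch of the multi-start idea (solveIK, the joint limits, and the tolerance are placeholders for your own solver):
#include <Eigen/Dense>
#include <functional>
#include <random>
#include <vector>
// Collect distinct IK solutions by restarting an iterative solver from random seeds.
// solveIK is assumed to run the gradient-descent solver and return the converged q.
std::vector<Eigen::VectorXd> multiStartIK(
    const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& solveIK,
    int dof, int restarts = 50, double tol = 1e-2)
{
    const double kPi = 3.14159265358979323846;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> jointRange(-kPi, kPi);
    std::vector<Eigen::VectorXd> solutions;
    for (int r = 0; r < restarts; ++r) {
        Eigen::VectorXd seed(dof);
        for (int j = 0; j < dof; ++j) seed[j] = jointRange(rng);
        Eigen::VectorXd q = solveIK(seed);
        bool isNew = true;
        for (const auto& s : solutions)
            if ((s - q).norm() < tol) { isNew = false; break; }
        if (isNew) solutions.push_back(q);
    }
    return solutions;
}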
|
I'm reading a paper:
Choi C, Trevor A J B, Christensen H I. RGB-D edge detection and
edge-based registration[C]//Intelligent Robots and Systems (IROS),
2013 IEEE/RSJ International Conference on. IEEE, 2013: 1568-1575.
which refers:
Visual features such as corners, keypoints, edges, and color are
widely used in computer vision and robotic perception for applications
such as object recognition and pose estimation, visual odometry, and SLAM
I previously assumed pose estimation to be roughly equal to visual odometry, yet the text above seems to suggest otherwise.
So what's the difference? I didn't find much info on Google. IMHO, it seems pose estimation is estimating the pose of a moving object with a static camera, while visual odometry is estimating the pose of the camera in a (mostly) static scene; is that precise enough?
|
I am trying to run this motor.
Using the batteries stated in the title: the motor requires 12 V and I am supplying 11.98 V to the motor through a motor driver. After a while the motor keeps slowing down and the battery voltage drops to 5-6 V, but after I remove the battery from the motor driver it again shows 11.9 V.
Is this battery capable enough to run my motors, or do I need a new one?
|
I am using Mavlink protocol (in c++) to communicate with the ArduPilotMega, I am able to read messages such as ATTITUDE for example.
I am currently getting only 2Hz (message rate) and I would like to increase it. I found out that I should use MESSAGE_INTERVAL in order to change it, and that I probably need to use the command MAV_CMD_SET_MESSAGE_INTERVAL to set it.
So my question is, how do I send that command using mavlink in c++?
I tried doing this with the code below but it did not work, I guess that I should use the command that I mentioned above but I don't know how.
mavlink_message_t command;
mavlink_message_interval_t interval;
interval.interval_us = 100000;
interval.message_id = 30;
mavlink_msg_message_interval_encode(255, 200, &command, &interval);
p_sensorsPort->write_message(command);
Update: I also tried this code below, maybe I am not giving it the right system id or component id.
mavlink_message_t command;
mavlink_command_long_t interval;
interval.param1 = MAVLINK_MSG_ID_ATTITUDE;
interval.param2 = 100000;
interval.command = MAV_CMD_SET_MESSAGE_INTERVAL;
interval.target_system = 0;
interval.target_component = 0;
mavlink_msg_command_long_encode(255, 0, &command, &interval);
p_sensorsPort->write_message(command);
Maybe I am missing something about the difference between target_system, target_component and sysid, compid. I tried a few values for each but nothing worked. Is there any ACK that will tell me whether it even got the command?
|
Good day
Note: I have found out that my code works. I have placed a minor explanation below, to be expounded further.
I have been having trouble obtaining the right directional output from my implementation. I noticed that every time I put an obstacle on the right, it gives the correct steering direction (left); the problem is with the presence of a left obstacle, where the robot still tends to steer toward that obstacle. I have checked the occupancy map generated using MATLAB and found it to be correct. I couldn't pinpoint what exactly is wrong with my code, for I have been debugging this for almost a week now, and I was hoping someone could see the error I cannot.
Here is my code implementation:
//1st:: Create Occupancy Grid from Data-------------------------------------------------
// > Cell Size/Grid Resolution = 5meters/33 cells = 0.15meters each = 15cm each
// > Grid Dimension = 5meters by 5meters / 33x33 cells //Field of view of robot is 54 degrees
//or 31 meters horizontal if subject is 5 meters away
// > Robot Width = 1meter 100cm
// > Because the focal length of the lens is roughly the same as the width of the sensor,
// it is easy to remember the field of view: at x meters away, you can see about x meters horizontally,
// assuming 4x3 stills mode. Horizontal field of view in 1080p video mode is 75%
// of that (75% H x 55% V sensor crop for 1:1 pixels at 1920x1080).
//Converting the Position into an Angle--------------------------------------------
//from:
// A. https://decibel.ni.com/content/docs/DOC-17771
// B. "USING THE SENSOR KINECT FOR LANDMARK" by ANDRES FELIPE ECHEVERRI GUEVARA
//1. Get angle
// > Each pixel from the image represents an angle
// > angle = ith pixel in row * (field of view in degrees/number of pixels in row)
// > field of view of Pi camera is 54 degrees horizontal
//2. Convert Polar to Cartesian
// > x = z*cos(angle)
// > y = z*sin(angle)
int arrOccupancyGrid[33][33];
float matDepthZ[33][33];
int robotPosX = 0;
int robotPosY = 0;
int xCoor=0; //Coordinates of Occupancy Map
int yCoor=0;
int xPosRobot=0; //Present coordinates of robot
int yPosRobot=0;
float fov = 54; // 54 degrees field of view in degrees must be converted to radians
float nop = 320; //number of pixels in row
int mapDimension = 33; // 33by33 array or 33*15cm = 5mby5m grid
int mapResolution = 15; //cm
//Limit max distance measured
/*
for(i=0; i< nop ;i++){
if(arrDepthZ.at(i)>500){
arrDepthZ.at(i) = 500;
}
}
*/
for (i=0 ; i < nop; i++){
//Process data/Get coordinates for mapping
//Get Angle
int angle = ((float)(i-160.0f) * ((float)fov/(float)nop)); //if robot is centered at zero add -160 to i
//cout << "arrDepthZ " << arrDepthZ.at(i) << endl;
//cout << "i " << i << endl;
//cout << "fov " << fov << endl;
//cout << "nop " << nop << endl;
//cout << "angle " << i * (fov/nop) << endl;
arrAngle.push_back(angle);
//Get position X and Y use floor() to output nearest integer
//Get X --------
xCoor = (arrDepthZ.at(i) / mapResolution) * cos(angle*PI/180.0f); //angle must be in radians because cpp
//cout << "xCoor " << xCoor << endl;
arrCoorX.push_back(xCoor);
//Get Y --------
yCoor = (arrDepthZ.at(i) / mapResolution) * sin(angle*PI/180.0f); //angle must be in radians because cpp
//cout << "yCoor " << yCoor << endl;
arrCoorY.push_back(yCoor);
//Populate Occupancy Map / Cartesian Histogram Grid
if((xCoor >= 0) && (xCoor < mapDimension) && (yCoor >= 0) && (yCoor < mapDimension)){ //Condition check: obtained X and Y coordinates must lie inside the grid (valid indices are 0..32)
arrOccupancyGrid[xCoor][yCoor] = 1; //[increment] equate obstacle certainty value of cell by 1
matDepthZ[xCoor][yCoor] = arrDepthZ.at(i);
}
//cout << "arrCoorX.size()" << arrCoorX.size() << endl;
//cout << "arrCoorY.size()" << arrCoorY.size() << endl;
}
for (i=0 ; i < arrCoorX.size(); i++){
file43 << arrCoorX.at(i) << endl;
}
for (i=0 ; i < arrCoorY.size(); i++){
file44 << arrCoorY.at(i) << endl;
}
for (i=0 ; i < arrDepthZ.size(); i++){
file45 << arrDepthZ.at(i) << endl;
}
//------------------------- End Create Occupancy Grid -------------------------
//2nd:: Create 1st/Primary Polar Histogram ------------------------------------------------------
//1. Define angular resolution alpha
// > n = 360degrees/alpha;
// > set alpha to 5 degrees resulting in 72 sectors from 360/5 = 72 ///// change 180/5 = 36
//2. Define number of sectors (k is the sector index for sector array eg kth sector)
// > k=INT(beta/alpha), where beta is the direction from the active cell
//to the Vehicle Center Point (VCP(xPosRobot, yPosRobot)). Note INT asserts k to be an integer
cout << "2nd:: Create 1st/Primary Polar Histogram" << endl;
//Put this at the start of the code away from the while loop ----------------
int j=0;
int sectorResolution = 5; //degrees 72 sectors, alpha
int sectorTotal = 36; // 360/5 = 72 //// change 180/5 = 36
int k=0; //sector index (kth)
int maxDistance = 500; //max distance limit in cm
//vector<int>arrAlpha; //already initiated
float matMagnitude[33][33]; //m(i,j)
float matDirection[33][33]; //beta(i,j)
float matAngleEnlarge[33][33]; //gamma(i,j)
int matHconst[33][33]; //h(i,j) either = 1 or 0
float robotRadius = 100; //cm
float robotSafeDist = 50; //cm
float robotSize4Sector = robotRadius + robotSafeDist;
for (i=0; i<sectorTotal; i++){
arrAlpha.push_back(i*sectorResolution);
}
//---------end initiating sectors----------
//Determine magnitude (m or matMagnitude) and direction (beta or matDirection) of each obstacle vector
//Modify m(i,j) = c(i,j)*(a-bd(i,j)) to m(i,j) = c(i,j)*(dmax-d(i,j)) from sir Lounell Gueta's work (RAL MS)
//Compute beta as is, beta(i,j) = arctan((yi-yo)/(xi-xo))
//Enlarge robot and compute the enlargement angle (gamma or matAngleEnlarge)
int wew =0;
int firstfillPrimaryH = 0; //flag for arrayPrimaryH storage
for (k=0; k<sectorTotal; k++){
for (i=0; i<mapDimension; i++){
for (j=0; j<mapDimension; j++){
//cout << "i" << i << "j" << j << "k" << k << endl;
//cout << "mapDimension" << mapDimension << endl;
//cout << "sectorTotal" << sectorTotal << endl;
//Compute magnitude m, direction beta, and enlargement angle gamma
matMagnitude[i][j] = (arrOccupancyGrid[i][j])*( maxDistance-matDepthZ[i][j]); //m(i,j)
//cout << "matMagnitude[i][j]" << (arrOccupancyGrid[i][j])*( maxDistance-matDepthZ[i][j]) << endl;
matDirection[i][j] = ((float)atan2f( (float)(i-yPosRobot), (float)(j-xPosRobot))*180.0f/PI); //beta(i,j)
//cout << "matDirection[i][j]" << ((float)atan2f( (float)(i-yPosRobot), (float)(j-xPosRobot))*180.000/PI) << endl;
//cout << "matDepthZ[i][j]" << matDepthZ[i][j] << endl;
if(matDepthZ[i][j] == 0){ //if matDepthZ[i][j] == 0, the obstacle is very far, thus the path is free; no enlargement angle
matAngleEnlarge[i][j] = 0; //gamma(i,j)
//cout << "matAngleEnlarge[i][j]" << 0 << endl;
}
else{ //if matDepthZ[i][j] > 0 there is an obstacle so compute enlargement angle
matAngleEnlarge[i][j] = asin( robotSize4Sector / matDepthZ[i][j])*180/PI; //gamma(i,j)
//cout << "matAngleEnlarge[i][j]" << asin( robotSize4Sector / matDepthZ[i][j])*180.0f/PI << endl;
}
wew = k*sectorResolution; //k*alpha
//cout << "wew" << k*sectorResolution << endl;
//Check if magnitude is a part of the sector
if ( ((matDirection[i][j]-matAngleEnlarge[i][j]) <= wew) && (wew <= (matDirection[i][j]+matAngleEnlarge[i][j])) ){
matHconst[i][j]=1; //Part of the sector
//cout << "Part of the sector ---------------------------------------------------------------" << endl;
//cout << "matHconst[i][j]=1" << matHconst[i][j] << endl;
}
else{
matHconst[i][j]=0; //Not Part of the sector
//cout << "Not Part of the sector" << endl;
//cout << "matHconst[i][j]=0" << matHconst[i][j] << endl;
}
//Compute primary polar histogram Hp(k)
//cout << "firstfillPrimaryH" << firstfillPrimaryH << endl;
if (firstfillPrimaryH==0){ //If first fill at sector
//cout << "matMagnitude[i][j]" << matMagnitude[i][j] << endl;
//cout << "matHconst[i][j]" << matHconst[i][j] << endl;
float temp = matMagnitude[i][j]*matHconst[i][j];
//cout << "matMagnitude[i][j]*matHconst[i][j]" << temp << endl;
arrPrimaryH.push_back(temp);
firstfillPrimaryH=1; //Trigger flag
//cout << "arrPrimaryH kth" << arrPrimaryH.at(k) << endl;
}
else{ //If sector filled previously
arrPrimaryH.at(k) = arrPrimaryH.at(k)+(matMagnitude[i][j]*matHconst[i][j]);
//cout << "arrPrimaryH kth" << arrPrimaryH.at(k) << endl;
}
}
}
firstfillPrimaryH=0; //Reset flag
}
|
This question is strongly related to my other question over here.
I am estimating 6-DOF poses $x_{i}$ of a trajectory using a graph-based SLAM approach. The estimation is based on 6-DOF transformation measurements $z_{ij}$ with uncertainty $\Sigma_{ij}$ which connect the poses.
To avoid singularities I represent both poses and transforms with a 7x1 vector consisting of a 3D-vector and a unit-quaternion:
$$x_{i} = \left( \begin{matrix} t \\ q \end{matrix} \right)$$
The optimization yields 6x1 manifold increment vectors
$$ \Delta \tilde{x}_i = \left( \begin{matrix} t \\ log(q) \end{matrix} \right)$$
which are applied to the pose estimates after each optimization iteration:
$$ x_i \leftarrow x_i \boxplus \Delta \tilde{x}_i$$
The uncertainty gets involved during the hessian update in the optimization step:
$$ \tilde{H}_{[ii]} += \tilde{A}_{ij}^T \Sigma_{ij}^{-1} \tilde{A}_{ij} $$
where
$$ \tilde{A}_{ij} \leftarrow A_{ij} M_{i} = \frac{\partial e_{ij}(x)}{\partial x_i} \frac{\partial x_i \boxplus \Delta \tilde{x}_i}{\partial \Delta x_i} |_{\Delta \tilde{x}_i = 0}$$
and
$$ e_{ij} = log \left( (x_{j} \ominus x_{i}) \ominus z_{ij} \right) $$
is the error function between a measurement $z_{ij}$ and its estimate $\hat{z}_{ij} = x_j \ominus x_i$. Since $\tilde{A}_{ij}$ is a 6x6 matrix and we're optimizing for 6-DOF $\Sigma_{ij}$ is also a 6x6 matrix.
Based on IMU measurements of acceleration $a$ and rotational velocity $\omega$ one can build up a 6x6 sensor noise matrix
$$ \Sigma_{sensor} = \left( \begin{matrix} \sigma_{a}^2 & 0 \\ 0 & \sigma_{\omega}^2 \end{matrix} \right) $$
Further we have a process model which integrates acceleration twice and rotational velocity once to obtain a pose measurement.
To properly model the uncertainty both sensor noise and integration noise have to be considered (anything else?). Thus, I want to calculate the uncertainty as
$$ \Sigma_{ij}^{t} = J_{iterate} \Sigma_{ij}^{t-1} J_{iterate}^T + J_{process} \Sigma_{sensor} J_{process}^T$$
where $J_{iterate} = \frac{\partial x_{i}^{t}}{\partial x_{i}^{t-1}}$ and $J_{process} = \frac{\partial x_{i}^{t}}{\partial \xi_{i}^{t}}$ and the current measurement is $\xi_{i}^{t} = [a, \omega]$.
According to this formula $\Sigma_{ij}$ is a 7x7 matrix, but I need a 6x6 matrix instead. I think I have to include a manifold projection somewhere, but how?
For further details take a look at the following publication, especially at their algorithm 2:
G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intelligent Transportation Systems Maga- zine, vol. 2, no. 4, pp. 31–43, 2010.
For a similar calculation of the uncertainty take a look at the end of section III A. in:
Corominas Murtra, Andreu, and Josep M. Mirats Tur. "IMU and cable encoder data fusion for in-pipe mobile robot localization." Technologies for Practical Robot Applications (TePRA), 2013 IEEE International Conference on. IEEE, 2013.
.. or section III A. and IV A. in:
Ila, Viorela, Josep M. Porta, and Juan Andrade-Cetto. "Information-based compact Pose SLAM." Robotics, IEEE Transactions on 26.1 (2010): 78-93.
|
I'm working on a robotics project where I have 3 services running. I have my sensor DAQ, my logic ISR (motor controller at 20kHz) and my EtherCAT slave controller.
DAQ and EtherCAT run in the idle and the logic runs during an interrupt. The logic does some calculations and controls the motor. The EtherCAT service (kinda like CANbus) runs together with my DAQ in the idle loop. I can not run the DAQ in the interrupt because that leaves me with less than 100ns for the EtherCAT service to run.
I'm not sure whether this is the right way to do this, especially considering all the scary things I've read regarding data corruption when using interrupts.
Does anyone have some nice ideas on how to handle these services?
I'm running all my code on a Zynq 7020 (on the ARM Cortex) and it's written in C++.
Here is an example of my code:
/**
* Get all sensor data
*/
void Supervisor::communication(void) {
// Get all the sensors data
dispatchComm.getData(&motorAngle, &motorVelocity, &jointAngle, &springAngle, &tuningParameter);
}
/**
* Run all the logic
*/
void Supervisor::logic(void) {
dispatchLogic.calculate(tuningParameter, motorAngle, motorVelocity, jointAngle, springAngle);
dispatchLogic.getData(&angle, &magnitude);
// Dispatch values to the motor drive
dispatchComm.setMotorDriveSetpoint(angle, magnitude);
dispatchComm.writeToPiggyback((uint32_t) (tuningParameter), motorAngle, motorVelocity);
}
|
Please guide me:
How do I find the viscous friction coefficient b (in N·m·s/rad) of a DC motor at a particular speed?
The motor is connected to a gearbox with a 26:1 ratio.
I want to find it for 200 rpm; the motor's no-load speed is 4900 rpm.
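For reference, my current understanding (please correct me if this is the wrong approach): at steady state the electromagnetic torque balances the friction torque, so neglecting Coulomb friction a no-load test would give the viscous coefficient directly:
$$ K_t \, i_{nl} \approx b \, \omega_{nl} \quad\Rightarrow\quad b \approx \frac{K_t \, i_{nl}}{\omega_{nl}} $$
where $K_t$ is the torque constant, $i_{nl}$ the no-load current and $\omega_{nl}$ the no-load speed in rad/s (4900 rpm $\approx$ 513 rad/s for my motor). What I don't know is how the 26:1 gearbox and the 200 rpm operating point should enter this.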
|
Since the encoder is square wave not quadrature, do you have to stop first before changing directions for proper measurements?
In other words, if you are commanding along in one direction at some low speed like 50mm/s or less and want to change direction to -50mm/s, would you first need command it to zero and wait for the encoder to read 0 speed, and then command the reverse direction, in order to get as accurate as possible encoder readings?
|
I am trying to run two 12 V geared DC motors with a no-load current of 800 mA (max) and a load current of up to 9.5 A (max). The runtime needs to be at least 3-4 hours.
The motor takes about 10-12 V for operation.
I need a proper battery pack for these, but how can I determine the specs I should go for?
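My rough attempt so far (please tell me if this reasoning is wrong): required capacity is roughly current draw times runtime, so in the worst case of both motors at full load
$$ C \approx 2 \times 9.5\,\text{A} \times 4\,\text{h} = 76\,\text{Ah}, $$
while near no-load it would only be about $2 \times 0.8\,\text{A} \times 4\,\text{h} \approx 6.4\,\text{Ah}$. I don't know how to pick a realistic point between these extremes, or what chemistry and discharge (C) rating to look for at 12 V.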
|
Good day,
Introduction
I am currently working on an autonomous quadcopter project. I have currently implemented a cascaded PID controller consisting of two loops. The inner rate loop takes the angular velocity from the gyroscope as measurements. The outer stabilize/angle loop takes in angle measurements from the complementary filter (gyroscope + accelerometer angles).
Question:
I would like to ask if it is effective to cascade a lateral velocity (X and Y axis) PID controller onto the angle controller (roll and pitch) to control drift along the X-Y plane. For the outermost PID controller, the setpoint is 0 m/s, with the measured velocities obtained by integrating the linear accelerations from the accelerometer. This then controls the PID controller responsible for pitch (for the Y velocity PID) and roll (for the X velocity PID).
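For clarity, this is a minimal sketch of the cascade structure I have in mind for one axis. It is not my actual flight code; the PID class, the gains and the function names are all placeholders:
struct Pid {
    float kp, ki, kd;
    float integral = 0.0f, prevError = 0.0f;
    float update(float setpoint, float measurement, float dt) {
        float error = setpoint - measurement;
        integral += error * dt;
        float derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

Pid velY{0.5f, 0.0f, 0.1f};       // proposed outermost loop: Y velocity -> desired pitch angle
Pid pitchAngle{4.0f, 0.0f, 0.0f}; // existing stabilize loop: angle -> desired pitch rate
Pid pitchRate{0.7f, 0.2f, 0.01f}; // existing inner rate loop: rate -> motor correction

// One control step for the pitch axis (the roll/X axis would be symmetric).
float controlStep(float measVelY, float measPitch, float measPitchRate, float dt) {
    float desiredPitch = velY.update(0.0f, measVelY, dt);       // hold 0 m/s drift
    float desiredRate  = pitchAngle.update(desiredPitch, measPitch, dt);
    return pitchRate.update(desiredRate, measPitchRate, dt);    // goes to the motor mixer
}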
|
For my final year in Computer Science university I will be doing a dissertation that includes controlling a drone through computer and communication with an onboard camera for computer vision.
The first step is obtaining a drone that suits my needs, and I have no clue how to go about it. Basically what is needed is a drone that will be able to communicate with a computer both for its movement and to "stream" the video to the computer for analysis.
So, would I go for a store-bought drone, a Raspberry Pi based build, or some other microcontroller-based one? What do I need to take into consideration?
P.S. The project is going to be based indoors, so I don't need crazy range or anything very powerful.
|
The other day, somebody was telling me about a robot in their lab, and they mentioned that it has "series elastic" actuators. But after doing a bit of Googling, I'm still not sure as to what this means, and have been unable to find a simple explanation. It seems that it is something to do with the link between the actuator and the load having a spring-like quality to it, but this is rather vague...
In any case, what I am really interested in is the advantages and disadvantages of series elastic actuators. Specifically, I have read that one of the advantages is that it allows for "more accurate and stable force control". However, this appears counter-intuitive to me. I would have thought that if the link between the actuator and the load were more "springy", then this would lower the ability to have accurate control over the force sent to the load, because more of this force would be stored and dissipated in the spring, with less directly transferred to the load.
So: Why do series elastic actuators have "more accurate and stable force control"?
|
Suppose I have one robot with two 3D position sensors based on different physical principles and I want to run them through a Kalman filter. I construct an observation matrix to represent my two sensors by vertically concatenating two identity matrices.
$H = \begin{bmatrix} 1&0&0\\0&1&0\\0&0&1\\1&0&0\\0&1&0\\0&0&1 \end{bmatrix}$ $\hspace{20pt}$
$\overrightarrow x = \begin{bmatrix} x\\y\\z \end{bmatrix}$
so that
$H \overrightarrow x = \begin{bmatrix} x\\y\\z\\x\\y\\z \end{bmatrix}$
which represents both sensors reading the exact position of the robot. Makes sense so far. The problem comes when I compute the innovation covariance
$S_k = R + HP_{k|k-1}H^T$
Since
$H H^T = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}$
then, no matter what $P$ is, I'm going to wind up with $x$ innovations from the first sensor being correlated to $z$ innovations from the second, which seems intuitively wrong, if I'm interpreting this right.
Proceeding from here, my gain matrix ($K = P_{k|k-1} H^T S_k^{-1}$) winds up doing some pretty odd stuff (swapping rows and the like) so that, when updating a static system ($A = I_3, B = [0]$) with a constant measurement $\overrightarrow z = [1,0,0]$ I wind up with a predicted state $\hat x = [0,0,1]$.
If I separate the sensors and update the filter with each measurement separately, then $H H^T = I_3$, and I get sensible results.
I think I am confused about some technical points in one or more of these steps. Where am I going wrong? Does it not make sense to vertically concatenate the observation matrices?
I suppose that I could just set the off-diagonal 3x3 blocks of $S_k$ to 0, since I know that the sensors are independent, but is there anything in the theory that suggests or incorporates this step?
|
I'm looking to find out how human-like legs compare to chicken-like legs and four-legged systems in terms of cost, performance, speed, strength and accuracy.
I'm interested in things like speed, agility, turning radius, complexity and cost.
For a design large enough for a person to ride, rider fatigue is also important -- how do they compare in terms of achievable ride smoothness, vibration, and so on?
Are there quantitative benefits of 3 DOF hip joints, compared to 2 DOF?
I realize other factors will come into play as well, such as actuators, joint designs and control systems.
However, my interest at the moment is how basic leg designs compare to one another.
Edit: I'm looking for someone who has used these mechanisms first hand.
|
I just started learning about slam and I have been trying to simulate a robot moving around a set of landmarks for the past 3 days. The landmarks have known correspondences.
My problem is, if I add motion noise to the covariance matrix in the prediction step, the robot starts to behave very weirdly. If I don't add motion noise in the prediction step, the robot will move around perfectly. I have been trying to figure out why this is happening for 3 days now but cannot find anything wrong with my code.
I have attached a link to GitHub which has all the files pertaining to my project. In the folder named 'octave', the files 'prediction_step' and 'correction_step' contain code for the prediction and correction steps respectively. The ekf_slam file is the main loop, which calls the above two functions.
My github repository also contains 3 videos which correspond to robot with no motion noise, robot with motion noise and another video which shows how the robot should ideally go about.
Please help me in figuring out what is wrong with my code in 'prediction_step' and 'correction_step'.
Link to my github repository: please click here
|
How do I linearize the following system using taylor series expansion:
$$\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = u$$
Here, $\theta$ is the heading direction of my robot, measured counter clockwise with respect to $x$ axis.
$v$ is the linear velocity of the robot,
$u$ is the angular velocity of the robot.
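This is my attempt so far, expanding to first order about a nominal state and input $(\bar{x}, \bar{y}, \bar{\theta}, \bar{v}, \bar{u})$; please tell me whether it is correct:
$$ \delta\dot{x} \approx A\,\delta x + B\,\delta u, \qquad A = \begin{bmatrix} 0 & 0 & -\bar{v}\sin\bar{\theta} \\ 0 & 0 & \bar{v}\cos\bar{\theta} \\ 0 & 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} \cos\bar{\theta} & 0 \\ \sin\bar{\theta} & 0 \\ 0 & 1 \end{bmatrix} $$
with $\delta x = (x-\bar{x},\, y-\bar{y},\, \theta-\bar{\theta})^T$ and $\delta u = (v-\bar{v},\, u-\bar{u})^T$.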
|
I have a problem with two robots and two obstacles in a space. Each robot can communicate its measurements to the other and can measure angles and distances.
The two obstacles in the environment are identical to each other.
Each robot can see both obstacles but not the other robot. Each robot therefore measures angles theta 1 and theta 2 together with distances 1 and 2. Can the distance between the two robots be calculated?
So far I have placed circles with a radius of the measured distance over each landmark (triangles in my workings); this provides 4 possible positions for each robot. The red and black circles correspond to robot 1 and the blue and green to robot 2. Using the relative size of the angle measurements I can discount two of these positions per robot.
This still leaves me with two possible positions for each robot shown with the filled or hashed circles.
Is it possible to calculate which side the robot is to the landmarks and the distance between each other?
Robot 1 only has the two measurements of angle and distance and can therefore assign an id to each obstacle, but when information is transmitted to robot 2, robot 2 does not know which obstacle will have been designated an id of 1 or 2.
|
I am working through the book Learning Robotics using Python, which is for Python programmers who want to learn some robotics. Chapter 2 shows you how to use LibreCAD to design the plates and poles that form the chassis of the turtlebot-like robot. For instance, the base plate looks like this:
Then, there is nothing more about it but suddenly in a later chapter there is a picture of the fully manufactured plates assembled into a chassis, and the author acts as if this should just be something we know how to do:
How did he do that? We have these CAD drawings, and suddenly there are these plates that were manufactured via some magical process the book never discusses, he never gives things like tolerances, the material that they are supposed to be made out of, etc., the kinds of things discussed here:
http://www.omwcorp.com/how-to-design-machined-parts.html
I know nothing about this stuff, in terms of how to go from CAD design specs to getting an actual part manufactured. What kinds of companies should I use, what is a reasonable price to expect, what is the process?
In general, how do I go from CAD design to manufactured item? Do I find a local machine shop that specializes in robotics, bring my CAD drawings, and work with them to try to build the parts?
I am totally a noob, I hope this isn't a question like this:
http://blog.stackoverflow.com/2010/11/qa-is-hard-lets-go-shopping/
|
For the following controller what do $q_{des}$ and $q_{act}$ stand for? Also, what is the general principle of this controller?
Thanks!
|
Is it possible to enhance (or redirect) the Earth's magnetic field in a room or house so that one could write a small program that makes smartphones with Hall-effect sensors detect more reliably which direction they are pointing?
I presume a fridge magnet won't do the job...
|
I read most of the iRobot Create 2 Open Interface (OI) spec. It says to send these serial commands to the Create 2 to get it to perform the described actions, but it gives no suggestion of what software to use to send the serial commands through the USB interface. I did install the FTDI drivers to enable the USB-to-serial connection. Question: What serial software should I use to communicate with the Create 2? Is there a tool to verify that the USB-to-serial cable supplied with the Create 2 is functioning and that the Create 2 itself is functioning? (I did a reset on the Create 2 using the Spot and Dock buttons.)
|
If I understand the manual, each leg in each of the 7 segment displays is labeled with a letter A-G. These letters then map to specific bits in a byte - 1 byte for each of the 4 displays. Setting a bit turns on the corresponding leg while not setting it leaves it off.
With this understanding, I tried to turn on all the A segments by sending
[163][1][1][1][1]
Instead of the A segment in each display turning on, the displays all showed a 1. Further testing shows that if I send the numbers 1-9 for any of the displays, they will display the number sent. Sending the number 10 or greater turns on various combinations of the segments.
I was able to activate individual segments with the following numbers:
63 G
64 A
65 B
66 C
67 D
68 E
69 F
However, I haven't been able to determine how the bytes sent affect the individual segments. Either I don't understand the manual or Digit LEDs Raw does not work as the manual specifies.
UPDATE 03JUNE2016
I have confirmed this behavior exists in the following firmware versions:
r3-robot/tags/release-3.4.1:5858 CLEAN
r3_robot/tags/release-3.2.6:4975 CLEAN
|
I have an 18 V rated driver that I'm using to drive two 12 V DC gear motors from my Arduino. I bought a new battery pack rated 3300 mAh, 25C, 11.1 V, which gives a maximum continuous discharge of about 82.5 A. My driver is rated for 7 V min and 18 V max; no current rating is given.
My motors are 12 V; the max current under load is 9.5 A.
So just to be sure, can using this battery destroy my motor driver?
This is the datasheet.
|
I know that given the intrinsics fx, fy, cx, cy (where fx, fy are the horizontal and vertical focal lengths, and (cx, cy) is the location of the principal point of the camera, assuming a pinhole camera model) of a Kinect depth camera (or other range sensor), a depth pixel px=(u, v, d) ((u, v) is the pixel coordinate, d is the depth value) can be converted to a 3D point p:
p=(x, y, z)
x=(u-cx)/fx*d
y=(v-cy)/fy*d
z=d
so that a depth image can be converted to a point cloud; indeed, a depth image physically represents a unique point cloud.
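For concreteness, this is the per-pixel back-projection I am talking about, as a minimal sketch (pinhole model assumed, d already in metric units; the function name is mine):
#include <array>

std::array<float, 3> depthPixelToPoint(int u, int v, float d,
                                       float fx, float fy, float cx, float cy) {
    const float x = (u - cx) / fx * d;
    const float y = (v - cy) / fy * d;
    const float z = d;
    return {x, y, z};   // 3D point in the camera frame
}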
SLAM systems e.g. KinectFusion use such point clouds for ICP based registration to obtain camera pose at each time and then fuse new point cloud to the previously reconstructed model.
However, my mentor told me that a depth image cannot be invertibly converted to a point cloud since it is a 2D->3D mapping with ambiguity (which I disagree with), and he claims that I should use the depth images at times (i-1) and (i) for registration, not the derived point clouds.
(If I have to obey my mentor's order) I've been reading papers and found one using Gradient Descent to solve camera pose (tx, ty, tz, qw, qx, qy, qz):
Prisacariu V A, Reid I D. PWP3D: Real-time segmentation and tracking
of 3D objects[J]. International journal of computer vision, 2012,
98(3): 335-354.
which uses RGB images and a known model for pose estimation. However, I've never found a paper (e.g., KinectFusion or other later RGB-D SLAM algorithms) that treats depth data purely as a 2D image rather than as a point cloud for registration. So could someone give me some hints (papers or open-source code) about:
How can I do depth image registration without converting the depth images to point clouds?
|
I am using a library precompiled for x86 on my PC (x86_64). Does there exist any toolchain that can compile against this x86 library and in the end generate an executable for armv7l Ubuntu?
|
I am trying to derive velocity from an accelerometer (MPU9250 on a SensorTag board). I have seen a lot of blogs which talk about noise and the associated estimation problems. My velocity estimate (integration of the accelerometer data over time) drifts off as a ramp because of the noise present in the MPU9250.
Can velocity be estimated from the accelerometer alone, or do we need the assistance of another sensor such as GPS, a gyroscope, etc.?
Please let me know, as my velocity calculations never converge at all.
Also, I have limited compute power, so Kalman-filter-style estimation techniques are difficult to implement. Can you please suggest whether I am heading in the right direction or not?
|
I am working on an Arduino based robot which engages a braking mechanism when it detects anything in front. I was using an ultrasonic sensor to detect obstacles, which worked well while the robot was on my table (i.e. under construction). But when I ran it on the ground, it doesn't stop and crashes.
The robot is programmed so that if anything is detected 50 cm ahead of the robot, the braking mechanism stops the wheels. But when testing, the robot just wouldn't stop.
My robot runs at an average of 7.5 m/s. Thinking that the Doppler effect might have rendered my sensor useless, I tried a small IR sensor I had lying around (range approx. 25 cm), but that didn't work either.
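For reference, my own timing estimate (assuming a constant 7.5 m/s): the robot covers the 50 cm detection distance in $0.5 / 7.5 \approx 0.067$ s, so sensing, the Arduino loop and the brake actuation would all have to fit in well under 70 ms, and that ignores the stopping distance itself.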
What am I doing wrong here?
|
I've implemented an SMC (sliding mode controller) on a WMR in both the X-Y and X-Z planes.
Now I want to combine both of these to control the WMR in 3D. For this purpose I'm trying to use the resultant vector of the simulation in the X-Y plane and track that resultant vector in the X-Z plane as the value of X in the previously designed code. Tracking control of the resultant vector is shown in figure 1, while the vector sum decomposed into rectangular coordinates after simulation is shown in figure 2.
Am I going wrong?
What other techniques can I apply to do 3D control of the vehicle using a sliding mode controller?
Can I reduce the time-delay offset? I've implemented the right equations for the SMC tracking controller, but the simulation does not give exact results. These equations work well for control of the vehicle in two dimensions (X-Z plane).
|
I have the mBot robot and I'm trying to get it to go to the other side of a cylindrical obstacle.
Something like this:
What I know:
Radius of the cylinder - r
Robot's distance from the cylinder
Wheel thickness - 1.5 cm
Distance between the middle of each wheel - 11.5 cm
How would I achieve the above path?
The only thing I saw was this SO question that says:
The distance between the left and right wheel of the robot is 6
inches.
So the left wheel should travel at a distance of 2(pi)(radius+6)
And the right wheel should travel at a distance of 2(pi) (radius-6)
The problem with my robot is that you can't tell it to go 20cm to the right, nor can you tell it to turn 90 degrees to the right.
All you can do is set each motor's speed to 0-255, so there's no way to plug it into the formula distance = speed x time.
I assume I have to set each motor's speed to a different value so that the robot follows a circle of radius x, and then just exit halfway around the circle (as shown in the picture).
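This is the ratio approach I am considering (assuming differential-drive kinematics and that the 0-255 value maps roughly linearly to wheel speed, which I have not verified; setMotorSpeeds is a placeholder for the actual mBot motor calls):
const float WHEEL_TRACK = 0.115f;   // m, centre-to-centre distance between the wheels

// PWM value for the inner wheel, given the outer wheel PWM and the radius of the
// circular path traced by the robot centre (cylinder radius + clearance).
float innerPwm(float outerPwm, float pathRadius) {
    return outerPwm * (pathRadius - WHEEL_TRACK / 2.0f)
                    / (pathRadius + WHEEL_TRACK / 2.0f);
}

// e.g. with the outer motor at 200 and a 0.5 m path radius:
//   inner = 200 * (0.5 - 0.0575) / (0.5 + 0.0575) ≈ 159
// setMotorSpeeds(159, 200);   // placeholder call
Would driving with this fixed ratio until the robot has swept half the circle be the right idea, given that I cannot command distances directly?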
|
I want to write my own inverse kinematics solver, and I have been recommended to use Google's Ceres Solver to help. Now, according to the documentation, Ceres Solver is usually used for non-linear least squares problems (http://ceres-solver.org/nnls_tutorial.html). This minimises the sum of squared differences between the measured and target values, over all data. What I am confused about, is how this relates to inverse kinematics.
In inverse kinematics, with the example of a robot arm, the goal is to determine what joint angles the robot should be positioned in, in order to reach a target end effector pose. There exists a single equation which determines the end effector pose, given the set of joint angles, and we want to determine the parameters in that equation (the joint angles).
But how does this relate to the least-squares problem, where there are multiple measurements? Is the problem I am trying to solve essentially the same, except that the number of measurements is one? And in that case, is using Ceres Solver's non-linear least squares solver really necessary?
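To make my question concrete, this is the kind of single-residual setup I imagine (a toy 2-link planar arm, not my real robot; the link lengths and target are made up):
#include <ceres/ceres.h>
#include <cmath>

struct IkResidual {
    IkResidual(double tx, double ty) : tx_(tx), ty_(ty) {}
    template <typename T>
    bool operator()(const T* const q, T* residual) const {
        using std::cos; using std::sin;                   // double path; Jet overloads found via ADL
        const double l1 = 1.0, l2 = 0.8;                  // assumed link lengths
        T x = l1 * cos(q[0]) + l2 * cos(q[0] + q[1]);     // forward kinematics
        T y = l1 * sin(q[0]) + l2 * sin(q[0] + q[1]);
        residual[0] = x - T(tx_);                         // the single "measurement"
        residual[1] = y - T(ty_);
        return true;
    }
    double tx_, ty_;
};

int main() {
    double q[2] = {0.1, 0.1};                             // initial joint guess
    ceres::Problem problem;
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<IkResidual, 2, 2>(new IkResidual(1.2, 0.6)),
        nullptr, q);
    ceres::Solver::Options options;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    return 0;
}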
Thanks!
|
I am a beginner to ROS and I wanted to know if I could build a simple robot to learn ROS.
I currently have the following components available:
Arduino Uno
Simple two wheeled robot chassis
Some motors
L293D motor driver
Some ultrasonic sensors
Some infrared sensors
|
I have a differential equation that connects the "velocity" of a point in the FOV of a camera with the velocities of a robot's joints, that is $$\dot s=J(s) \dot q$$ where s is a vector with the $x$,$y$ coordinates of the point in the FOV, $J$ is the interaction matrix and $q$ is the vector of the joint positions.
If I have a certain point whose velocity I am tracking and this point remains in the FOV, then $\dot s$ is well defined. But if I change this point online, that is at the time instant $t$ I have point $s_t$ and at the time instant $t+dt$ I have the point $s_{t+dt}$, then $\dot s$ is not defined.
Can I create a filter to produce a continuous variation of $\dot s$? If not, what can I do?
More specifically, I want to perform occlusion avoidance. In order to do this I want to compute the minimum distance of each feature point of my target object from the possibly occluding object. But, obviously, this distance can be discontinuous due to the fact that another possibly occluding object can appear in the FOV nearer to my target than the previously measured.
|
With the problem of stabilising an inverted pendulum on a cart, it's clear that the cart needs to move toward the side the pendulum leans. But for a given angle $\theta$, how much should the cart move, and how fast? Is there a theory determining the distance and speed of the cart or is it just trial and error? I've seen quite a few videos of inverted pendulum, but it's not clear how the distance and speed are determined.
|
Apologies if this isn't really the right place to be asking, but I was wondering whether third-party design firms are ever contracted to design industrial and/or consumer robots?
If not, is it something that is usually done in-house, and who within an organization would usually take care of this process?
Thanks.
|