Looking at pictures of existing designs for quadruped robots, the servos in the legs usually seem to be mounted inside the chassis, with a second attachment at the back of the servo as well, such as this:
rather than being mounted with what looks like an asymmetrical load on the axle, like the knees here:
Is this for aesthetics or are there real structural reasons to minimize the lateral load on the axle on a robot of this size?
|
Good day, I have just finished tuning the pitch and roll PIDs. I did this by setting the throttle such that the quad is weightless, and I tuned each axis separately.
I would just like to ask what the best way is to tune the PID for maintaining an altitude setpoint.
Is it best to turn off the pitch and roll PID controllers while tuning the altitude PID, or is it best to have them already active while tuning the latter controller?
I am going to use a cascaded PID controller, using the velocity along the z-axis calculated from the accelerometer output for the inner PID loop (150 Hz) and the altitude measurement of the HC-SR04 ultrasonic sensor (20 Hz) for the outer PID loop.
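For reference, the loop structure I have in mind looks roughly like this sketch (the gains, limits, and names are placeholders, not my actual values):

// Minimal sketch of the cascaded altitude controller described above.
// Outer loop (20 Hz): altitude error -> desired climb rate.
// Inner loop (150 Hz): climb-rate error -> throttle adjustment.
float altitudeSetpoint  = 1.0f;  // metres (placeholder)
float climbRateSetpoint = 0.0f;  // m/s, written by outer loop, read by inner

void outerLoop20Hz(float sonarAltitude) {
    const float kpOuter = 1.5f;                  // placeholder gain
    climbRateSetpoint = kpOuter * (altitudeSetpoint - sonarAltitude);
}

void innerLoop150Hz(float accelVelocityZ, float dt) {
    const float kpInner = 0.8f, kiInner = 0.2f;  // placeholder gains
    static float integral = 0.0f;
    float error = climbRateSetpoint - accelVelocityZ;
    integral += error * dt;
    float throttleAdjust = kpInner * error + kiInner * integral;
    (void)throttleAdjust; // added to the hover throttle before motor mixing
}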
|
As a subtask inside a main project, I need to compute the position (x,y,z) of a quadrotor using a homography.
To do this I will use a camera (attached to the quadrotor) pointing down at an artificial landmark on the floor. Basically I need to compute the extrinsic parameters of the camera to know its pose with respect to the landmark. I know the projected points of the landmark in the camera image and the intrinsic matrix of the camera, but I also need the real landmark position [X, Y, Z].
I suppose that the Z coordinate is equal to 0 because the landmark is planar, but I am not sure how to compute the real [X, Y] coordinates.
Any idea how to do that?
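For what it's worth, the kind of computation I have in mind looks like this sketch (assuming OpenCV and a square landmark whose side length L I measure myself; the corner coordinates in the landmark frame are then simply chosen by me, with Z = 0):

#include <opencv2/calib3d.hpp>
#include <vector>

// Pose of the camera with respect to a planar landmark of known size.
cv::Mat cameraPositionFromLandmark(const std::vector<cv::Point2f>& imageCorners,
                                   const cv::Mat& K, const cv::Mat& distCoeffs,
                                   float L /* landmark side length, metres */)
{
    // The "real" [X, Y, Z] corner positions are defined by me: I place the
    // landmark frame at one corner, so the corners sit at known positions.
    std::vector<cv::Point3f> objectCorners = {
        {0, 0, 0}, {L, 0, 0}, {L, L, 0}, {0, L, 0}
    };
    cv::Mat rvec, tvec, R;
    cv::solvePnP(objectCorners, imageCorners, K, distCoeffs, rvec, tvec);
    cv::Rodrigues(rvec, R);      // rotation vector -> 3x3 rotation matrix
    return -R.t() * tvec;        // camera position in the landmark frame
}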
I am also interested in feeding the (x,y,z) position of the quadrotor into a path controller; does anybody know where I can find info about the most common controllers for this kind of task?
|
A quaternion has four parameters. When calculating the Jacobian for inverse kinematics, 3 position parameters and 4 quaternion parameters make the Jacobian $7\times7$ instead of $6\times6$. How can the Jacobian be reduced to $6\times6$ when using quaternions?
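For context, the reduction I have seen mentioned (stated here as my understanding, not a definitive recipe) is to express the task-space rotational velocity by the angular velocity $\omega \in \mathbb{R}^3$ instead of the quaternion rate $\dot{q} \in \mathbb{R}^4$, using
$$\dot{q} = \frac{1}{2}\, q \otimes \begin{bmatrix} 0 \\ \omega \end{bmatrix},$$
so the Jacobian maps joint rates to $(\dot{p}, \omega) \in \mathbb{R}^6$ and stays $6 \times n$.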
|
I am working on a non-holonomic motion planning problem for a mobile robot in a completely unknown environment. After going through some research papers, I found that the D* algorithm is widely used in such conditions. But there are many D* variants, like Focused D*, D* Lite, Field D*, etc. Which of these variants is suitable in this case? Also, please suggest any better approach to this problem.
|
In my research project I deal with a mobile robot that perceives through stereo vision. As stereo input data I currently use several datasets taken from a passenger vehicle that contain real-world photos. The datasets are good to get started with, but their content is limited, so I would need to model my own traffic situations to further work on the stereo vision system.
I am thinking about using some kind of synthetic graphics simulation as the input for the stereo system. What are my options? I imagine a 3D graphics rendering engine, whose output would be fed as the input to the stereo vision system, could probably be used.
I found there are general robotic simulators available like Gazebo but since I am all new to robotic simulation I do not really know where to begin.
EDIT:
I forgot to write that all my code is pure C++. I use OpenCV and LIBELAS for stereo vision and the Point Cloud Library (PCL) for visualization, all glued together into a single C++ project that compiles into a single binary.
|
I would like to make a little survey regarding the (geo)spatial projection that you use when processing your GPS and movement data for the spatial awareness of your robots.
Moving all GPS coordinates to a planar projection seems to be the more reasonable choice, since not only distances (for which several formulas and approximations exist) but also bearings must be computed.
Generally, although scales are pretty small here, avoiding the equirectangular approximation seems a good idea in order to keep a more consistent system.
Avoiding working in the 3D world (Haversine and other great-circle computations) is probably a good idea too, to keep computations low-cost.
Moving the world to a 2D projection hence seems to be the best solution, even though reprojection of all input GPS coordinates is needed.
I would like to get opinions and ideas on the subject
(...if ever anyone is interested in doing it U_U).
|
I have to simulate a pick-and-place robot (3 DOF) that picks and places different objects according to their geometry. I tried with MATLAB.
Where can I find similar m-code and algorithms?
|
I have a Kinect Sensor, and iPi software I use to create motion capture data to use in film editing. I am looking at creating a small, Raspberry Pi driven bipedal robot just for fun, and I was wondering if it was possible to use the MoCap to control the robot? It will only be 20-30 cm tall, with six servos (hips, knees, ankles). Is it possible to apply the movement from these six joints on the human body to my robot, like having a string directly from my left knee joint to its left knee servo? It could either be in real-time, like following my actions, or using pre-recorded data.
(NOTE: If needed, I can plug it directly to my Apple/Windows computer, if the Pi could not support this. Also, it will have no upper torso at the moment.)
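To be concrete about the "string from my knee to its knee servo" idea, this is the kind of one-to-one mapping I mean (a sketch; the joint range and pulse widths are guesses, not specs of any particular servo):

// Map one MoCap joint angle to one servo pulse width, clamped to range.
int kneeAngleToMicroseconds(float angleDeg) {
    const float minDeg = 0.0f,  maxDeg = 130.0f;  // assumed knee range
    const int   minUs  = 1000,  maxUs  = 2000;    // typical servo pulses
    if (angleDeg < minDeg) angleDeg = minDeg;
    if (angleDeg > maxDeg) angleDeg = maxDeg;
    return minUs + (int)((angleDeg - minDeg) / (maxDeg - minDeg)
                         * (maxUs - minUs));
}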
|
I am doing a project on an automated grain dispenser system using PLC control. I need a valve for dispensing grain from a hopper into packets, and I should be able to control the flow of the grain.
So what kind of valve should I use for flow control of the grain? There are different types of grain, like rice, wheat, etc., and the valve should be controlled by the PLC (opening and closing of the valve).
|
I've built a quadcopter with four brushless motors and ESCs (30 A). I'm using an Arduino to control them. I haven't written any complex code; just enough to get them running. Everything is fine until I send a number over 920 over serial; then, for some reason, all the motors stop spinning. I'm using three freshly bought and charged LiPo cells (V = 11.1V). Here is the link for the site that I bought them from (I cannot seem to find any other resource about them): 4x A2212 1000KV Outrunner Motor + 4x HP 30A ESC + 4x 1045 prop (B) Quad-Rotor.
When I tried turning on only one motor, I could write up to about 1800 microseconds, while with both 4 motors and 1 motor, the minimum value that works is 800.
Can somebody explain why this happens and how I can fix it?
Here is my code:
#include <Servo.h>

int value = 0;                 // pulse width in microseconds, set over serial
Servo first, second, third, fourth;

void setup() {
  Serial.begin(9600);          // start serial at 9600 baud
  // attach one ESC per PWM-capable pin
  first.attach(6);
  second.attach(9);
  third.attach(10);
  fourth.attach(11);
}

void loop() {
  // continuously refresh the same pulse width on all four ESCs
  first.writeMicroseconds(value);
  second.writeMicroseconds(value);
  third.writeMicroseconds(value);
  fourth.writeMicroseconds(value);
  // update the pulse width when a new number arrives over serial
  if (Serial.available() > 0) {
    value = Serial.parseInt();
  }
}
|
I'm having a hard time trying to understand how to obtain the dynamic model for a system similar to the image.
The balloon is a simple helium balloon; however, the box is actually an aerial differential-drive platform (using rotors). There is basically one model for the balloon and another for the actuated box, but I am at a loss as to how to combine the two.
The connection between them is not rigid, since it is a string.
How should I do it? Is there any documentation you could point me to, in order to help me develop the dynamics model for this system?
Since I'm so lost, any help will be useful. Thanks in advance!
|
What's the difference between an underactuated system and a nonholonomic system? I have read that "the car is a good example of a nonholonomic vehicle: it has only two controls, but its configuration space has dimension 3." But I thought that an underactuated system was one where the number of actuators is less than the number of degrees of freedom. So are they the same?
|
I am working on designing and building a small (1 1/2 lbs), 2-wheeled, differential drive Arduino-controlled autonomous robot. I have most of the electronics figured out, but I am having trouble understanding how much torque the motors will actually need to move the robot. I am trying to use the calculations shown here and the related calculator tool to determine what speed and torque I will need. I will be using wheels 32mm in diameter and one of Pololu's High-Power Micro Metal Gearmotors. I performed the calculations for a robot weight of 2 lbs to be safe and found that the 50:1 HP Micro Metal Gearmotors (625 RPM, 15 oz-in) should theoretically work fine, moving the robot at 3.43 ft/s with an acceleration of around 29 ft/s^2 up a 5-degree incline.
However, I have not found an explanation for several things that I think would be very important to know when choosing drive motors. When the robot is not moving and the motors are turned on at full power, they should need to deliver their full stall torque. Based on the calculations, it seems that any amount of torque can get the robot moving, but the more torque, the faster the robot's acceleration. Is this true? Also, if the power source cannot supply the full stall current of the motors, will the robot not be able to start moving? In my case, I am powering the robot through a 7.2V (6S) 2200mAh NiMH battery pack that can provide around 2.6A continuously, and when it does that the voltage drops to less than 1V. Will this be able to power my motors? Once the robot reaches full speed and is no longer accelerating, theoretically the motors will not be providing any torque, but I do not think this is the case. Is it, and if so, how will I know how much torque they will be providing? Will the motors I chose have enough torque to move my robot?
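For reference, my rough version of the sizing arithmetic (ignoring rolling resistance and drivetrain losses, and splitting the load between the two motors) was:
$$F = m\,(a + g\sin\theta) \approx 0.91\,\text{kg} \times (8.8 + 9.81\sin 5^\circ)\,\text{m/s}^2 \approx 8.8\,\text{N}$$
$$\tau_{\text{per motor}} = \frac{F\,r}{2} \approx \frac{8.8 \times 0.016}{2}\,\text{N·m} \approx 0.07\,\text{N·m} \approx 10\,\text{oz-in}$$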
|
So I am planning on building a robot that turns on when it detects some kind of heat source. I am currently looking at thermal imaging cameras, but I am not sure how to go about writing code to send a ping or some sort of message when the camera detects a heat source.
Does anyone know of any way to do this?
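For example, something like this minimal OpenCV sketch (hypothetical threshold and area values; I treat the thermal image as grayscale) is the kind of trigger I have in mind:

#include <opencv2/opencv.hpp>
#include <vector>

// Return true when a sufficiently large hot region appears; the caller
// would then send the ping/message and wake the robot.
bool heatSourceDetected(const cv::Mat& thermalGray) {
    cv::Mat hot;
    cv::threshold(thermalGray, hot, 200, 255, cv::THRESH_BINARY);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(hot, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : contours)
        if (cv::contourArea(c) > 50.0)   // ignore single-pixel noise
            return true;
    return false;
}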
Thanks
|
I've started tinkering with a Create 2, but I'm having issues reliably getting it to accept my commands. I can occasionally get it right, but sometimes it just seems to ignore me. I'm guessing my cleanup code isn't getting the state fully reset or something. Is there a good pattern to follow for fail-safe initialization code?
Here's what I'm doing right now:
Pulse BRC low for 1 second
Wait 1 second
Send 16x 0 bytes (to make sure if it's waiting for the rest of a command, this completes it - seemed to help a bit when I added this)
Send 7 (reset)
Wait 10 seconds
Send 128 (start)
Wait 2 seconds
Send 149 35 (ask for the current OI state)
Wait 1 second
Send 131 (safe mode)
Sometimes I'm then able to issue 137 (drive) commands and have it work. Most times it doesn't. The times when it doesn't, I'm seeing a lot of data coming from the Create 2 that I'm not expecting, that looks something like this (hex bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3f 2a ff 73 21 09 cc 0a 88 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3f 2a ff 73 21 09 cc 0a
There's more, but my logging cut it off. I get the same pattern a couple of times, and it seems to be at least partially repeating. I thought maybe it's the 16 0-bytes I sent followed by 003f 2aff 7321 09cc 0a88, but I still don't know how to interpret that.
Sometimes it will make some noise for the reset command, but usually ignores the start/safe mode commands completely (I can tell because the green light stays on).
|
I am designing an indoor autonomous drone. I am currently writing an object classification program in OpenCV for this purpose. My objects of interest for classification are: ceiling fans; AC units; wall and ceiling lamps; and wall corners. I am using the BoW clustering algorithm along with an SVM classifier to achieve this (I'm still in the process of developing the code, and I might try other algorithms when testing).
The primary task of the drone is to successfully scan (by which I mean moving or hovering over the entire ceiling space) the ceiling of a given closed region while successfully avoiding any obstacles (like ceiling fans, AC units, and ceiling and wall lamps). The drone's navigation, i.e. the scanning process over the ceiling space, should follow an organised pattern, preferably moving in tight zig-zag paths over the entire ceiling space.
Having said that, I'm trying to implement the following to achieve this goal:
On take-off, fly around the given closed ceiling space and use SLAM to localise the drone and map its environment.
While running SLAM, run the object classifier algorithm to classify the objects of interest and track them in real time.
Once a detailed map of the environment is obtained and all objects of interest in the local environment are classified, integrate both data sources to form a unified map; that is, on the SLAM output, label the classified objects obtained from the classifier algorithm. Now we have a full, comprehensive map of the environment with labelled objects of interest and real-time tracking of them (localization).
Now pick a random corner on the map and plan a navigation pattern in order to scan the entire ceiling space.
So the question here is: will using object classification in real time yield successful results in multiple environments (the quad should be able to achieve the above-mentioned tasks in any given environment)? I'm using a lot of training image sets to train my classifier and BoW dictionary, but I still feel this won't be a robust method, since in real time it will be harder to isolate an object of interest. Or, to overcome this, should I use realistic, real-situation training images (currently my training images only contain isolated objects of interest)?
Or, in my case, is using computer vision redundant? Is my goal completely achievable using SLAM alone? If yes, how can I classify the objects of interest (I don't want my drone to fly into a running ceiling fan after mistaking it for a wall corner or edge)? Furthermore, are there any other methods or sensors, of any type, to detect objects in motion? (Using the optical-flow computer vision method here is useless because it's not robust enough in real time.)
Any help and advice is much appreciated.
|
Assuming a quality industrial servo, would it be possible to calculate the weight/resistance of a load? Maybe by comparing current draw in a holding position, or by the time it takes to lift/lower an object. Could it accurately measure grams or kilograms? What kind of tolerance could be achieved?
I'm trying to eliminate the need for a dedicated weight measurement sensor.
|
I have been stabilizing my quadcopter. I tuned my angle PIDs, and the quadcopter tries to stabilize itself, but there is some overshoot, which I think is due to the gyro rates. I have read that we have to use two PIDs per axis, but I'm having problems connecting these two PIDs.
Can anyone help me with cascading the angle PID and the rate PID? Will I have to tune the rate PID after tuning the angle PID?
|
I plan to build a mechanism with multiple axes, which is similar to a robot. To start, I need to define some specifications such as repeatable precision, speed, acceleration, and payload. Then the motors and structure are selected and designed based on these parameters. After that, I need to choose methods to manufacture these components. I would like to ask the experienced experts in this forum: are there any suggested books, textbooks, or website resources from which I can learn this knowledge?
|
I have constructed a 3-DOF robot arm. I want it to follow a trajectory on a 2D plane (XY). The shapes I want to follow are lines, circles, and splines, and I know the math behind these 3 shapes (how they are defined). I have the kinematics, the inverse kinematics, the Jacobian, and the whole control system (with the PID controller). The system receives as inputs Xd (position), Xd' (velocity), and Xd'' (acceleration) over time.
I found the following image that shows (more or less) my system.
So here is where I am stuck: how do I translate the shape into the position, velocity, and acceleration that each joint needs, so that the end effector moves in Cartesian space according to that shape?
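For what it's worth, the Cartesian side is clear to me; e.g. for a circle I can generate the inputs like this sketch (made-up radius, centre, and period), and my doubt is how to go from these to the joint-space references:

#include <cmath>

// Time-parameterize a circle of radius R about (cx, cy) with period T,
// producing the Xd, Xd', Xd'' inputs mentioned above. The joint-space
// values would then come from IK / the Jacobian (e.g. qd' = J^{-1} Xd').
void circleReference(double t, double& x, double& y,
                     double& vx, double& vy, double& ax, double& ay) {
    const double R = 0.1, cx = 0.3, cy = 0.0, T = 5.0;  // made-up values
    const double w = 2.0 * M_PI / T;                    // angular frequency
    x  = cx + R * std::cos(w * t);
    y  = cy + R * std::sin(w * t);
    vx = -R * w * std::sin(w * t);                      // Xd'
    vy =  R * w * std::cos(w * t);
    ax = -R * w * w * std::cos(w * t);                  // Xd''
    ay = -R * w * w * std::sin(w * t);
}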
|
I have recently started reading about PID tuning methods and algorithms, and I encountered the particle swarm optimization algorithm and the genetic algorithm.
The problem is that I don't understand how each particle/chromosome determines its fitness. On a real physical system, does each particle/chromosome check its fitness on the system? Wouldn't that take a really long time? I think I am missing something here... Can these algorithms be implemented on an actual physical system? If so, how?
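For concreteness, my current understanding is that each candidate's fitness is usually evaluated on a simulated model of the plant rather than on the hardware, something like this sketch (with a made-up first-order plant and an ITAE cost):

#include <cmath>

// Fitness of one candidate gain set, evaluated in simulation.
// Lower cost = better; ITAE = integral of time-weighted absolute error.
double fitness(double kp, double ki, double kd) {
    const double dt = 0.001, tEnd = 5.0, setpoint = 1.0;
    double y = 0.0, integral = 0.0, prevErr = setpoint, cost = 0.0;
    for (double t = 0.0; t < tEnd; t += dt) {
        double err = setpoint - y;
        integral += err * dt;
        double deriv = (err - prevErr) / dt;
        double u = kp * err + ki * integral + kd * deriv;
        prevErr = err;
        y += dt * (-y + u);              // toy plant: dy/dt = -y + u
        cost += t * std::fabs(err) * dt; // accumulate ITAE
    }
    return cost;
}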
|
I want to simulate the detection of a moving object by a unicycle-type robot. The robot is modelled with position (x,y) and direction theta as the three states. The obstacle is represented as a circle of radius r1 (r_1 in my code). I want to find the angles alpha_1 and alpha_2 from the robot's local coordinate frame to the circle, as shown here:
So what I am doing is trying to find the angle from the robot to the line joining the robot and the circle's centre (this angle is called aux_t in my code), then finding the angle between the tangent and the same line (called phi_c). Finally, I find the angles I want by adding and subtracting phi_c from aux_t. The diagram I am thinking of is shown:
The problem is that I run into trouble in my code when I try to find the alpha angles: it starts calculating the angles correctly (though with negative values; I am not sure if this is causing my trouble), but as the car and the circle get closer, phi_c becomes larger than aux_t and one of the alphas suddenly changes its sign. For example, I am getting this:
$$\begin{array}{c c c c}
\text{aux\_t} & \text{phi\_c} & \text{alpha\_1} & \text{alpha\_2} \\ \hline
-0.81 & +0.52 & -1.33 & -0.29 \\
-0.74 & +0.61 & -1.35 & -0.12 \\
-0.69 & +0.67 & -1.37 & -0.02 \\
-0.64 & +0.74 & -1.38 & +0.10 \\
\end{array}$$
So basically, alpha_2 goes wrong from here. I know I am doing something wrong, but I'm not sure what; I don't know how to limit the angles from 0 to pi. Is there a better way to find the alpha angles?
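For reference, this is the computation I mean, rewritten as a sketch with the explicit wrapping I suspect I am missing:

#include <cmath>

// Wrap an angle to (-pi, pi].
double wrapToPi(double a) {
    while (a >  M_PI) a -= 2.0 * M_PI;
    while (a <= -M_PI) a += 2.0 * M_PI;
    return a;
}

// x, y, theta: robot pose; xc, yc: circle centre; r1: radius (needs d > r1).
void tangentAngles(double x, double y, double theta,
                   double xc, double yc, double r1,
                   double& alpha_1, double& alpha_2) {
    double d = std::hypot(xc - x, yc - y);          // distance to centre
    double aux_t = wrapToPi(std::atan2(yc - y, xc - x) - theta);
    double phi_c = std::asin(r1 / d);               // tangent half-angle
    alpha_1 = wrapToPi(aux_t - phi_c);
    alpha_2 = wrapToPi(aux_t + phi_c);
}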
|
If I had a single stepper motor, how could I use it to create a robotic clamp that could simply grab hold of something like a plank of wood and release it?
Are there any standard parts that I could use for this? I'm having trouble finding out what the names of the parts would be.
|
How accurate must my odometer reading be for SLAM?
|
I need to develop something to update some coordinates in a predefined program on a KUKA KR C4 robot.
After some research I've found some ways to do it, but all of them are non-free.
I had several options, like developing an HMI on the console with 3 buttons, to touch up the 3 coordinates that I have to update, for example.
Sending an XML file would work too, but I need an RSI connection, and I can't do that without the proper software (I guess).
Do you know about something like this? Or a C++ library that allows me to access the .src/.dat files, or to create a new one with the same "body" but with different coordinates?
Summing up: imagine that I have a conveyor that carries boxes and I need to develop a pick-and-place program. So far so good. But every 100 boxes, the size changes (and I can't predict it). So the operator goes and updates the coordinates, but I want to make sure that he won't change anything else in the program. Any ideas?
|
If I need to fly a drone in strong winds, how can I stabilize it? Should I use accelerometers and gyroscopes to keep it steady? Or should I just use some flight technique under such circumstances?
|
I am designing a multi-modal stent-testing machine which will bend, twist, and compress stents (very thin, light, and fragile cylindrical meshes used in arteries) in a tube. The machine will operate at a maximum of 3.6 Hz for months at a time (> 40 million cycles). As the machine will be in a lab with people, the noise should be minimal. I am choosing actuators for my design but was overwhelmed by the range of products available.
For rotating the stents around their axis, I will need a rotary actuator with the following specs:
torque: negligible
max angle: 20 deg
angular velocity needed: max 70 deg/s
hollow shafts are a plus
For compressing the stents, I will need a linear actuator with the following specs:
force: low (<1N)
max stroke: 20mm but if possible 70mm for allowing different stent lengths
stroke velocity needed: max 120mm/s
Price of these motors is not the driving factor.
I looked into stepper motors, servo motors, and piezoelectric motors. There seems to be a huge selection that fits my requirements. If all motor types have a reliability that suits my needs, which characteristics/advantages/disadvantages should I consider when selecting suitable actuators? I do know what the difference is between the motor types, but there is a lot of overlap. Concrete suggestions are welcome.
|
I am trying to get my robot to drive straight and am having trouble. I find that when running the motors with no load they run fine. If I put a load on one motor, it accelerates; the other performs as expected and tries to maintain speed. I am running 393 motors with encoders and PID selected. I am running RobotC.
See the following video: https://youtu.be/u3P0Wectwco
The program is as follows:
#pragma config(I2C_Usage, I2C1, i2cSensors)
#pragma config(Sensor, dgtl12, killB, sensorTouch)
#pragma config(Sensor, I2C_1, , sensorQuadEncoderOnI2CPort, , AutoAssign )
#pragma config(Sensor, I2C_2, , sensorQuadEncoderOnI2CPort, , AutoAssign )
#pragma config(Sensor, I2C_3, , sensorQuadEncoderOnI2CPort, , AutoAssign )
#pragma config(Motor, port2, rmotor, tmotorVex393_MC29, PIDControl, reversed, driveRight, encoderPort, I2C_1)
#pragma config(Motor, port3, lmotor, tmotorVex393_MC29, PIDControl, driveLeft, encoderPort, I2C_2)
#pragma config(Motor, port4, topmotor, tmotorVex393_MC29, openLoop, encoderPort, I2C_3)
#pragma config(Motor, port5, pmotor, tmotorVex393_MC29, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//
void StopAll(){
  motor[rmotor]   = 0;
  motor[lmotor]   = 0;
  motor[topmotor] = 0;
  motor[pmotor]   = 0;
}

//Stops the program at the push of a button
task eStop(){
  while (SensorValue(killB) == 0){
    wait1Msec(10);
  }
  StopAll();
  stopAllTasks();
}

task main()
{
  startTask(eStop);
  nMotorEncoder[rmotor] = 0;
  nMotorEncoder[lmotor] = 0;
  motor[rmotor] = 15;
  motor[lmotor] = 15;
  wait1Msec(20000);
  motor[rmotor] = 0;
  motor[lmotor] = 0;
  StopAll();
}
Thank you,
Mark
|
Why would a drone need a magnetometer? What would the drone do with this information? I think it would be to tell direction, but why would it need this if it has an accelerometer and a gyroscope?
|
I want to implement RRT for motion planning of a robotic arm. I searched a lot on the internet for some sample code of RRT for motion planning, but I didn't find any. Can someone please suggest a good source where I can find RRT implemented in C++ for any type of motion planning?
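To show the level of detail I am after, here is a minimal 2D RRT sketch I could follow (obstacle-free, Euclidean steering; an arm planner would sample joint space and add collision checks):

#include <cmath>
#include <cstdlib>
#include <vector>

struct Node { double x, y; int parent; };

// Grow a tree from (sx, sy) toward random samples until near (gx, gy).
std::vector<Node> rrt(double sx, double sy, double gx, double gy) {
    const double step = 0.05, goalTol = 0.05, range = 1.0;
    std::vector<Node> tree{{sx, sy, -1}};
    for (int i = 0; i < 10000; ++i) {
        double rx = range * std::rand() / RAND_MAX;   // random sample
        double ry = range * std::rand() / RAND_MAX;
        int nearest = 0; double best = 1e9;
        for (int j = 0; j < (int)tree.size(); ++j) {  // nearest neighbour
            double d = std::hypot(tree[j].x - rx, tree[j].y - ry);
            if (d < best) { best = d; nearest = j; }
        }
        if (best < 1e-9) continue;                    // sample sits on a node
        // steer one fixed step from the nearest node toward the sample
        double nx = tree[nearest].x + step * (rx - tree[nearest].x) / best;
        double ny = tree[nearest].y + step * (ry - tree[nearest].y) / best;
        tree.push_back({nx, ny, nearest});
        if (std::hypot(nx - gx, ny - gy) < goalTol) break;  // goal reached
    }
    return tree;  // follow parent indices back from the last node for a path
}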
|
I need my motor to be powered at 12V, 5A for 1 hour continuously. How can I decide the Ah rating of the battery? Please suggest a lithium-ion battery for this specification.
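For reference, the basic arithmetic before any derating is $5\,\text{A} \times 1\,\text{h} = 5\,\text{Ah}$; a somewhat larger pack (e.g. 6-7 Ah) would leave margin for usable-capacity and discharge-rate limits.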
|
I'm implementing a set of loops to control pitch-and-roll angular positions.
In an inner-loop, motor speeds are adjusted to achieve desired angular rates of rotation (the "inner-loop setpoints").
An outer-loop decides these desired angular rates (the "inner-loop setpoints") based on the aircraft's angular positions.
Outer-loop
Frequency = ~400Hz
Outer PV = input angular position (in degrees)
Outer SP = desired angular position - input angular position (in degrees)
Inner-loop
Frequency = ~760Hz
Inner PV = input angular rotation (in degrees-per-second)
Inner SP = constant1 * Outer MV (in degrees-per-second)
PWM = Inner MV / constant2 (as percentile)
I understand what I-gain does and why this is important, but I'm not able to see any practical reason for also having I-gain specified in the outer-loop. Surely the inner-loop would compensate for any accumulated error, leaving no error to compensate for in the outer-loop, or is my thinking flawed?
Any example gain values to elaborate would be greatly appreciated.
|
I want to estimate the poses of a vehicle at certain key frames. The only sensor information I can use comes from an IMU, which yields translational acceleration and orientation measurements. I obtain a 7D pose, i.e. a 3D position vector + a unit quaternion orientation, if I integrate the translational acceleration twice and propagate the orientation measurements.
If I want to add a new edge to the graph I need a constraint for that edge. In general, for pose graphs this constraint represents a relational transformation $z_{ij}$ between the vertex positions $x_i$ and $x_j$ that are connected by the edge.
Comparing my case to the literature, the following questions arose:
How do I calculate a prediction $\hat{z}_{ij}$ that I can compare to a measurement $z_{ij}$ when computing the edge error? Initially, I understood that graph SLAM models the vertex poses as Gaussian-distributed variables, and thus a prediction is simply calculated as $\hat{z}_{ij}=x_i^{-1} x_j$.
How do I calculate the information (preferred) or covariance matrix?
How and when do I update the information matrices? During optimization? Or only at edge creation? At loop closure?
I read about the chi-square distribution and how it relates to the Mahalanobis distance. But how is it involved in the above steps?
Studying current implementations (e.g. mrpt-graph-slam or g2o), I didn't really discover how predictions (or any probability density function) are involved. In contrast, I was even more confused when reading the mrpt-graph-slam example, where one can choose between raw poses and poses that are treated as means of a probability distribution.
|
I implemented a simulation of a robotic arm that has to grab things. The arm has a 6-DOF structure and a simple gripper at its end. I made a simple CCD IK algorithm to control the arm. I can use it in two ways:
Compute the position of the last joint of the arm before the hand part (which means 1 end-effector), then use an analytical method to place the hand in a good orientation.
Compute the arm and hand position directly, by giving the CCD IK algorithm 2 end-effectors, which are the 2 fingers of the hand.
What is the most-used method for a grabbing arm robot? I'm not trying to find a solution, just to know what people usually do.
|
I use Gazebo to simulate a robot arm. To control its joints, I use PID controllers. As you might know, PIDs can be pretty hard to tune, and this is the case for a robotic arm. To avoid any tuning, and because I don't need the PID values to be realistic, I set the derivative and integral parameters to zero, increase the proportional gain a lot, and add a lot of damping to my joints. By doing this, I can get a well-working arm, but only if I disable gravity.
My question is the following: do you have an idea how I could simulate a very strong actuator with not necessarily realistic parameters?
EDIT 1: Setting the integral and derivative gains to zero is stupid. The integral gain helps in correcting the effect of gravity, and the derivative gain counters the loss of stability and speed due to the integral gain.
This question somehow leads to another: do you know what tuning the robotic-arm manufacturers use (big arms for the car industry, for example)? I guess these arms use actuators with a very strong torque and a low maximum speed, which reduces the need for tuning.
EDIT 2: More info on my setup. I use Gazebo 6, with ODE. The robot description is in SDF. I control the robot with a model plugin. As a PID controller I use the PID class from Gazebo's common library, and I get the JointController associated with the model directly.
Let's say that I would like very robust actuators without any tuning needed. This way I could have a simulation WITH dynamics (as opposed to the SetPosition method). Do you think this is possible?
|
There is an app called SERIAL, available in the App Store.
I've downloaded it on my Mac and am experimenting with it. Any ideas on how to send Create 2 OI commands using SERIAL?
So far it seems a handy app; I've bypassed all the need for other drivers. Does anyone else use SERIAL or something of the like?
*When the SERIAL terminal is open and the number 9 is pressed on my Mac, it seems to activate cleaning mode. That's all the communication I'm getting after hours of playing around in Python and the Mac terminal.
|
What is the equivalent of "env.CheckCollision(robot)" in C++? Even though it is said that converting commands from Python to C++ is easy and intuitive, where can I find proper documentation for this conversion?
|
I've been looking for large robotic arms (with two fingers) that are able to pick up and drop things in the space around the arm (and even spin around the 'wrist').
I'm not sure what the terminology is for such an arm. I've seen the OWI-535 Robotic Arm Edge, and it looks close. Is there something larger that can be hooked up to a Raspberry Pi instead of the remote controller?
Is there a particular term for this in a generic context? Or is there a way to build such an arm using off-the-shelf parts?
|
I am part of my college team, which is planning to enter a Mars Rover Challenge. From the point of view of a programmer, where should I start? I know C is the main language NASA used for their rover, and I have a basic understanding of it. Also, how much should I look into the RTOS side of building a rover?
Any books/links on this topic would be greatly appreciated.
|
I refer to these types of brackets as servo brackets, or robot brackets:
I know that the two specific brackets shown above are known as a short U (some vendors refer to them as "C", in lieu of "U") and a multi-function bracket, respectively, and that there are other types available, namely:
Long U bracket
Oblique U bracket
i bracket
L bracket
etc.
However, I am sure that there is a correct name for these types of brackets (or this range of brackets, if you will), rather than just servo brackets - either a generic name or a brand name. I have seen the term once before, on a random web page, but the name escapes me. They are either named after their creator or, if I recall correctly, the institution where they were developed.
Does anyone have a definitive answer, preferably with a citation or web reference, or a little historical background?
|
I am currently implementing an autonomous quadcopter which I recently got flying and which was stable, but it is unable to correct itself in the presence of significant external disturbances. I assume this is because of insufficiently tuned PID gains, which have to be further tweaked in flight.
Current progress:
I ruled out a barometer since the scope of my research is only indoor flight and the barometer has a deviation of +-5 meters according to my colleague.
I am currently using an ultrasonic sensor (HC-SR04) for altitude estimation, which has a resolution of 0.3 cm. However, I found that the ultrasonic sensor's refresh rate of 20 Hz is too slow to get a fast enough response for altitude correction.
I tried to use the accelerations along the Z axis from the accelerometer to get height data, integrating the acceleration to get a velocity to be used for the rate PID in a cascaded PID controller scheme. The current implementation of the altitude PID controller is a single-loop PID controller using a P controller, with the position input from the ultrasonic sensor.
I had taken the negative acceleration measurements due to gravity into account, but no matter how I compute the offset, there is still a residual negative acceleration (e.g. -0.0034). I computed the gravitational offset by setting the quadcopter still on a flat surface, then collecting 20,000 samples from the accelerometer z-axis and averaging them to get the "offset", which is stored as a constant. This constant is then subtracted from the accelerometer z-axis output to remove the offset and bring the reading to "zero" when not accelerating. As said above, there is still a residual negative acceleration (e.g. -0.0034), and my quad then proceeds to constantly climb in altitude. With only the ultrasonic sensor P controller, my quad oscillates by 50 cm.
How can this consistent negative acceleration reading be effectively dealt with?
Possible Solution:
I am planning to use a cascaded PID controller for the altitude hold, with the inner loop (PID controller) using the accelerometer and the outer loop (P controller) using the sonar sensor. My adviser said that even a single-loop P controller is enough to make the quadcopter hold its altitude, even with a slow sensor. Is this enough? I noticed that with only the P gain, the quadcopter would overshoot its altitude.
Leaky integrator: I found this article explaining how the author dealt with the negative accelerations using a leaky integrator, but I have a bit of trouble understanding why it would work, since I think the negative error would just turn into a positive error, not solving the problem. I'm not quite sure: http://diydrones.com/forum/topics/multi-rotors-the-altitude-yoyo-effect-and-how-to-deal-with-it
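My reading of the leaky integrator is something like this sketch (the leak factor is a placeholder), though this is exactly the part I am unsure about:

// Integrate vertical acceleration into velocity, but let the estimate
// decay toward zero, so a small constant bias produces a bounded error
// instead of an ever-growing one.
float velocityZ = 0.0f;

void updateVelocity(float accelZ, float dt) {
    velocityZ += accelZ * dt;   // normal integration step
    velocityZ *= 0.995f;        // "leak": bleed off accumulated bias
}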
Single loop PD controller with the ultrasonic sensor only:
Is this feasible using feedback from a slow sensor?
Sources:
LSM303DLHC Datasheet: http://www.st.com/web/en/resource/technical/document/datasheet/DM00027543.pdf
Leaky integrator: http://diydrones.com/forum/topics/multi-rotors-the-altitude-yoyo-effect-and-how-to-deal-with-it
ArduPilot PID Loop: http://copter.ardupilot.com/wp-content/uploads/sites/2/2012/12/Alt-Hold-PID-version-3.0.1.jpg
|
I am a Computer Science major and I only have a basic understanding of robotics. I am planning to build a stationary cubical AI.
The main purpose of this bot will be that it has a sensor to check if the door has been opened, and it immediately asks, "Who has opened the door?" I also want it to recognize the correct words for interaction; I am not talking about voice recognition but word recognition, so that whoever speaks the correct words (words in the bot's memory) can interact with it. Depending on who opens the door (probably my family), I want it to say different things. I want it to respond to simple questions like "What is the date and time?" or to give a random quote, a fact, or a joke.
Is this too hard to achieve? Could anyone give me a basic idea of how to approach this project?
|
I am building a quadcopter for my school project. I am trying to program my own flight controller using PID algorithm.
I'll try to make my question simple, using as an example only two motors:
1-----------2
Let's say I am trying to stabilize my two-motor system, using the gyro, from the state in the diagram below to the one above:
1--
-----
----2
Using the formula Output = (gyro - 0) * Pgain
Do I need to increase the output only on motor 2, or would I have to do both: increase the output on motor 2 while decreasing the output on motor 1? Thank you
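In other words, the symmetric option I am asking about would look like this sketch (gyro, Pgain, and throttle as in the formula above):

// Correct both motors in opposite directions around a common throttle.
void mixTwoMotors(float gyro, float Pgain, float throttle,
                  float& motor1, float& motor2) {
    float output = (gyro - 0.0f) * Pgain;  // the formula from above
    motor1 = throttle - output;            // one motor slows down...
    motor2 = throttle + output;            // ...while the other speeds up
}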
|
Is there any theoretical principle, or postulate, that states that the controlling system has to be more complex than the system being controlled, in any formal sense of the notion "complex"?
|
I am thinking about working on alternative drone controllers. I am looking into making them easier to use, with a more natural feel (debating between sensor bracelets, rings, etc.).
The main issue I have is that I've been looking over the standard RC transmitters used to control RC aircraft, but I am not sure what technology is inside them or what kind of ICs they use for the actual RC signals.
I want more information on how to make an RC transmitter myself: mainly the protocol that's used to send messages, what circuitry is needed to actually transmit it, what kind of components I need, and how I should implement the software.
I was aiming at doing this as a side project (hobby), but now I have the chance to use it as a uni project as well, so I'd like to give it a shot now; I just lack the proper information to get started.
I'd rather not take apart my current RC controller and use an oscilloscope to decode the protocol.
Any answers (short or long) and reading material are appreciated.
Other questions: can the protocol be implemented in software on an embedded system (Raspberry Pi, Arduino, Intel Galileo, etc.)? I am asking this because the frequency for these is 2.4 GHz.
This is part of a bigger project (drone-related currently), and I could use alternative methods of sending the information, through other wireless means, for the first prototype; suggestions are welcome.
Needed: aircraft RC transmitter protocol info, RC transmitter components & schematics, and anything else that might help with the transmission side.
|
I'm trying to attach a small piece of sheet steel (30mm x 50mm x 1mm) to a small piece of nylon (50mm x 50mm x 4mm). Does anyone know how they could be fastened using small screws (
Any thoughts appreciated.
|
As we all know, fixed-wing vehicles are designed to have inherent instability, which is what enables all fixed-wing vehicles to fly.
However, does this apply to all cases?
Do inherently unstable systems become stable in all cases when closed-loop control is implemented on them?
|
Is there any open-interface access to the new Braava jet, just to drive it around?
|
I have read that certain iRobot products support, or can be hacked to support, something close to the Open Interface. There is even a book about hacking the Roomba. Which robots have this capability?
|
I installed multiple versions of APM Planner (2.0.7, 2.0.17, 2.0.18) on Windows 7, Ubuntu 14.04, and OS X 10.11. I could connect to my ArduPilot but could not install firmware. Here's the error I would get:
Started downloading http://firmware.diydrones.com/Copter/stable/apm2-hexa/ArduCopter.hex
Finished downloading /var/folders/r4/s_j4c02s3wvcx6wy41__rnwh0000gp/T/APM Planner.uq1800
Opening firmware file...
Unable to open file: /var/folders/r4/s_j4c02s3wvcx6wy41__rnwh0000gp/T/APM Planner.uq1800
|
From a gyroscope I'm getting angular velocities [dRoll, dPitch, dYaw] in rad/s, sampled at intervals of dt = 10 ms.
How do I calculate the short term global orientation (drift ignored) of the gyroscope?
Pseudo code would be helpful.
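A minimal sketch of one common approach (quaternion integration of the body rates, renormalizing each step; this ignores drift, as stated above):

#include <cmath>

struct Quat { double w = 1, x = 0, y = 0, z = 0; };

// One integration step: q_dot = 0.5 * q * (0, wx, wy, wz), dt in seconds.
void integrateGyro(Quat& q, double dRoll, double dPitch, double dYaw, double dt)
{
    double wx = dRoll, wy = dPitch, wz = dYaw;
    Quat dq;
    dq.w = 0.5 * (-q.x * wx - q.y * wy - q.z * wz);
    dq.x = 0.5 * ( q.w * wx + q.y * wz - q.z * wy);
    dq.y = 0.5 * ( q.w * wy - q.x * wz + q.z * wx);
    dq.z = 0.5 * ( q.w * wz + q.x * wy - q.y * wx);
    q.w += dq.w * dt;  q.x += dq.x * dt;
    q.y += dq.y * dt;  q.z += dq.z * dt;
    double n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q.w /= n;  q.x /= n;  q.y /= n;  q.z /= n;   // renormalize
}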
|
I was reading an article on Euler-Lagrange systems. It is stated there that since M(q) and C(q,q') depend on q, the system is not autonomous, and as a result we cannot use LaSalle's theorem. I have uploaded that page of the article and highlighted the sentence (ren.pdf).
Then I read Spong's book on robotics, and he had used LaSalle's theorem. I am confused (spong.pdf).
I did some research and found out that non-autonomous means the system explicitly depends on the independent variable. Isn't the independent variable time in these systems? So shouldn't they be considered autonomous?
|
Let's say I have an industrial-sized 6-DOF robotic arm. I want to control each of the six joints despite the nonlinearity produced by the chain structure, gravity, and the weight of the loads it could lift.
I am not focusing here on speed or power limitations; I just want the arm to respond well. Moreover, I would like to avoid the use of any prior knowledge such as inertia computations. I have the following considerations in mind, given that I can play with both the actuator design and the feedback control loop:
Limit the maximum speed of each actuator to smooth their error variation.
Increase the damping of the actuators to avoid high frequency instability.
Find a good control system, such as a PID, to make sure the targets are reached without oscillations.
Do you have any other considerations in mind? Do you know what process(es) industrial designers follow?
EDIT: As said in the comments, my question concerns the design of an adaptive controller for a robot arm; that is, how to design a joint control system (actuator + loop control) that doesn't need inertias and masses to be computed (the controller could adapt to its own structure, or to the loads it lifts).
I'd be very interested if you know of any papers about adaptive control in the field of robotic arms.
|
My project requires a DC motor for mobility, very similar to an RC car. If precision isn't critical, can I use a solid state relay instead of a motor driver? If the vehicle moves an extra inch on the ground, I don't really care.
|
There are tons of cameras in the devices around us these days. There are used photo cameras, smartphones, and tablets gathering dust at my home.
I wonder what the easiest way would be to take a camera module from some device, connect it to my robot project with a soldering iron, and make it work from the software point of view. I am planning to use something like an Arduino, an STM32 platform, or possibly an Intel Edison.
Maybe some camera modules are easier to solder and program for a custom project? Or shouldn't I look this way, and instead find a camera module that is specially designed for custom projects?
|
What is the difference between g-value and rhs-value of Lifelong Planning A* algorithm?
According to this link, D* Lite, g(s) directly corresponds to the g-values of an A* search, i.e. g(s) = g(s') + c(s',s), and rhs(s) is given as
$$
rhs(s) = \begin{cases}0 & s = s_{start} \\ \min_{s'\in Pred(s)}(g(s') + c(s', s)) & \text{otherwise} \end{cases}
$$
where Pred(s) denotes the set of predecessors of node s.
Thus, unless node s has more than one predecessor, its g-value and rhs-value will remain the same.
So, my question is, in which case will the rhs-value and g-value of a node be different?
|
I am new here, and I am new to neural networks as well. :P
I have gone through the concepts of neural networks, but I want to implement one in my project, which is based on the MSP430G2553 microcontroller on the LaunchPad series.
I am using some sensors, and I want to use some neural network code to process the data from the sensors to derive a threshold.
I went through this post and tried to implement the code from the link given, but it gives an error about insufficient RAM, which I guess is due to my MCU.
So I wanted some help regarding a neural network code or library for Energia that I should use.
Thanks in advance.
|
In propellers, as the airspeed increases, thrust decreases. Is the airspeed component taken as a vector quantity perpendicular to the propeller? If that's true, then it's quite easy to visualize in the case of airplanes, but for quadcopters would it be "copter_airspeed * sin(copter tilt)"?
|
I developed an anthropomorphic arm (aluminium structure) with 6 DOF (3 plus a spherical wrist) for direct kinematics.
I chose magnetic rotary encoders to measure the angles, but I am not satisfied with them, as they cause noise in the angle measurements.
What do you advise?
Should I add another sensor and perform sensor fusion?
Should I replace the magnetic encoders with optical ones?
Or... what else?
|
The following is the equation of the writhe matrix from the article Topology-based Representation (page 6).
What is the meaning of 'sign' in the second part of this equation? I am not sure if this is a typo in that article, as the other article, Hierarchical Motion Planning (page 3), completely neglects the 'sign[...]' term.
|
In the article on Topology-based Representation (page 12), the equation of the linear Gaussian system dynamics is given as
In the above equation, what is the meaning of the 'curly N' ($\mathcal{N}$)?
|
The paper Topology-based Representations for Motion Planning and Generalisation in Dynamic Environments with Interactions by Ivan et al. says on page 10 that the Approximate Inference Control (AICO) framework translates the robot dynamics to the graphical model by the following equation:
What does $p(x_{0:T}, u_{0:T})$ mean? I feel that p means 'prior of' some uncertain quantity, but I'm not sure about this.
|
I just got a new iRobot Create 2. I used to use an Element Direct BAM (Bluetooth Adapter Module) for iRobot Create previously.
How can I communicate with a Create 2 using Bluetooth? What accessories do I need?
|
I have some questions regarding an indoor positioning system (IPS) for an autonomous robot.
Configuration:
Mounting a camera to the ceiling of a room
Assume the room is a cube of 5mx5mx5m (LxWxH)
Assume the camera is Microsoft LifeCam Studio (CMOS sensor technology, Sensor Resolution: 1920 X 1080, 75° diagonal field of view, Auto focus from 0.1m to ≥ 10m, Up to 30 frames per second, Frequency Response: 100 Hz – 18 kHz)
A rover
Objectives:
By putting the rover in an unknown location (x,y) in the room, the system should localize the rover's position
After the rover's coordinates are known, navigation will be the next step
We want the rover to navigate from the known coordinates (x1,y1) (let's say point A) to another point B on the map (x2,y2)
Control signals will be sent to the rover's servos to complete the navigation task
Methodology:
Camera will capture the environment in real time
Environment will be represented as cells (occupancy grid mapping)
Assume each cell represents 5 cm in the environment
Rover will be localized by the system at point A
Determine the navigation to point B
Determine the path of the rover in the grid map (e.g. go x cells horizontally then y cells vertically)
Control signals will be sent to the rover's servos
Questions:
Can I use this camera for this task, or do I need another type of camera?
What are the factors affecting the system accuracy?
(e.g. sensor resolution, FOV, FPS, frequency response, height of the camera on the ceiling)
What is the most important factor to consider to increase the accuracy?
I would appreciate any opinions regarding the project
Kind regards,
Thank you
|
I'm wondering if there is a way to figure out the actual controllers used in commercial drones such as the AR drone and the Phantom. According to the AR drone SDK, users are not allowed to access the actual hardware of the platform; they are only capable of sending commands to and receiving data from the drone.
Edit:
I'm hoping to check the actual controller utilized in the software. When I fly the AR drone, it seems the platform can't stabilize itself when I perform aggressive maneuvers; therefore, I would guess that they use a linearized model, which lends itself to simple controllers such as PD or PID.
|
The article on topology-based representation (page 13, line 5) says that the topology-based representation is invariant to certain changes in the environment. That means the trajectory generated in the topology-based space will remain valid even if there are certain changes in the environment. But how is this possible? Is there a simple example to help understand this concept?
|
We are making a project in which we want to count the number of people entering and leaving a room with one single entrance. We are using IR sensors and detectors for this, along with an Arduino. We have a problem with this system: when two or more persons enter or leave the room at the same time, we get a wrong count. Thanks in advance for your valuable time. If there is any better way, please state it.
|
I'm trying to implement a PID control on my quadcopter using the Tiva C series microcontroller but I have trouble making the PID stabilize the system.
While I was testing the PID, I noticed a slow or weak response from the PID controller (the quad shows no response at small angles). In other words, it seems that the quad's angle has to be relatively large (above 15 degrees) for it to show any response. Even then, the response always overshoots, no matter what I and D gains I choose for my system. At low P, I can prevent overshoot, but then it becomes too weak.
I am not sure if the PID algorithm is the problem or if it's some kind of bad hardware configuration (low IMU sample rate or maybe bad PWM configuration), but I have strong doubts about my PID code, as I noticed that changing some of the gains did not improve the system response.
I would appreciate it if someone could point out whether I'm doing anything wrong in the PID snippet for the pitch component posted below. I also have a roll PID, but it is similar to the code posted, so I will leave it out.
void pitchPID(int16_t pitch_conversion)
{
    float current_pitch = pitch_conversion;
    //d_temp_pitch is a global variable
    //i_temp_pitch is a global variable
    float pid_pitch = 0; //pitch PID controller output
    float P_term, I_term, D_term;
    float error_pitch = desired_pitch - current_pitch;

    //if statement checks for pitch error in the negative or positive direction
    if ((error_pitch > error_max) || (error_pitch < error_min))
    {
        if (error_pitch > error_max) //negative pitch - rotors 3&4 speed up
        {
            P_term = pitch_kp * error_pitch; //proportional
            i_temp_pitch += error_pitch;     //accumulate error
            if (i_temp_pitch > iMax)
            {
                i_temp_pitch = iMax;
            }
            I_term = pitch_ki * i_temp_pitch;
            if (I_term < 0)
            {
                I_term = -1 * I_term;
            }
            D_term = pitch_kd * (d_temp_pitch - error_pitch);
            if (D_term > 0)
            {
                D_term = -1 * D_term;
            }
            d_temp_pitch = error_pitch; //store current error for next iteration
            pid_pitch = P_term + I_term + D_term;
            if (pid_pitch < 0)
            {
                pid_pitch = (-1) * pid_pitch;
            }
            //change rotors 3&4
            pitchPID_adjustment(pid_pitch, 'n'); //n for negative pitch
        }
        else if (error_pitch < error_min) //positive pitch - rotors 1&2 speed up
        {
            P_term = pitch_kp * error_pitch; //proportional
            i_temp_pitch += error_pitch;
            if (i_temp_pitch < iMin)
            {
                i_temp_pitch = iMin;
            }
            I_term = pitch_ki * i_temp_pitch;
            if (I_term > 0)
            {
                I_term = -1 * I_term;
            }
            D_term = pitch_kd * (d_temp_pitch - error_pitch);
            if (D_term < 0)
            {
                D_term = -1 * D_term;
            }
            d_temp_pitch = error_pitch;
            pid_pitch = P_term + I_term + D_term;
            if (pid_pitch < 0)
            {
                pid_pitch = (-1) * pid_pitch;
            }
            print(pid_pitch); //pitch
            printString("\r\n");
            //change rotors 1&2
            pitchPID_adjustment(pid_pitch, 'p'); //p for positive pitch
        }
    }
}

void pitchPID_adjustment(float pitchPIDcontrol, unsigned char pitch_attitude)
{
    //clamp the correction so the duty cycle cannot exceed its maximum
    if (pitchPIDcontrol > (maximum_dutyCycle - set_dutyCycle))
    {
        pitchPIDcontrol = maximum_dutyCycle - set_dutyCycle;
    }
    switch (pitch_attitude) {
        //change rotors 1&2
        case 'p': //positive status
            PWM0_2_CMPA_R += (pitchPIDcontrol); //(RED)    motor1
            PWM0_0_CMPA_R += (pitchPIDcontrol); //(Yellow) motor2
            break;
        //change rotors 3&4
        case 'n': //negative status
            PWM0_1_CMPA_R += pitchPIDcontrol;   //(ORANGE) motor3
            PWM1_1_CMPA_R += pitchPIDcontrol;   //(green)  motor4
            break;
    }
}
Also, can someone please tell me how this motor mixing works:
Front =Throttle + PitchPID
Back =Throttle - PitchPID
Left =Throttle + RollPID
Right =Throttle - RollPID
versus what I did in the function void pitchPID_adjustment(float pitchPIDcontrol, unsigned char pitch_attitude)?
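For comparison, my understanding of the quoted mixing scheme is that it uses one signed PID output per axis around a common throttle, with no sign flipping or 'p'/'n' direction flag (a sketch; names are placeholders):

// One signed pitch/roll PID output per axis; the sign of the output
// carries the correction direction, unlike pitchPID_adjustment() above.
void mixMotors(float throttle, float pitchPID, float rollPID,
               float& front, float& back, float& left, float& right) {
    front = throttle + pitchPID;
    back  = throttle - pitchPID;
    left  = throttle + rollPID;
    right = throttle - rollPID;
}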
|
I want to issue two slightly different drive commands. What is the smallest loop rate at which the Create accepts new commands?
I know from reading the documentation that the sensors appear to be read every 15 ms.
I am not sure what the command rate is.
|
I want to start designing an Arduino project that provides telemetry readings indicating tilt or angle of placement.
Would an accelerometer be the best choice for determining tilt? Are there good tutorials?
|
I'm trying to make decisions for motors on a robot build. I keep running across CIM Motors. What is a CIM Motor? Where does the designation CIM come from? What does CIM mean?
|
I am going through a paper, Kinematic Modelling and Simulation of a 2-R Robot Using SolidWorks and Verification by MATLAB/Simulink, which is about a 2-link revolute joint robotic arm. According to the paper, the trajectory analysis of the robot was done via simulations in MATLAB/Simulink.
It shows the following picture, Trajectory generation of 2‐R robot with MATLAB/Simulink:
and then, Simulink - Simulation block to calculate the trajectory:
I think this is done in SimMechanics, but I am not sure. Experienced users, can you please tell me what I am looking at and how I can reproduce it?
|
I'm trying to simulate a humanoid robot using Gazebo with plugins. Since our actual model uses Dynamixel motors, I'd like to know how exactly they work to make the simulation as realistic as possible.
Gazebo offers two options to control joints. One is a PID controller, provided by the JointController class. The other way is to directly set a torque to the joint. (The PID method too is ultimately implemented using torques).
Currently, I'm trying the PID-based implementation. I've used a P-only controller with damping on all joints (I had to guess both values). However, there is a large amount of noise, and the difference between the actual and desired position is at times as much as 10-12 degrees (especially when the foot of the robot hits the ground).
Does the actual motor use a PID controller as well? I can't seem to find the details in the first link, the Dynamixel EX-106 User's Guide, but the second link, Dynamixel EX-106+ Robot Actuator, mentions "Compliance/PID: Yes".
If the motor does use a PID controller, what are the parameters? And how does it then allow us to set the moving speed?
If the motor doesn't use a PID controller, then what is the pattern of torque provided? In the manual (first link), I found this
From the current position 200 to 491 ( 512-16-5=491 ), movement is made with appropriate torque to reach the set speed; from 491 to 507 ( 512-5=507 ), torque is continuously reduced to the Punch value; from 507 through 517 ( 512+5=517 ), no torque is generated.
This is rather vague though, and no further details are provided.
Also, I'm aware that extremely high damping and extremely high P-values might do the trick, but I want to simulate what actually happens in the motors, and that is probably not the way to go.
I'd appreciate it if anyone has any idea of what Dynamixel servos do, or examples of simulated Dynamixel motors anywhere else.
|
I'm trying to find the transfer function of a quadrotor with two controller loops, following the structure below:
I know how to calculate the attitude stability controller, which relates rotor speed and desired angles. However, it is not at all clear to me how to implement the translational controller transfer function, whose output is the desired angle the rotors must achieve given the position I want to move to.
Considering that the two controllers are PD, how can you calculate the translational controller transfer function and include it in the system? The time-domain equations in the outer loop are below, where the U terms relate to the thrust axis components. Thanks.
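For what it's worth, the relation I have seen used for this outer loop, under near-hover small-angle assumptions (sign conventions depend on the frame definitions), is $\ddot{x} \approx g\,\theta$ and $\ddot{y} \approx -g\,\phi$, so the PD position controller maps to attitude setpoints as
$$\theta_{des} \approx \frac{1}{g}\left(K_p\,e_x + K_d\,\dot{e}_x\right), \qquad \phi_{des} \approx -\frac{1}{g}\left(K_p\,e_y + K_d\,\dot{e}_y\right).$$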
|
There is a message system which does not appear to be documented in the OI spec. This appears to be a canonical, terminal-type serial interface over which messages such as the firmware version come back. I am not sure how to determine where this type of message ends. Is it a fixed number of line endings, or of bytes? One message seems to indicate STR730, which would be a 730-byte string.
The Open Interface spec seems to describe a non-canonical interface, in which you read a fixed number of bytes with no processing of line endings. Is this correct?
|
I have been looking for a cheap ultrasonic sensor that is not blind under +/-30 cm, but the only sensors I could find use the following shape, which is not suitable for my project (because the robot design only has 1 hole, not 2):
Is there any chance of finding a sensor with that other shape, with a range starting around 5 cm?
Actually, I am wondering if that 2nd shape makes this constraint mandatory, or if I just did not find the appropriate product.
|
I want to convert an electric ATV (quad) for kids (like the HIGHPER ATV-6E) to radio control for a robotics project. These small ATVs are about a meter long and weigh about 40 kg. I need to choose servo motors for steering and braking. What grade of servos do I need, and how much torque do they need to have? Can I use the strongest RC servo I can find (like this 115 kg/cm one, or maybe even more, with metal gears of course) or do I need an "industrial-grade" servo?
I plan to use one servo for steering and one for braking. For braking, the ATV has mechanical disc brakes: two discs in the front and one common disc in the rear (there are two brake levers, front/rear). I plan to use only one servo, for either the front or the rear. The plan is to mount the brake wire to the servo, which would "simulate" the lever movement.
I guess I could also make a "weak" servo stronger by adding a proper gear train, but I am not really into mechanical engineering much and would prefer an off-the-shelf component.
|
Is it possible to downgrade from ROS Jade to Indigo?
For those who are not yet familiar with Robot Operating System (ROS), here: ROS
|
In the D* Lite algorithm (Figure 3, page 4 of the D* Lite paper), main() starts in line 21 by defining $s_{last}=s_{start}$. But the value of $s_{last}$ is never updated in the entire algorithm.
So what is the purpose of defining this term and what does it mean?
|
I would like to mechanically measure the distance a kids' electric ATV travels. The ATV will not be used with kids but as a mobile robot instead. It has a common rear axle for both rear wheels, which I think could be a good place to put an odometer (since the chance that both wheels slip should be minimal). Regarding suspension, it has a single shock for the rear axle.
My plan is to put a bigger gear on the axle itself and then mesh a smaller gear with it, on which some kind of sensor would measure the number of rotations. One rotation of the axle may be something like 20 rotations of the small gear. What kind of sensor can I use for sensing rotation?
Another way of making an odometer may be some kind of optical solution (a disc with holes and an optical sensor), but this seems rather complicated, and the direction of travel could not be easily estimated (unless the motor is known to be running in a given direction).
I just found a term called Wheel Speed Sensor, which looks interesting and seems to employ primarily non-contact sensing (which is definitely better than mechanical gears). Rather than an optical solution, I like the Hall effect sensor solution, which may be simple and mechanically robust. But still, my question remains open on how to implement this...
I would like to use the odometer for both speed estimation and distance estimation. I need to read the sensor from C/C++ on a Linux box.
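To illustrate the kind of reading I have in mind, here is a rough, untested sketch using the legacy sysfs GPIO interface (the GPIO number, pulses per axle revolution and wheel circumference are all assumptions, and the pin must already be exported with its edge set to rising):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const double pulsesPerRev = 20.0; // small gear turns 20x per axle revolution (assumed)
    const double wheelCircum  = 0.9;  // metres travelled per axle revolution (assumed)

    // e.g. prepared beforehand with:
    //   echo 17 > /sys/class/gpio/export
    //   echo rising > /sys/class/gpio/gpio17/edge
    int fd = open("/sys/class/gpio/gpio17/value", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    long pulses = 0;
    char buf[8];
    struct pollfd pfd = { fd, POLLPRI | POLLERR, 0 };
    for (;;) {
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, sizeof(buf)); // consume the current value so poll() blocks
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI)) {
            ++pulses; // one rising edge from the Hall sensor
            double dist = pulses / pulsesPerRev * wheelCircum;
            printf("pulses=%ld distance=%.2f m\n", pulses, dist);
        }
    }
}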
EDIT: The thing I am looking for is probably correctly called a rotary encoder or a wheel encoder.
The ATV may look like one of these:
|
I am having an issue with some hand-eye calibration.
I am using a simple robot which has a stereo camera mounted at its tool point.
I want to perform some visual servoing/tracking based on the stereo images extracted from the camera in the "hand". The camera provides me with the x, y, z coordinates of the object I want to track.
I can at all times extract a homogeneous transformation matrix from base to tool (not cam) as T_tool_base.
Firstly... I guess I would need to perform some form of robot-to-camera (and vice versa) calibration. My idea was that it would consist of something like this:
T_base_world = (T_base_tool) (T_tool_cam) (T_cam_world)
where T_tool_cam would entail the calibration... Since the camera is at the tool point, does that mean T_tool_cam should contain how much the camera is displaced from the tool point, and how it is rotated relative to the tool point? Or is it not like that?
Secondly... how do I, based purely on x, y, z coordinates, make a homogeneous transformation matrix which includes a rotation matrix?
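To make the second question concrete, my working assumption is that with only a translation known, the rotation block can be left as the identity (whether this is valid is part of what I am asking):

T_cam_world = $\begin{bmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{bmatrix}$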
Thirdly... having the desired transformation matrix, which in theory
T_base_world = (T_base_tool) (T_tool_cam) (T_cam_world)
would provide me, would an inverse kinematics solver give me one solution or multiple solutions? In theory, should this give me only one, or what?
|
In the optimized D* Lite algorithm shown in the figure below (page 5 of the D* Lite paper), when the procedure ComputeShortestPath() is called for the first time in line 31, U (the list of inconsistent vertices) contains only the goal vertex ($s_{goal}$). Thus, in the procedure ComputeShortestPath() (lines 10-28), $u = s_{goal}$. And since $k_{old}=k_{new}$ (because $k_m=0$), the condition $k_{old}\leq k_{new}$ is satisfied and $u = s_{goal}$ is inserted in U again with the same value $k_{old}=k_{new}$. Thus, it seems that lines 11-15 will run forever, and the algorithm will never be able to find the shortest path from goal to start.
I know that this algorithm has been widely used and I am failing to understand it. But where am I going wrong?
|
I'm working on an autonomous quadcopter. I have two GPS coordinates (source and destination). I need to move my quad from the source to the destination; for this I need to calculate the heading and set the yaw value of my quad. How can I calculate the heading, and make sure the quad stays headed in the right direction towards the target coordinates?
If I use a magnetometer, the declination angle will vary from place to place, so I will have to keep changing the declination angle. If I calculate based on just the GPS coordinates, it's not accurate.
What is the best way to do this? How do I calculate the above?
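For reference, this is the standard great-circle initial-bearing (forward azimuth) formula I have found so far, as an untested sketch:

#include <cmath>

// Returns the initial bearing in degrees from north (0..360);
// inputs are in decimal degrees.
double initialBearing(double lat1, double lon1, double lat2, double lon2) {
    const double kPi = 3.14159265358979323846;
    const double d2r = kPi / 180.0;
    double phi1 = lat1 * d2r, phi2 = lat2 * d2r;
    double dLon = (lon2 - lon1) * d2r;
    double y = sin(dLon) * cos(phi2);
    double x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLon);
    double brng = atan2(y, x) / d2r;   // -180..180
    return fmod(brng + 360.0, 360.0);  // normalize to 0..360
}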
|
I have been stuck on this for weeks; I really hope that someone can help me with this, thank you in advance.
I am trying to write an IMU attitude estimation algorithm using quaternion kalman filter. So based on this research paper: https://hal.archives-ouvertes.fr/hal-00968663/document, I have developed the following pseudo code algorithm:
Predict Stage:
Qk+1/k = Ak * Qk; where Ak contains the gyro measurement.
Pk+1/k = Ak * Pk * Ak.transpose() + Q; where Q is assumed to be zero.
After prediction, we can use this formula to get the supposed gravity measurement of the accelerometer, Yg, in the body frame:
Yg = R * G; // R is the rotation matrix generated from quaternion Qk+1/k, and G = (0, 0, 9.81).
This equation then translates to the following equation, which allows me to get the measurement model matrix H:
H * Qk+1/k = 0; // where H stores values related to (Yg - G).
Update Stage:
K = P * H.transpose() * (H * P * H.transpose() + R)^(-1); // R should be adaptively adjusted, but right now it is initialized as the identity matrix
Qk+1/k+1 = (I - K*H) * Qk+1/k;
Qk+1/k+1 = Qk+1/k+1 / |Qk+1/k+1|; // normalize the quaternion
Pk+1/k+1 = (I - K*H) * Pk+1/k;
The following is the main part of my code. The complete C++ code is at here https://github.com/lyf44/fcu if you want to test.
// Returns a*I + skew([b, c, d]) - the 3x3 block used for quaternion multiplication
Matrix3f skew_symmetric_matrix(float a, float b, float c, float d){
Matrix3f matrix;
matrix << a,d*(-1),c,
d,a,b*(-1),
c*(-1),b,a;
return (matrix);
}
void Akf::state_transition_matrix(float dt,float gx,float gy, float gz){
Vector3f tmp;
tmp(0) = gx*PI/180;
tmp(1) = gy*PI/180;
tmp(2) = gz*PI/180;
float magnitude = sqrt(pow((float)tmp(0),2)+pow((float)tmp(1),2)+pow((float)tmp(2),2));
/*q(k+1) = | cos(|w|*dt/2) | quaternion_multiply q(k)
| w/|w|*sin(|w|*dt/2) |
*/
//w/|w|*sin(|w|*dt/2)
tmp = tmp/magnitude*sin(magnitude*dt/2);
//quaternion multiplication
A(0,0) = cos(magnitude*dt/2);
A.block<3,1>(1,0) = tmp;
A.block<1,3>(0,1) = tmp.transpose()*(-1);
Matrix3f skew_symmetric;
skew_symmetric = skew_symmetric_matrix((float)A(0,0),(float)tmp(0),(float)tmp(1),(float)tmp(2));
A.block<3,3>(1,1) = skew_symmetric;
}
void Akf::observation_model_matrix(Vector3f meas){
Vector3f G;
Vector3f tmp;
G << 0,0,9.81;
/* H = | 0 -(acc-G).transpose |
* | (acc-G) -(acc+G).skewsymmetric |
*/
tmp = meas-G;
H(0,0) = 0;
H.block<3,1>(1,0) = tmp;
H.block<1,3>(0,1) = tmp.transpose()*(-1);
tmp = tmp+G+G;
Matrix3f matrix;
matrix = skew_symmetric_matrix(0,(float)tmp(0),(float)tmp(1),(float)tmp(2));
H.block<3,3>(1,1) = matrix*(-1);
//H = H*(0.5);
cout<<"H"<<endl;
cout<<H<<endl;
cout<<"H*X"<<endl;
std::cout<<H*X<<std::endl;
}
void Akf::setup(){
X_prev = Vector4f::Zero(4,1);
X_prev(0) = 1;
Q = Matrix4f::Zero(4,4);
Z = Vector4f::Zero(4,1);
R = Matrix4f::Identity(4,4);
P_prev = Matrix4f::Identity(4,4);
P_prev = P_prev*(0.1);
I = Matrix4f::Identity(4,4);
sum = Vector4f::Zero(4,1);
noise_sum = Matrix4f::Zero(4,4);
counter=1;
}
void Akf::predict_state(){
cout<<(60*counter%360)<<endl;
X = A*X_prev;
A_T = A.transpose();
P = A*P_prev*A_T+Q;
}
void Akf::update_state(){
Matrix4f PH_T;
Matrix4f tmp;
PH_T = P*H.transpose();
S = H*PH_T+R;
if (S.determinant()!= 0 )
{
tmp = S.inverse();
K = PH_T*tmp; // K = P*H^T*S^(-1); use PH_T computed above (P*H flips the sign, since H here is antisymmetric)
//std::cout<<"K"<<std::endl;
//std::cout<<K<<std::endl;
X_updated = (I-K*H)*X;
X_updated = X_updated /(X_updated.norm());
P_updated = (I-K*H)*P;
}
else{
X_updated = X;
std::cout<< "error-tmp not inversible!"<<std::endl;
}
X_prev = X_updated;
P_prev = P_updated;
}
// Builds R from the conjugate of q, i.e. maps world-frame (NED) vectors into the body frame
void rotation_matrix(Vector4f q,Matrix3f &rot_matrix){
int i;
for (i=1;i<4;i++){
q(i) = q(i)*(-1);
}
Matrix3f matrix;
matrix(0,0) = pow((float)q(0),2)+pow((float)q(1),2)-pow((float)q(2),2)-pow((float)q(3),2);
matrix(0,1) = 2*(q(1)*q(2)-q(0)*q(3));
matrix(0,2) = 2*(q(0)*q(2)+q(1)*q(3));
matrix(1,0) = 2*(q(1)*q(2)+q(0)*q(3));
matrix(1,1) = pow((float)q(0),2)-pow((float)q(1),2)+pow((float)q(2),2)-pow((float)q(3),2);
matrix(1,2) = 2*(q(2)*q(3)-q(0)*q(1));
matrix(2,0) = 2*(q(1)*q(3)-q(0)*q(2));
matrix(2,1) = 2*(q(0)*q(1)+q(2)*q(3));
matrix(2,2) = pow((float)q(0),2)-pow((float)q(1),2)-pow((float)q(2),2)+pow((float)q(3),2);
rot_matrix = matrix;
}
Vector3f generate_akf_random_measurement(Vector4f state){
int i;
//compute quaternion rotation matrix
Matrix3f rot_matrix;
rotation_matrix(state,rot_matrix);
//rot_matrix*acceleration in NED = acceleration in body-fixed frame
Vector3f true_value = rot_matrix*G;
std::cout<<"true value"<<std::endl;
std::cout<<true_value<<std::endl;
for (i=0;i<3;i++){
    noisy_value(i) = true_value(i) + (-1) + (float)(rand()/(float)(RAND_MAX/2)); // uniform noise in [-1, 1]
}
return (noisy_value);
}
int main(){
    float gx, gy, gz, dt;
    gx = 60; gy = 0; gz = 0; // for testing, rotate around the x axis by 60 degrees per iteration
    dt = 1.0f;               // elapsed time per iteration in seconds (was left uninitialized)
    Akf myakf;
    myakf.setup();
    Vector4f q;
    while (true) { // one iteration per gyro/accelerometer sample
        myakf.state_transition_matrix(dt, gx, gy, gz);
        myakf.predict_state();
        Vector4f state = myakf.get_predicted_state();
        Vector3f meas = generate_akf_random_measurement(state);
        myakf.observation_model_matrix(meas);
        myakf.measurement_noise();
        myakf.update_state();
        q = myakf.get_updated_state();
    }
}
The problem I face is that my code does not work. The prediction stage works fine, but the updated quaternion state is only correct for the first few iterations and then starts to drift away from the correct value. I have checked my code against the research paper multiple times and ensured that it is in accordance with the algorithm proposed there.
In my test, I am rotating around the x axis by 60 degrees per iteration. The number below "started" is the angle of rotation; "state" and "updated state" are the predicted and updated quaternions respectively, while "true value", "meas" and "result" are the acceleration due to gravity in the body frame. As the test results indicate, everything is way off after rotating 360 degrees.
The following is my test result:
1
started
60
state
0.866025
0.5
0
0
true value
0
8.49571
4.905
meas
0.314533
7.97407
4.98588
updated state
0.866076
0.499913
-2.36755e-005
1.56256e-005
result
0.000555564
8.49472
4.90671
1
started
120
state
0.500087
0.865975
-2.83164e-005
1.69446e-006
true value
0.000306622
8.4967
-4.90329
meas
-0.532868
8.79841
-4.80453
updated state
0.485378
0.862257
-0.129439
-0.064549
result
0.140652
8.37531
-5.10594
1
started
180
state
-0.0107786
0.989425
-0.0798226
-0.12062
true value
-2.35843
-0.0203349
-9.52226
meas
-1.39627
-0.889284
-8.74243
updated state
-0.0195091
0.981985
-0.151695
-0.110965
result
-2.19598
-0.0456112
-9.56095
1
started
240
state
-0.507888
0.840669
-0.0758893
-0.171946
true value
-3.59229
-8.12105
-4.16894
meas
-4.52356
-7.73113
-4.98735
updated state
-0.53758
0.811101
-0.212643
-0.0889171
result
-3.65783
-8.18397
-3.98485
1
started
300
state
-0.871108
0.433644
-0.139696
-0.183326
true value
-3.94732
-6.909
5.73763
meas
-4.36385
-6.98853
5.39759
updated state
-0.86404
0.436764
-0.102296
-0.228487
result
-3.69216
-6.94565
5.86192
1
started
0
state
-0.966663
-0.0537713
0.0256525
-0.249024
true value
0.749243
0.894488
9.74036
meas
-0.194541
0.318586
10.1868
updated state
-0.78986
-0.0594022
0.0311688
-0.609607
result
1.1935
0.547764
9.72171
1
started
60
state
-0.654338
-0.446374
0.331797
-0.512351
true value
8.74674
2.39526
3.74078
meas
9.36079
2.96653
3.57115
updated state
-0.52697
-0.512048
0.221843
-0.64101
result
8.73351
2.50411
3.70018
Can someone help me confirm that my understanding of the theory of this quaternion Kalman filter and my pseudo code are correct? Also, if anyone has implemented attitude estimation using a different version of a quaternion Kalman filter, I would greatly appreciate it if you could provide pseudo code and a little explanation.
Thank you guys very much!
|
Assume that I have a rigid body for which I know that it can rotate with respect to a global reference frame (which is considered fixed and already given) for only a few degrees of angle, so I can describe its rotation by using the small angle approximation. For this system, I would like to know if there is a rotation representation that offers more accuracy when compared with other representation methods.
The main representation methods that I have considered are Euler angles and the pitch-yaw-roll transformation. My intuition is that the pitch-yaw-roll representation should be more accurate, since all the angles are expressed with respect to the initial coordinate frame. On the other hand, Euler angles are defined on different (intermediate) frames, so I am not sure whether the resulting angles will really be small.
To sum up, I know that the body can rotate by only a few degrees, and I would like to know which coordinate representation is most likely to deliver the smallest angles, so that the small angle approximation is most valid.
It could also be the case that there is no general answer (so it depends on the specific configuration), but still I haven't found anything about this topic in the related literature!
Edit: This question is not related to numerical issues. Therefore, it is assumed that all the possible rotation matrix descriptions (Euler, PYR etc.) result in the same, exact coordinate vector. The question is whether there exists a parametrization that is composed of the smallest possible angles.
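To make the criterion concrete, by the small angle approximation I mean the first-order form

$R \approx I + \begin{bmatrix} 0 & -\theta_z & \theta_y \\ \theta_z & 0 & -\theta_x \\ -\theta_y & \theta_x & 0 \end{bmatrix}$

which, for the pitch-yaw-roll representation, holds to first order with $(\theta_x,\theta_y,\theta_z)$ being the roll, pitch and yaw angles; so the question is which parametrization keeps the neglected higher-order terms smallest.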
Example (no small angle approx used): Assume I have a coordinate frame which describes a point in space by the following vector
$P2=\begin{bmatrix} 4 \\ 1 \\ 0.05 \end{bmatrix}$.
Given another coordinate frame which is rotated with respect to the previous one, the description of the same point is given by
$P1=\begin{bmatrix} 3.8933 \\
1.3566 \\
-0.0630 \end{bmatrix}$.
Using Euler angles, I find that the rotation matrix $R_{euler}$ is characterized by the angles $0.1, 0.2, 0.1$ rad, which correspond to the rotation around the z axis, the rotation around the resulting y axis, and the rotation around the resulting z axis, respectively (this is standard material, explained in many books). So I have $P1=R_{euler} P2$.
Now I want to find the corresponding rotation matrix using the pitch-yaw-roll representation. Here I have to solve an optimization problem, and the solution that I get (the maximum error between P1 and the estimated P1 is $3 \times 10^{-8}$) delivers the following angles
$\begin{bmatrix} -0.0103 \\ 0.0257 \\ 0.0902\end{bmatrix}$,
which correspond to the rotation around the x,y and z axis of the initial coordinate frame.
|
I have assembled a 4WD car using kits I bought on ebay.
I have 4 motors similar to this one:
The description says:
Operating voltage: 3V~12VDC
(recommended operating voltage of about 6 to 8V)
Maximum torque: 800gf cm min (3V)
No-load speed: 1:48 (3V time)
The load current: 70mA (250mA MAX) (3V)
This motor with EMC, anti-interference ability.
The microcontroller without interference.
Size: 7x2.2x1.8cm(approx)
I am not too happy with the max speed I can reach, but I could provide more power, because I have a 12 V 2 A battery onboard.
So far I have used 6V, because that seemed to be the safer voltage choice.
Has anybody successfully tried higher voltages without wearing the motor down in a few hours (I've read this can happen)?
Alternatively, can someone recommend replacement motors that would reliably tolerate a higher power envelope?
I would like to preserve the gearbox and replace only the motor, if possible.
I think I could fit a motor 2-4 mm longer (replacing the transparent strap which bonds it to the gearbox), if that makes any difference.
BTW, I'm making the assumption:
higher_voltage => higher_torque => higher_speed
but I'm not sure it's overall correct.
I expect that it would at least produce higher acceleration during the transients.
|
I am trying to implement a path planner to generate a path that moves the robot from q_start to q_goal.
Q_goal is extracted from a stereo camera mounted on the tool, from which I extract the x, y, z coordinates of the desired position; the rotation can be arbitrary.
The robot I am using is an industrial UR5 robot arm; the software I use is capable of performing Jacobian-based inverse kinematics given a transformation matrix with rotation and translation.
My inverse kinematics provides me with only one solution, which is OK, but it doesn't give me any flexibility for path planning...
How do I, using inverse kinematics, determine all possible q-configurations that fulfill my criterion of having the desired x, y, z coordinates?
|
I'm trying to implement a path-following algorithm based on MPC (Model Predictive Control), found in this paper: Path Following Mobile Robot in the Presence of Velocity Constraints
Principle: Using the robot model and the path, the algorithm predicts the behavior of the robot over N future steps in order to compute a sequence of commands $(v,\omega)$ that lets the robot follow the path without overshooting the trajectory, slowing down before a sharp turn, etc.
$v:$ Linear velocity
$\omega:$ Angular velocity
The robot: I have a non-holonomic robot like this one (Image extracted from the paper above) :
Here is my problem: before implementing this on the mobile robot, I'm trying to compute the needed matrices (using Matlab) to test the efficiency of this algorithm. At the end of the matrix computation, some of them have a dimension mismatch.
What I did:
For those interested, this calculation is from §4 (4.1, 4.2, 4.3, 4.4), pp. 6-7 of the paper.
4.1 Model
$z_{k+1} = Az_k + B_\phi\phi_k + B_rr_k$ (18)
with:
$A = \begin{bmatrix} 1 & Tv \\ 0 & 1 \end{bmatrix}$
$B_\phi = \begin{bmatrix} {T^2\over2}v^2\\ Tv \end{bmatrix}$
$B_r = \begin{bmatrix} 0 & -Tv \\ 0 & 0 \end{bmatrix}$
$T$: sampling period
$v$: linear velocity
$k$: sampling index (i.e. $t= kT$)
$z_k:$ the state vector $z_k = (d_k, \theta_k)^T$, the position and angle difference to the reference path
$r_k:$ the reference vector $r_k = (0, \psi_k)^T$, where $\psi_k$ is the reference angle of the path at step k
4.2 Criterion
The predictive receding horizon controller is based on a minimization of the criterion
$J= \sum^N_{n=0} (\hat{z}_{k+n} - r_{k+n})^T Q(\hat{z}_{k+n} - r_{k+n}) + \lambda\phi^2_{k+n}$, (20)
Subject to the inequality constraint
$ P\begin{bmatrix} v_n \\ v_n\phi_n \end{bmatrix} \leq q,$
$n=0,..., N,$
where $\hat{z}$ is the predicted output, $Q$ is a weight matrix, $\lambda$ is a scalar weight, and $N$ is the prediction horizon.
4.3 Predictor
An n-step predictor $\hat{z}_{k+n|k}$ is easily found by iterating (18). Stacking the predictions $\hat{z}_{k+n|k}, n = 0,\ldots,N$ in the vector $\hat{Z}$ yields
$\hat{Z} = \begin{bmatrix} \hat{z}_{k|k} \\ \vdots \\ \hat{z}_{k+N|k}\end{bmatrix} = Fz_k + G_\phi\Phi_k + G_rR_k$ (22)
with
$\Phi_k = \begin{bmatrix} \phi_k, \ldots, \phi_{k+N}\end{bmatrix}^T$,
$R_k = \begin{bmatrix} r_k, \ldots, r_{k+N}\end{bmatrix}^T$,
and
$F = \begin{bmatrix}I & A & \ldots & A^N \end{bmatrix}^T$
$G_i = \begin{bmatrix} 0 & 0 & \ldots & 0 & 0 \\ B_i & 0 & \ldots & 0 & 0 \\ AB_i & B_i & \ddots & \vdots & \vdots \\ \vdots & \ddots & \ddots & 0 & 0 \\ A^{N-1}B_i & \ldots & AB_i & B_i & 0 \end{bmatrix}$
where index $i$ should be substituted with either $\phi$ or $r$
4.4 Controller
Using the N-step predictor (22) simplifies the criterion (20) to
$J_k = (\hat{Z}_k - R_k)^T I_q (\hat{Z}_k - R_k) + \lambda\Phi^T_k\Phi_k$, (23)
where $I_q$ is a diagonal matrix of appropriate dimension with instances of Q in the diagonal. The unconstrained controller is found by minimizing (23) with respect to $\Phi$:
$\Phi_k = -L_zz_k - L_rR_k$, (24)
with
$L_z = (\lambda + G^T_w I_q G_w)^{-1}G^T_w I_q F$
$L_r = (\lambda + G^T_w I_q G_w)^{-1}G^T_w I_q (G_r - I)$
I'm trying to compute $\Phi_k = -L_zz_k - L_rR_k$, but the dimensions of $L_r$ and $R_k$ do not match for matrix multiplication.
Parameters are :
$T=0.1s$
$N=10$
$\lambda=0.0001$
$Q=\begin{bmatrix} 1 & 0 \\ 0 & \delta \end{bmatrix}$ with $\delta=0.02$
I get :
$R_k$ a (11x2) matrix (N+1 elements of size 2x1, transposed)
$G_w$ a (22x11) matrix
$G^T_w$ a (11x22) matrix
$I_q$ a (22x22) matrix
$F$ a (22x2) matrix
$G_r$ a (22x22) matrix
so the $L_z$ computation gives (according to the matrix sizes)
$L_z=(scalar + (11x22)(22x22)(22x11))^{-1} (11x22)(22x22)(22x2)$
an (11x2) matrix.
as $z_k$ is (2x1) matrix, doing $L_zz_k$ from (24) is fine.
and the $L_r$ computation gives (according to the matrix sizes)
$L_r=(scalar + (11x22)(22x22)(22x11))^{-1} (11x22)(22x22)((22x22) - (22x22))$
an (11x22) matrix.
as $R_k$ is an (11x2) matrix, doing $L_rR_k$ from (24) is not possible:
I have an (11x22) matrix multiplied by an (11x2) matrix.
I'm sure I'm missing something big here, but I am unable to see what exactly.
Any help appreciated.
Thanks
|
What are the main differences between electric motor and internal combustion engine for an ATV-sized mobile robot platform in terms of functionality, implementation difficulty ("RC" conversion, "electronic" operation), durability and maintenance when used as an autonomous platform? A full sized ATV/UTV like Polaris Ranger (EV) is in question.
Are the advantages/disadvantages basically the same as the differences between electric and nitro RC cars, or does the bigger scale add something important to the game? I can think of the main differences, like bigger range and faster "refueling" with IC and less maintenance with electric, but I am interested in a detailed comparison.
The transmission for the IC engine is considered to be automatic.
EDIT: The fuel injection for the IC engine is considered to be electronic (EFI), but I do not know whether that also implies an "electronic" throttle (no mechanical wire, as with a carburetor?). Whatever the throttle may be, I see the lag between its actuation and the engine revving up to higher RPM and giving more power/speed as the main disadvantage of IC control - however, it can probably be dealt with quite easily in software (by adding some timeout when checking the desired RPM).
|
I am uncertain about how to compute the right homogeneous transformation matrix from which to compute an inverse kinematics Q-configuration.
Looking at a robot like this,
where at the end of the robot I have a camera mounted onto it:
The purpose of my application is to make the robot follow an object, basically tracking it. The camera provides me with an X, Y, Z coordinate, which is the position at which I want to place my robot arm.
First question - how do I set up the desired homogeneous transformation matrix?
The way I see it, I have 2 transformation matrices, T_tool_base and T_world_tool, which become T_world_base = (T_tool_base)(T_world_tool).
My question is how I compute my desired transformation matrix.
I think I know how I should set up the transformation matrix for the camera, which would be like this:
T_world_tool = 0 0 0 x
               0 0 0 y
               0 0 0 z
               0 0 0 1
(Second question: regarding the rotation matrix, how do I prescribe it such that the rotation is arbitrary, as long as the endpoint has the desired position in the world frame?)
But what should T_tool_base entail? Should it entail the transformation of the current state or the desired transformation? And if so, how do I extract the desired T_tool_base transformation?...
|
I have some data obtained from an experiment in terms of movements and observations, with odometry and sensor data. My task is to find the probability mass on each of the grid cells after each set of motions and observations. I'm a bit lost in figuring out how to compute the probability mass for each grid cell.
My odometry information is in terms of rotation, translation and rotation, and my sensor information is in terms of range and bearing angle.
How do I calculate the probability of the robot being present in each grid cell?
I have the formula for the belief after motion: $\overline{Bel}(x) = \sum_{x'} P(x \mid u, x')\,Bel(x')$
How do I compute the motion model with noise?
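To illustrate my current understanding of the belief formula above, here is a minimal sketch on a 1D grid (my own simplification; the real problem is 2D with a rotation-translation-rotation odometry model, and the noise probabilities below are assumptions):

#include <vector>
#include <cstdio>

int main() {
    std::vector<double> bel = {0.0, 1.0, 0.0, 0.0, 0.0}; // robot known to be in cell 1
    // assumed noise model: a commanded move of +1 cell undershoots or
    // overshoots by one cell with probability 0.1 each
    const double pUnder = 0.1, pExact = 0.8, pOver = 0.1;

    std::vector<double> belNew(bel.size(), 0.0);
    for (size_t x = 0; x < bel.size(); ++x) {       // Bel'(x) = sum_x' P(x|u,x') Bel(x')
        for (size_t xp = 0; xp < bel.size(); ++xp) {
            double p = 0.0;
            if (x == xp)     p = pUnder;  // did not move
            if (x == xp + 1) p = pExact;  // moved one cell
            if (x == xp + 2) p = pOver;   // moved two cells
            belNew[x] += p * bel[xp];
        }
    }
    for (double b : belNew) printf("%.2f ", b);     // expect: 0.00 0.10 0.80 0.10 0.00
    printf("\n");
}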
|
Good day,
I would like to ask why, when I add yaw control to my PID controller for each motor, the quadcopter refuses to take off or maintain its altitude. I am currently using a cascaded PID controller for attitude hold, using an accelerometer, a magnetometer and a gyroscope, and a 40 Hz ultrasonic sensor for altitude hold. Since the scope is indoors, I have done away with the barometer due to its ±12 m error.
Resulting Response
Without Yaw Control, the plot below shows the response of the quadrotor.
With Yaw Control, the plot below shows the response of the quadrotor.
Debugging
I found out that the outputs of the individual PIDs are each so high that, when summed together, they go way over the PWM limit of 205, i.e. full throttle.
Without yawPID contribution
The limiter kicks in without damaging the desired response of the system, so it is still able to fly, albeit with oscillatory motion along the z axis (height).
With yawPID contribution
The added yaw component increases the sum of the PIDs way above the limit, so the limiter compensates for the excess too much, resulting in an overall lower PWM output for all motors; thus the quad never leaves the ground.
//Motor Front Left (1)
float motorPwm1 = pitchPID + rollPID + yawPID + baseThrottle + baseCompensation;
//Motor Front Right (2)
float motorPwm2 = pitchPID - rollPID - yawPID + baseThrottle + baseCompensation;
//Motor Back Left (3)
float motorPwm3 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation;
//Motor Back Right (4)
float motorPwm4 = -pitchPID - rollPID + yawPID + baseThrottle + baseCompensation;
Background
The PID parameters for pitch, yaw and roll were tuned individually, meaning the base throttle was set to the minimum value required for the quadcopter to be able to lift itself.
The PID parameters for the altitude controller were tuned with the other controllers (pitch and roll) active.
Possible Problem
Limiter algorithm
A possible problem is that the algorithm I used to limit the maximum and minimum throttle values may have caused the problem. The following code is used to maintain the ratios of the motor values instead of clipping them, and works as a two-stage limiter. In the 1st stage, if one motor PWM value is less than the set baseThrottle, the algorithm increases each motor PWM value until none of them is below it. In the 2nd stage, if one motor PWM value is more than the set maxThrottle, the algorithm decreases each motor PWM value until none of them is above it.
//Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm that gets fed into the gpioPWM() function to avoid exceeding the earlier set maximum throttle while maintaining the ratios of the 4 motor throttles.
float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float minPWM = motorPWM[0];
int i;
for(i=0; i<4; i++){ // Get minimum PWM for filling
if(motorPWM[i]<minPWM){
minPWM=motorPWM[i];
}
}
cout << " MinPWM = " << minPWM << endl;
if(minPWM<baseThrottle){
float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors
cout << " Fill = " << fillPwm << endl;
motorPwm1=motorPwm1+fillPwm;
motorPwm2=motorPwm2+fillPwm;
motorPwm3=motorPwm3+fillPwm;
motorPwm4=motorPwm4+fillPwm;
}
float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float maxPWM = motorPWM2[0];
for(i=0; i<4; i++){ // Get max PWM for trimming
if(motorPWM2[i]>maxPWM){
maxPWM=motorPWM2[i];
}
}
cout << " MaxPWM = " << maxPWM << endl;
if(maxPWM>maxThrottle){
float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors
cout << " Trim = " << trimPwm << endl;
motorPwm1=motorPwm1-trimPwm;
motorPwm2=motorPwm2-trimPwm;
motorPwm3=motorPwm3-trimPwm;
motorPwm4=motorPwm4-trimPwm;
}
This was adapted from Pixhawk. The difference is that they employ only upper-bound compensation limiting, while mine also performs lower-bound compensation limiting, which may cause more saturation once it reaches the second stage.
From:https://pixhawk.org/dev/mixing
Gains are set too high.
It is also possible that I've set my P gains too high, exceeding the max RPM limit of the motors and causing the limiter algorithm to overcompensate.
Current PID Settings:
The minimum motor value for the quad to lift itself is 160, while the maximum limit is 200, from the PWM time high of 2000 µs.
Pitch (Cascaded P-PID controller)
Rate P = 0.07
Rate I = 0.03
Rate D = 0.0001
Stabilize P = 2
Roll (Cascaded P-PID controller)
Rate P = 0.09
Rate I = 0.03
Rate D = 0.0001
Stabilize P = 2
Yaw (Cascaded P-PID controller)
Rate P = 0.09
Rate I = 0.03
Rate D = 0.0001
Stabilize P = 2
Hover (Single loop PD controller)
P = 0.7
D = 35
Possible Solution
I think I have set the PID parameters, particularly the P or D gains, too high, so that the computed sum of the controller outputs goes beyond the limit. Maybe retuning them would help.
I would just like to ask if anyone has encountered this problem or if you have any suggestions. Thank you :)
EDIT
I have added the plots of the response when the control loop is fast (500 Hz) and slow (300 Hz):
500 Hz: does not fly
300 Hz: flies
|
I have trouble estimating the heading when close to the "pivot" point of the compass, and could use some input on how to solve it. I set my angles up to be 0-360 degrees during testing, but will be using radians $(-\pi, \pi)$ from now on.
The setup is a differential robot with IMU, wheel encoders and a magnetic compass.
A complementary filter is used to fuse the gyro z-axis and odometry measurements to estimate the heading, which is then corrected with a Kalman filter using the magnetic compass.
My problem occurs when the robot heading is close to $\pm\pi$.
The estimated heading is useless even though the robot is not even moving.
I am thinking this must be a very common problem and probably has a better solution than what I came up with, which was either re-initializing the integrator when crossing zero, adding 180 degrees whenever the error gets large, or just ignoring the compass if the error is too large...
It's my first Kalman filter so I may have made a poor implementation if this is not a common issue...
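For illustration, a minimal sketch of the kind of wrap-around normalization I suspect is needed (assuming the innovation is in radians):

#include <cmath>

// Maps any angle into (-pi, pi], so that headings near +/-pi do not
// produce a huge spurious error in the filter update.
double wrapToPi(double angle) {
    return atan2(sin(angle), cos(angle));
}

// assumed usage inside the Kalman update:
// double innovation = wrapToPi(compassHeading - predictedHeading);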
Edit: trudesagen's solution solved my problem.
|
I'm using Processing to send strings to an Arduino, using code like
else {
int u=90;
port.write(u+"z");
}
on the Processing side, and on the Arduino side I'm using calls like
case 'z':
z.write(v);
v = 0;
break;
case 'L':
z.write(0);
//v = 0;
break;
}
yet I can't get the servo to stop at all. How do I make it shut off?
If it were a regular servo I wouldn't even ask, because that's easy, but I write 0 or 90 or LOW and nothing happens; it just keeps spinning in one direction. When it meets one of the conditions in my statements it switches polarity/direction, and that's good - I want that - but I made this function to make it stop and it is not doing so. Does anyone have any ideas?
I am using a Parallax Continuous Rotation Servo.
|
When running on a hard surface, the Create will shake sometimes during turns or acceleration.
Has anyone ever removed the springs or pinned the wheels in place so they can't move up and down?
|
I was wondering: is a 1D point mass (a mass which can only move along a line, accelerated by an external time-varying force; see Wikipedia - Double integrator) a holonomic or a nonholonomic system? Why?
I think that it is nonholonomic, since it cannot move in an arbitrary way in its configuration space (which is 1D, just the $x$ axis). E.g. if the point mass is at $x=10$ with a velocity of 100 m/s in the positive $x$-direction, it cannot immediately go to $x=9.9$ due to its inertia. However, I have the feeling that my thoughts are wrong...
The background is the following:
I am trying to understand what holonomic and nonholonomic systems are. What I found so far:
Mathematically:
Holonomic systems are systems for which all constraints are integrable into positional constraints.
Nonholonomic systems are systems which have constraints that are not integrable into positional constraints.
Intuitively:
Holonomic systems are systems where a robot can move in any direction in the configuration space.
Nonholonomic systems are systems where the velocities (magnitude and/or direction) and other derivatives of the position are constrained.
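For comparison, the textbook example of a nonintegrable constraint that I keep encountering is the unicycle, whose no-side-slip condition

$\dot{x} \sin\theta - \dot{y} \cos\theta = 0$

cannot be integrated into a constraint on the configuration $(x, y, \theta)$ alone, whereas the double integrator above has no velocity constraint of this kind, only dynamics.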
|
I will have this configuration:
A2212 Brushless Motor 1000KV - 4 each
30A Electronic Speed Control (ESC) - 4 each
Propeller - 1045 Propeller CW & CCW Pair 10 inch * 4.5 pitch
Arduino Mega - 2560 board
Raspberry Pi 3
Open pilot CC3D flight controller
I want to know what rating of LiPo battery I should get for this configuration.
The reason I am asking here is that a simple Google search has not been able to satisfy me with an explanation...
Also, the quadcopter will weigh 1.5 kg, so I need a stable current discharge.
This is my first quadcopter, I am a Computer Science guy, so I have little knowledge of electronics, I'm learning, but need help...
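For reference, this is the kind of back-of-the-envelope calculation I have pieced together so far (the ~10 A full-throttle draw per A2212/1045 combination is a figure I have seen quoted, not verified):

$I_{max} \approx 4 \times 10\,\mathrm{A} = 40\,\mathrm{A}$

so, for example, a 3S 2200 mAh pack rated at 30C could continuously deliver $30 \times 2.2\,\mathrm{Ah} = 66\,\mathrm{A}$, which covers that worst case with margin, and at a ~20 A average hover draw it would last roughly $2.2\,\mathrm{Ah} / 20\,\mathrm{A} \approx 6.6$ minutes. Please correct me if this reasoning is wrong.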
|
I'm still new to RPi and I am currently trying to do a smart home model.
I plan to use the RPi only to control 5 servos (which will control the opening/closing of the doors by setting the angle) and 5 LEDs.
Will I need an external circuit to supply power for the servos, or is it fine to just connect them to the RPi?
|
I'm doing a mobile robot project with robotic arms. I wanted to buy a chassis for my robot that can carry enough weight, but many websites don't give definitive answers about the maximum payload.
Is there a way to figure this out just by knowing details about the motors?
|