Good day,
I am currently implementing a single loop PID controller using angle setpoints as inputs. I was trying out a different approach for the D part of the PID controller.
What brought this about is that when I was able to reach a 200 Hz (0.00419 s) loop rate, adding a D gain made the quadcopter dampen the movements in a non-continuous manner. This was not the case when my algorithm was running at around 10 Hz. At an angle setpoint of 0 degrees, I would try to push it to one side by 5 degrees; the quad would resist the movement and stay rock solid at first, but then let go after a while, allowing me to push it off by 2 degrees (the dampening effect weakens over time), before it tried to dampen the motion again.
This is my implementation of the traditional PID:
Derivative on Error:
//Calculate Orientation Error (current - target)
float pitchError = pitchAngleCF - pitchTarget;
pitchErrorSum += (pitchError*deltaTime2);
float pitchErrorDiff = pitchError - pitchPrevError;
pitchPrevError = pitchError;
float rollError = rollAngleCF - rollTarget;
rollErrorSum += (rollError*deltaTime2);
float rollErrorDiff = rollError - rollPrevError;
rollPrevError = rollError;
float yawError = yawAngleCF - yawTarget;
yawErrorSum += (yawError*deltaTime2);
float yawErrorDiff = yawError - yawPrevError;
yawPrevError = yawError;
//PID controller list
float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum + pitchKd*pitchErrorDiff/deltaTime2;
float rollPID = rollKp*rollError + rollKi*rollErrorSum + rollKd*rollErrorDiff/deltaTime2;
float yawPID = yawKp*yawError + yawKi*yawErrorSum + yawKd*yawErrorDiff/deltaTime2;
//Motor Control - Mixing
//Motor Front Left (1)
float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation;
What I tried to do next was to implement a derivative-on-measurement method from the article linked below to remove derivative output spikes. However, the derivative part now seems to increase the corrective force rather than dampen it.
Derivative on Measurement:
//Calculate Orientation Error (current - target)
float pitchError = pitchAngleCF - pitchTarget;
pitchErrorSum += (pitchError*deltaTime2);
float pitchErrorDiff = pitchAngleCF - pitchPrevAngleCF; // <----
pitchPrevAngleCF = pitchAngleCF;
float rollError = rollAngleCF - rollTarget;
rollErrorSum += (rollError*deltaTime2);
float rollErrorDiff = rollAngleCF - rollPrevAngleCF; // <----
rollPrevAngleCF = rollAngleCF;
float yawError = yawAngleCF - yawTarget;
yawErrorSum += (yawError*deltaTime2);
float yawErrorDiff = yawAngleCF - yawPrevAngleCF; // <----
yawPrevAngleCF = yawAngleCF;
//PID controller list // <---- The D terms are now negative
float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum - pitchKd*pitchErrorDiff/deltaTime2;
float rollPID = rollKp*rollError + rollKi*rollErrorSum - rollKd*rollErrorDiff/deltaTime2;
float yawPID = yawKp*yawError + yawKi*yawErrorSum - yawKd*yawErrorDiff/deltaTime2;
//Motor Control - Mixing
//Motor Front Left (1)
float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation;
My question now is:
Is there something wrong with my implementation of the second method?
Source: http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-derivative-kick/
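For context, the sign relationship between the two forms, assuming a constant setpoint: with the error defined as measurement minus target, as in the code above,
$$
\frac{de}{dt} = \frac{d(\text{measurement} - \text{target})}{dt} = \frac{d(\text{measurement})}{dt},
$$
so under this convention the derivative of the measurement equals the derivative of the error; the sign flip in the linked article is derived for the opposite convention, error = setpoint - measurement.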
The way I obtain the change in time, or DT, is by taking a timestamp at the start of the loop and another at the end of the loop; their difference is the DT. getTickCount() is an OpenCV function.
/* Initialize I2c */
/* Open Files for data logging */
while(1){
deltaTimeInit=(float)getTickCount();
/* Get IMU data */
/* Filter using Complementary Filter */
/* Compute Errors for PID */
/* Update PWM's */
//Terminate Program after 40 seconds
if((((float)getTickCount()-startTime)/(((float)getTickFrequency())))>20){
float stopTime=((float)getTickCount()-startTime)/((float)getTickFrequency());
gpioPWM(24,0); //1
gpioPWM(17,0); //2
gpioPWM(22,0); //3
gpioPWM(18,0); //4
gpioTerminate();
int i=0;
for (i=0 ; i < arrPitchCF.size(); i++){
file8 << arrPitchCF.at(i) << endl;
}
for (i=0 ; i < arrYawCF.size(); i++){
file9 << arrYawCF.at(i) << endl;
}
for (i=0 ; i < arrRollCF.size(); i++){
file10 << arrRollCF.at(i) << endl;
}
for (i=0 ; i < arrPitchAccel.size(); i++){
file2 << arrPitchAccel.at(i) << endl;
}
for (i=0 ; i < arrYawAccel.size(); i++){
file3 << arrYawAccel.at(i) << endl;
}
for (i=0 ; i < arrRollAccel.size(); i++){
file4 << arrRollAccel.at(i) << endl;
}
for (i=0 ; i < arrPitchGyro.size(); i++){
file5 << arrPitchGyro.at(i) << endl;
}
for (i=0 ; i < arrYawGyro.size(); i++){
file6 << arrYawGyro.at(i) << endl;
}
for (i=0 ; i < arrRollGyro.size(); i++){
file7 << arrRollGyro.at(i) << endl;
}
for (i=0 ; i < arrPWM1.size(); i++){
file11 << arrPWM1.at(i) << endl;
}
for (i=0 ; i < arrPWM2.size(); i++){
file12 << arrPWM2.at(i) << endl;
}
for (i=0 ; i < arrPWM3.size(); i++){
file13 << arrPWM3.at(i) << endl;
}
for (i=0 ; i < arrPWM4.size(); i++){
file14 << arrPWM4.at(i) << endl;
}
for (i=0 ; i < arrPerr.size(); i++){
file15 << arrPerr.at(i) << endl;
}
for (i=0 ; i < arrDerr.size(); i++){
file16 << arrDerr.at(i) << endl;
}
file2.close();
file3.close();
file4.close();
file5.close();
file6.close();
file7.close();
file8.close();
file9.close();
file10.close();
file11.close();
file12.close();
file13.close();
file14.close();
file15.close();
file16.close();
cout << " Time Elapsed = " << stopTime << endl;
break;
}
while((((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())))<=0.00419){ //0.00209715|0.00419
cout << " DT end = " << deltaTime2 << endl;
deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency()));
}
cout << " DT end = " << deltaTime2 << endl;
}
Here's my data:
|
I am using 2 identical DC motors and a castor wheel. The motors are connected to L293D motor driver and are controlled by RPi.
The robot is not going straight. It veers off to the right. I am running both the motors at 100% PWM.
What I tried to correct the error:
I adjusted the PWM of the wheel going faster to 99%, but the robot just turns to the other side;
I adjusted the weight on the robot and the problem still persists.
I once tried to run the motors without any load. Could that be the cause, as I was later told that running a DC motor without any load damages it?
If that is not the cause, then please tell me how to solve this problem without using any sensors for controlling it.
|
I am controlling a robot via usb from an Android phone running the robot's code. This phone has a poor battery and I need to extend its life with a USB charger (can't change phones). How can I charge an android phone via usb, while maintaining a USB connection to the robot? I can solder wires together if needed, or can buy adapters as needed.
|
I have a computer-vision application I made to localize a robot in a room; the software has been in use for a while and is working fine.
When I calibrated the camera and got the intrinsics and lens distortion coefficients there was a lens protector on the lens, mounted on the robot's lid.
If I take off the robot's lid (and thus the lens protector) the localization solution becomes erratic and inaccurate, so I think the lens protector might be changing the distortion properties significantly.
Today the lens protector became detached and it was replaced shortly after. So now the calibration may no longer be valid and the localization solution is much more noisy.
Can a lens protector greatly affect the distortion properties of the image, or can someone offer another explanation?
I intend to recalibrate and super-glue the lens protector down to the robot's lid, but I am curious if this is my problem, and if anyone else has encountered this with lens protectors.
|
I understand that to be able to define a point in 3D space, you need three degrees of freedom (DOFs). To additionally define an orientation in 3D space, you need 6 DOFs in total. This is intuitive to me when each of these DOFs defines the position or orientation along one axis of an orthogonal X-Y-Z system.
However, consider a robot arm such as this: http://www.robotnik.eu/robotics-arms/kinova-mico-arm/. This too has 6 DOFs, but rather than each DOF defining a position or orientation in an X-Y-Z system, it defines an angular rotation of one joint along the arm. If all the joints were arranged along a single axis, for example, then these 6 DOFs would in fact only define one angular rotation.
So, it is not true that each DOF independently defines a single position or orientation. However, in the case of this robot arm, it can reach most positions and orientations. I'm assuming this is because the geometry of the links between the joints make each DOF define an independent position or orientation, but that is a very vague concept to me and not as intuitive as simply having one DOF per position or orientation.
Can somebody offer some help in understanding these concepts?
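One way to make the link between joint DOFs and task-space DOFs precise is through the forward kinematics map (a sketch of the standard argument, not specific to this particular arm):
$$
x = f(q), \qquad \dot{x} = J(q)\,\dot{q}, \qquad J(q) = \frac{\partial f}{\partial q} \in \mathbb{R}^{6\times 6}
$$
Here $q$ is the vector of 6 joint angles and $x$ is the end-effector pose (3 position plus 3 orientation coordinates). Wherever the Jacobian $J(q)$ has full rank 6, small joint motions can produce any small change of position and orientation, so the 6 joint DOFs act as 6 independent task-space DOFs. In degenerate geometries, such as all joint axes arranged along a single line, or at singular configurations, the rank drops and some directions become unreachable, which is exactly the situation described above.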
|
The target is in the shape of a U where the horizontal segment is 20 inches, and the two vertical segments are 14 inches. We are using a camera to image the target, and then using vision processing to isolate the target from the rest of the image. We know the vertical field of view, and the horizontal field of view of the camera. The resolution of the camera is 640x480 pixels.
The vertical distance between the camera on the robot and the target is constant but as of yet unknown because the robot hasn't been constructed yet. It is known, however, that the target will always have a higher elevation than the camera.
How can we use this data to calculate in real time the robot's distance to the target, and the angle to the target?
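A minimal sketch of one common way to do this with a pinhole-camera model (my own example; variable names are placeholders, and it assumes the target's pixel width has already been measured by the vision pipeline and that lens distortion is negligible):

```cpp
#include <cmath>

struct TargetEstimate { double distance; double angle; };

// Estimate range and bearing from the apparent size and position of the 20 in segment.
TargetEstimate estimateTarget(double targetWidthPx,   // measured pixel width of the 20 in segment
                              double targetCenterXPx, // pixel x-coordinate of the target centre
                              double imageWidthPx,    // 640
                              double hFovRad)         // horizontal field of view in radians
{
    const double targetWidthIn = 20.0;  // known physical width of the U target
    // Focal length expressed in pixels, derived from the horizontal FOV.
    double focalPx = (imageWidthPx / 2.0) / std::tan(hFovRad / 2.0);
    // Pinhole model: apparent size is inversely proportional to distance.
    double distanceIn = targetWidthIn * focalPx / targetWidthPx;
    // Horizontal bearing of the target centre relative to the optical axis.
    double angleRad = std::atan((targetCenterXPx - imageWidthPx / 2.0) / focalPx);
    return { distanceIn, angleRad };
}
```

Since the target is mounted above the camera at a fixed (once known) height difference, the horizontal ground distance can then be recovered from this slant estimate and the camera's tilt; the same construction with the vertical FOV and the 14 inch segments gives a cross-check.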
|
So I have a multi-rotor with a basic PID controller that keeps its axes stable using the gyroscope. However, the multi-rotor does not hold its height or position, so I would like to use an accelerometer to hold its rough position (auto-level). I want to use both the gyro and the accelerometer, but how would the accelerometer values be used? Are they fed through the PID the same way the gyro values are (degrees per second, which is the rate I used to calculate the PID), and then used to adjust the ESCs? I am confused about that part (the basic logic for using the accelerometer values).
|
I have bought an STM iNEMO evaluation board in order to monitor the inclination of a separate magnetic sensor array as it moves in a linear scan outside of a (non-magnetic) stainless steel pipe. I want to measure the inclination of the sensor along the scan and ensure that it does not change. The problem I have found is that the measured magnetic field from the integrated magnetometer varies greatly with position along the pipe, which in turn causes a large, position-dependent error in one axis of the inclination reported by the iNEMO IMU. In Fig. 1 below I show the setup of the test: I measured the inclination from the IMU while moving it along the length of the pipe and back again. The board did not change inclination throughout the measurement. In Fig. 2 I show the magnetometer and inclination measurements recorded by the "iNEMO application", showing the large error in one of the inclinations.
My question is whether you know if there is any way of correcting for the magnetic field variation so that I can still accurately determine the inclination in all three directions? My data suggests to me that the magnetic field variation measured from the magnetometer is much greater than the geomagnetic field, so the inclination measurement will always be inaccurate. A follow up question I then have is: Is there a way to measure 3 axis orientation WITHOUT using a magnetometer?
|
Can someone please share the typical cost of material to 3D print an object like a raspberry pi case? Thank you.
|
My understanding of walking robots (e.g. https://www.youtube.com/watch?v=xJlkBBdyBYI) is that they use a gyroscope to determine the current orientation of the robot, or each joint of the robot. This is because if you just put encoders on each joint, the cumulative error over the entire robot will be too large to maintain stability. Therefore, a gyroscope measures the "real" orientation, and this is used for feedback when the robot is walking.
However, I'm also aware that some walking robots use accelerometers to maintain stability. What would be the benefit of using an accelerometer in this case? Would it be used instead of a gyroscope, or together with a gyroscope?
My guess is that gyroscopes do not measure acceleration directly (unless you were to numerically calculate this based on lots of orientation readings), but accelerometers do measure it directly (and more reliably than this numerical method). Knowing the acceleration as well as the position then enables the robot to more accurately predict its future position, and hence the feedback loop is more robust. Is this correct, or am I missing the point?
|
Hello, I'm trying to figure out how modular arm joints are designed and what kind of bearings/shafts are used for a modular-type robotic arm. Take the "UR arm" for example. I believe those 'T-shaped pipes' include both a drive and a bearing system. And as you can see from the second image, it can be detached easily. So I think it's not just a simple "motor shaft connecting to the member that we want to rotate" mechanism. I'm wondering which type of mechanism and bearing system is inside those T-shaped pipes. How can I transfer rotational motion to a member without using shafts?
|
I am looking at this page that describes various characteristics of gyroscopes and accelerometers. Close to the end (where they speak about IMUs), the names of the items have something like this:
9 degrees of freedom
6 degrees of freedom
Can anyone explain what this means?
|
I have a PR2 robot in an environment, which can be seen on the GUI of OpenRAVE.
Now, how can I load a PUMA robot arm in the same environment?
|
I want to switch from user control to autonomous. When I have the program running for 120 seconds, how come it won't automatically switch into autonomous mode? Thanks!
#pragma config(Motor, port1, driveBR, tmotorVex393, openLoop)
#pragma config(Motor, port2, driveFR, tmotorVex393, openLoop)
#pragma config(Motor, port3, driveFL, tmotorVex393, openLoop)
#pragma config(Motor, port4, flyRight, tmotorVex393, openLoop)
#pragma config(Motor, port5, driveBL, tmotorVex393, openLoop)
#pragma config(Motor, port6, flyLeft, tmotorVex393, openLoop)
#pragma config(Motor, port10, Belt, tmotorVex393, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//
#pragma platform(VEX)
#pragma competitionControl(Competition)
#pragma autonomousDuration(15)
#pragma userControlDuration(120)
#include "Vex_Competition_Includes.c"
//Main competition background code...do not modify!
void pre_auton() {
}
task autonomous() {
while(true == true) {
motor[flyLeft] = -127;
motor[flyRight] = 127;
wait1Msec(500);
motor[Belt] = -127;
}
}
task usercontrol() {
while (true == true) {
motor[driveFR] = -vexRT[Ch2];
motor[driveFL] = vexRT[Ch3];
motor[driveBR] = vexRT[Ch2];
motor[driveBL] = vexRT[Ch3];
if(vexRT[Btn6D] == 1) {
motor[flyRight] = -127;
motor[flyLeft] = -127;
}
if(vexRT[Btn6D] == 0) {
motor[flyRight] = 0;
motor[flyLeft] = 0;
}
if(vexRT[Btn5D] == 1) {
motor[Belt] = -127;
}
if(vexRT[Btn5D] == 0) {
motor[Belt] = 0;
}
}
}
|
I have a small POM bevel gear with these dimensions:
It has a 6mm hole for the shaft and a M4 hole for the set screw.
Suppose this bevel gear is meshed with a 45T bevel gear and gives a max output torque of 0.4 kg·cm. How should the 6mm shaft be designed? Should the diameter be precisely 6mm? Should it be flattened into a 'D' shape (so that the set screw can hold the shaft)? I'm planning to use a metal shaft.
Any help will be appreciated.
Thanks
|
I am a programmer for my school's FRC robotics team and have received the request from our hardware/driving department to limit the speed at which the robot's motors can accelerate given a joystick input telling it to increase the speed of the motor. For example, when the robot first starts up and the driver decides to move the joystick from the center to the fully up position (0 to full motor power), we don't want it to literally go from 0 to full motor power in an instant - it obviously creates some rather jerky, unstable behavior. How might I receive the target joystick position from the joystick, save it, and build up to it over time (and if any other inputs are sent in this process — like telling it to turn around — stop the current process and enact the new one)?
I am using Java with WPILib's 2016 robotics library: here's the API http://first.wpi.edu/FRC/roborio/release/docs/java/, and here's the tutorials http://wpilib.screenstepslive.com/s/4485/m/13809.
I am using the "IterativeRobot" template class, and teleop is being run in the method teleopPeriodic(), which is continuously called every few milliseconds in the program (it's where i'm receiving joystick input and calling the method RobotDrive.tankDrive() with the inputs).
I realize this is more of a programming question than a robotics question, but I figured it would be better to put it here than in stack overflow, etc. If someone could give me some simple pseudocode or just a conceptual idea of how this might be done (not necessarily as it pertains directly to the library or the language I'm using), that would be great.
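Conceptually, this is a slew-rate limiter: on each loop pass, move the commanded output toward the latest joystick target, but by no more than a fixed amount per tick. A minimal sketch (written in C++ here; the idea maps directly to Java and is independent of WPILib, so names and numbers are placeholders):

```cpp
// Limits how fast the drive command may change, regardless of how the joystick jumps.
class SlewRateLimiter {
public:
    explicit SlewRateLimiter(double maxDeltaPerTick)
        : maxDelta_(maxDeltaPerTick), current_(0.0) {}

    // Call once per periodic pass with the latest joystick reading.
    double update(double target) {
        double change = target - current_;
        if (change >  maxDelta_) change =  maxDelta_;   // clamp the per-tick change
        if (change < -maxDelta_) change = -maxDelta_;
        current_ += change;
        return current_;
    }

private:
    double maxDelta_;   // e.g. 0.04 -> 0 to full power in about 25 ticks
    double current_;    // last commanded value
};

// Usage inside the periodic loop (placeholder joystick/drive calls):
//   double left  = leftLimiter.update(joystickLeftY);
//   double right = rightLimiter.update(joystickRightY);
//   drive.tankDrive(left, right);
```

Because the target is re-read every pass, a new command (such as reversing direction) automatically supersedes the old one; the ramp simply starts heading toward the new target from wherever the output currently is.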
|
For the last few months I have been playing with ROS on an nVidia Jetson TK1 development board. Up until this point, it has mostly been playing with the GPIO header, an Arduino Uno, a couple physical contact sensors, and a few custom motor and servo boards that I slapped together. But lately I've been eyeing an old 700 series Roomba that has been gathering dust (was replaced by an 800 series).
Does anyone know if the Communication Cable for Create 2 will work with a 700 series Roomba?
I know there are DIY designs out there, but I have always been a fan of using off-the-shelf components if they exist - you rarely save more money than your time is worth if it is something like a cable or similar component. So if the Create 2 cable will work, I'll use that. If not, I'll see what I can do to make my own.
|
My small robot has two motors controlled by an L293D and that is controlled via a Raspberry Pi. They will both go forwards but only one will go backwards.
I've tried different motors and tried different sockets in the breadboard, no luck. Either the L293D's chip is broken (but then it wouldn't go forwards) or I've wired it wrong.
I followed the tutorial, Controlling DC Motors Using Python With a Raspberry Pi, exactly.
Here is a run down of what works. Let the 2 motors be A and B:
When I use a python script (see end of post) both motors go "forwards". When I change the values in the Python script, so the pin set to HIGH and the pin set to LOW are swapped, motor A will go "backwards", this is expected. However, motor B will not move at all.
If I then swap both motors' wiring then the original python script will make both go backwards but swapping the pins in the code will make motor A go forwards but motor B won't move.
So basically, motor A will go forwards or backwards depending on the python code but motor B can only be changed by physically changing the wires.
This is forwards.py
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor2A = 23
Motor2B = 21
Motor2E = 19
Motor1A = 18
Motor1B = 16
Motor1E = 22
GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)
GPIO.setup(Motor2A, GPIO.OUT)
GPIO.setup(Motor2B, GPIO.OUT)
GPIO.setup(Motor2E, GPIO.OUT)
print("ON")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)
GPIO.output(Motor2A, GPIO.HIGH)
GPIO.output(Motor2B, GPIO.LOW)
GPIO.output(Motor2E, GPIO.HIGH)
And this is backwards.py
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BOARD)
Motor2A = 21
Motor2B = 23
Motor2E = 19
Motor1A = 16
Motor1B = 18
Motor1E = 22
GPIO.setup(Motor1A, GPIO.OUT)
GPIO.setup(Motor1B, GPIO.OUT)
GPIO.setup(Motor1E, GPIO.OUT)
GPIO.setup(Motor2A, GPIO.OUT)
GPIO.setup(Motor2B, GPIO.OUT)
GPIO.setup(Motor2E, GPIO.OUT)
print("ON")
GPIO.output(Motor1A, GPIO.HIGH)
GPIO.output(Motor1B, GPIO.LOW)
GPIO.output(Motor1E, GPIO.HIGH)
GPIO.output(Motor2A, GPIO.HIGH)
GPIO.output(Motor2B, GPIO.LOW)
GPIO.output(Motor2E, GPIO.HIGH)
If you see this diff https://www.diffchecker.com/skmx6084, you can see the difference:
Below are some pictures. You can use the colour of the cables to link them between pictures
|
I am working on a remote control project that involves using Node.js and Socket.io to transmit joystick data from a webpage to my BeagleBone Black.
However, I am somewhat disappointed with the BeagleBone - it seems like what should be such simple tasks such as connecting to Wi-Fi can be quite tricky...
My question is: Are there alternative boards I should be looking at? Boards that also have Node.js libraries with PWM support, could stream video from a webcam, but are easier to set up and have a larger developer community?
|
I want to find the equations of motion of an RRRR robot. I have studied it a bit but I am having some confusion.
In one of the lectures I found online, the inertia matrix of a link is described as $\bf{I}_i$, which is computed from $\tilde{\bf{I}}_i$ as described below.
In conclusion, the kinetic energy of a manipulator can be determined when, for each link, the following quantities are known:
the link mass $m_i$;
the inertia matrix $\bf{I}_i$, computed with respect to a frame $\mathcal{F}_i$ fixed to the center of mass in which it has a constant expression $\tilde{\bf{I}}_i$;
the linear velocity $\bf{v}_{Ci}$ of the center of mass, and the rotational velocity $\omega_i$ of the link (both expressed in $\mathcal{F}_0$);
the rotation matrix $\bf{R}_i$ between the frame fixed to the link and $\mathcal{F}_0$.
The kinetic energy $K_i$ of the i-th link has the form:
$$
K_i = \frac{1}{2}m_i\bf{v}_{Ci}^T\bf{v}_{Ci} + \frac{1}{2}\omega_i^T\bf{R}_i\tilde{\bf{I}}_i\bf{R}_i^T\omega_i \\
$$
It is now necessary to compute the linear and rotational velocities ($\bf{v}_{Ci}$ and $\omega_i$) as functions of the Lagrangian coordinates (i.e. the joint variables $\bf{q}$).
So $\tilde{\bf{I}}_i$ is computed wrt to fixed frame attached to the centre of mass.
However in another example below from another source there is no rotation matrix multiplication with ${I}_{C_1}$ and $I_{C_2}$ as shown above. Am I missing something?
$\underline{\mbox{Matrix M}}$
$$
M = m_1 J_{v_1}^TJ_{v_1} + J_{\omega_1}^TI_{C_1}J_{\omega_1} + m_2 J_{v_2}^TJ_{v_2} + J_{\omega_2}^TI_{C_2}J_{\omega_2} \\
$$
What is the significance of multiplying the rotation matrix with $I_{C_1}$ or $\tilde{\bf{I}}_i$?
I am using the former approach and getting a fairly large mass matrix. Is it normal to have such long terms inside a mass matrix? I still need to know, though, which method is correct.
(A series of images showing Mathematica output of a 4x4 matrix with some very, very long terms - A, B, C)
The equation I used for the mass matrix is:
$$
\begin{array}{lcl}
K & = & \displaystyle{\frac{1}{2}} \displaystyle{\sum_{i=1}^{n}} m_i\bf{v}_{Ci}^T \bf{v}_{Ci} + \displaystyle{\frac{1}{2}} \displaystyle{\sum_{i=1}^{n}} \omega_i^T\bf{R}_i\tilde{\bf{I}}_i\bf{R}_i^T\omega_i \\
& = & \boxed{ \frac{1}{2} \dot{\bf{q}}^T \sum_{i=1}^{n}\left[ m_i {\bf{J(\bf{q})}_{v}^{i}}^T {\bf{J(\bf{q})}_{v}^i} + {\bf{J(\bf{q})}_{\omega}^i}^T\bf{R}_i\tilde{\bf{I}}_i\bf{R}_i^T\bf{J(\bf{q})}_{\omega}^i \right] \dot{\bf{q}} } \\
& = & \displaystyle{\frac{1}{2}} \dot{\bf{q}}^T\bf{M(q)}\dot{\bf{q}} \\
& = & \displaystyle{\frac{1}{2}} \displaystyle{\sum_{i=1}^{n}} \displaystyle{\sum_{j=1}^{n}} M_{ij}(\bf{q})\dot{q}_i \dot{q}_j \\
\end{array}
$$
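For context, the relation between the two notations can be written as follows (assuming, as stated above, that $\tilde{\bf{I}}_i$ is constant because it is expressed in a frame fixed to the link at its centre of mass):
$$
\bf{I}_i^{(0)} = \bf{R}_i\,\tilde{\bf{I}}_i\,\bf{R}_i^T
$$
That product is the same inertia tensor expressed in the base frame $\mathcal{F}_0$; it is needed because $\omega_i$ in the kinetic-energy expression is written in $\mathcal{F}_0$, while $\tilde{\bf{I}}_i$ is only constant in the link frame. A formulation that writes $J_{\omega_i}^T I_{C_i} J_{\omega_i}$ without an explicit rotation is typically either expressing the angular velocity (and the angular Jacobian) in the link frame or absorbing the rotation into $I_{C_i}$, so the two expressions are consistent rather than contradictory. Long symbolic entries in the mass matrix of a 4-DOF arm are also common; they usually shrink considerably after trigonometric simplification.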
|
I'm looking at the assembly of a tail rotor that should look like this
original image
I wonder if the "tail output shaft stopper" (circled in red) is meant to be a bearing or just a piece of metal stopper:
My reading is that, since it's held by 2 set screws, the whole part should rotate with the rod. While rotating, it'd rub against the bevel gear on the tail drive though. Am I missing something?
|
I've looked everywhere I can think of to find this information, but haven't come across anything. Does anyone know what kind of screws I can use to replace the ones on top of my Roomba 530?
I realize that the Create 2 is technically a 600 series, but I would expect they were the same.
I'd like to replace the screws on my Roomba with standoffs so I can stick a mounting plate on top of it. (Additional sensors, CPU, etc.)
|
I have FPV camera which outputs analog video (RCA, PAL).
I want to capture video and do image processing, therefore I need some way to convert the analog video to digital.
Can someone recommend how to do it? Is there any advice, or a shield, which can assist?
Please note:
I want to convert the frames with minimum latency, because it is a real time flying drone.
I don't need to convert the image to some compressed format (which encoding/ decoding may take time), if I can get the RGB matrix straight, it is preferred.
I thought about a digital-output camera, but I need one which weighs only a few grams and I haven't found one yet.
|
Scenario
I have 2 roaming robots, each in different rooms of a house, and both robots are connected to the house wifi. Each robot only has access to the equipment on itself.
Question
How can the robots be aware of each other's exact position using only their own equipment and the house wifi?
EDIT: Additional Info
Right now the robots only have:
RGBDSLAM via Kinect
No initial knowledge of the house or their location (no docks, no mappings/markings, nada)
Can communicate via wifi and that part is open ended
I'm hoping to be able to stitch the scanned rooms together before the robots even meet. Compass + altimeter + gps will get me close but the goal is to be within an inch of accuracy which makes this tough. There IS freedom to add whatever parts to the robots themselves / laptop but the home needs to stay dynamic (robots will be in a different home every time).
|
As far as I know, a hardware real-time robot control system requires a specific computing unit to solve the kinematics and dynamics of a robot such as interval zero RTX, which assigns CPU cores exclusively for the calculation, or a DSP board, which does exactly the same calculation. This configuration makes sure that each calculation is strictly within, maybe, 1 ms.
My understanding is that ROS, which runs under Ubuntu, doesn't have an exclusive computing unit for that. Kinematics and dynamics run under different threads of the same CPU which operates the Ubuntu system, path planning, and everything else.
My question is: how does ROS achieve soft real-time? Does it slow down the sampling time to maybe 100 ms and make sure each calculation can be done in time? Or does the sampling time change at each cycle, maybe from 5 ms, to 18 ms, to 39 ms, in order to be as fast as possible, with ROS somehow compensating for it at each cycle?
|
Good day,
I am working on an autonomous flight controller for a quadcopter ('X' configuration) using only angles as inputs for the setpoints used in a single loop PID controller running at 200Hz (PID Implementation is Here: Quadcopter PID Controller: Derivative on Measurement / Removing the Derivative Kick). For now I am trying to get the quadcopter to stabilize at a setpoint of 0 degrees. The best I was able to come up with currently is +-5 degrees, which is bad for position hold. I first tried using only a PD controller, but since the quadcopter is inherently front heavy due to the stereo cameras, no amount of D or P gain is enough to stabilize the system. An example is the image below, to which I added a very small I gain:
As you can see from the image above (at the second plot), the oscillations occur at a level below zero degrees due to the quadcopter being front heavy. This means that the quad oscillates between the level position of 0 degrees and a negative angle (towards the front). To compensate for this behaviour, I discovered that I can use the I gain to set the DC level at which these oscillations occur so that it reaches the setpoint. An image is shown below with [I think] an adequate I gain applied:
I have adjusted the PID gains to reduce the jitters caused by too much P gain and D gain. These are my current settings (Which are two tests with the corresponding footage below):
Test 1: https://youtu.be/8JsraZe6xgM
Test 2: https://youtu.be/ZZTE6VqeRq0
I can't seem to tune the quadcopter to reach the setpoint to within +-1 degree of error. I noticed that further increasing the I-gain no longer increases the DC offset.
When do I know if the I-gain I've set is too high? How does it reflect on the plot?
EDIT:
The Perr in the graphs are just the difference of the setpoint and the CF (Complementary Filter) angle.
The Derr plotted is not yet divided by the deltaTime because the execution time is small ~ 0.0047s which will make the other errors P and I hard to see.
The Ierr plotted is the error integrated with time.
All the errors plotted (Perr, Ierr, Derr) are not yet multiplied by the Kp, Ki, and Kd constants
The 3rd plot for each of the images is the response of the quadcopter. The values on the Y axis correspond to the value placed as the input into the gpioPWM() function of the pigpio library. Using a scope, I had mapped the values such that a pigpio integer input of 113 to 209 corresponds to a 1020 to 2000 µs high time of the PWM at 400Hz to the ESCs.
EDIT:
Here is my current code implementation with the setpoint of 0 degrees:
cout << "Starting Quadcopter" << endl;
float baseThrottle = 155; //1510ms
float maxThrottle = 180; //This is the current set max throttle for the PITCH YAW and ROLL PID to give allowance to the altitude PWM. 205 is the maximum which is equivalent to 2000ms time high PWM
float baseCompensation = 0; //For the Altitude PID to be implemented later
delay(3000);
float startTime=(float)getTickCount();
deltaTimeInit=(float)getTickCount(); //Starting value for first pass
while(1){
//Read Sensor Data
readGyro(&gyroAngleArray);
readAccelMag(&accelmagAngleArray);
//Time Stamp
//The while loop is used to get a consistent dt for the proper integration to obtain the correct gyroscope angles. I found that with a variable dt, it is impossible to obtain correct angles from the gyroscope.
while( ( ((float)getTickCount()-deltaTimeInit) / ( ((float)getTickFrequency()) ) ) < 0.005){ //0.00209715|0.00419
deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed
cout << " DT endx = " << deltaTime2 << endl;
}
//deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed
deltaTimeInit=(float)getTickCount(); //Start counting time elapsed
cout << " DT end = " << deltaTime2 << endl;
//Complementary Filter
float pitchAngleCF=(alpha)*(pitchAngleCF+gyroAngleArray.Pitch*deltaTime2)+(1-alpha)*(accelmagAngleArray.Pitch);
float rollAngleCF=(alpha)*(rollAngleCF+gyroAngleArray.Roll*deltaTime2)+(1-alpha)*(accelmagAngleArray.Roll);
float yawAngleCF=(alpha)*(yawAngleCF+gyroAngleArray.Yaw*deltaTime2)+(1-alpha)*(accelmagAngleArray.Yaw);
//Calculate Orientation Error (current - target)
float pitchError = pitchAngleCF - pitchTarget;
pitchErrorSum += (pitchError*deltaTime2);
float pitchErrorDiff = pitchError - pitchPrevError;
pitchPrevError = pitchError;
float rollError = rollAngleCF - rollTarget;
rollErrorSum += (rollError*deltaTime2);
float rollErrorDiff = rollError - rollPrevError;
rollPrevError = rollError;
float yawError = yawAngleCF - yawTarget;
yawErrorSum += (yawError*deltaTime2);
float yawErrorDiff = yawError - yawPrevError;
yawPrevError = yawError;
//PID controller list
float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum + pitchKd*pitchErrorDiff/deltaTime2;
float rollPID = rollKp*rollError + rollKi*rollErrorSum + rollKd*rollErrorDiff/deltaTime2;
float yawPID = yawKp*yawError + yawKi*yawErrorSum + yawKd*yawErrorDiff/deltaTime2;
//Motor Control - Mixing
//Motor Front Left (1)
float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation;
//Motor Front Right (2)
float motorPwm2 = -pitchPID - rollPID + yawPID + baseThrottle + baseCompensation;
//Motor Back Left (3)
float motorPwm3 = pitchPID + rollPID + yawPID + baseThrottle + baseCompensation;
//Motor Back Right (4)
float motorPwm4 = pitchPID - rollPID - yawPID + baseThrottle + baseCompensation;
//Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm that gets fed into the gpioPWM() function to avoid exceeding the earlier set maximum throttle while maintaining the ratios of the 4 motor throttles.
float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float minPWM = motorPWM[0];
int i;
for(i=0; i<4; i++){ // Get minimum PWM for filling
if(motorPWM[i]<minPWM){
minPWM=motorPWM[i];
}
}
cout << " MinPWM = " << minPWM << endl;
if(minPWM<baseThrottle){
float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors
cout << " Fill = " << fillPwm << endl;
motorPwm1=motorPwm1+fillPwm;
motorPwm2=motorPwm2+fillPwm;
motorPwm3=motorPwm3+fillPwm;
motorPwm4=motorPwm4+fillPwm;
}
float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float maxPWM = motorPWM2[0];
for(i=0; i<4; i++){ // Get max PWM for trimming
if(motorPWM2[i]>maxPWM){
maxPWM=motorPWM2[i];
}
}
cout << " MaxPWM = " << maxPWM << endl;
if(maxPWM>maxThrottle){
float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors
cout << " Trim = " << trimPwm << endl;
motorPwm1=motorPwm1-trimPwm;
motorPwm2=motorPwm2-trimPwm;
motorPwm3=motorPwm3-trimPwm;
motorPwm4=motorPwm4-trimPwm;
}
//PWM Output
gpioPWM(24,motorPwm1); //1
gpioPWM(17,motorPwm2); //2
gpioPWM(22,motorPwm3); //3
gpioPWM(18,motorPwm4); //4
|
I'm building an inverted pendulum to be controlled by DC motors, but I've run across a conundrum. Personal life experience tells me that it's better to have a lower center of mass to maintain balance. On the other hand, the greater the moment of inertia (e.g. the higher the center of mass), the easier it is to maintain balance as well.
These two views both seem plausible, and yet also seem contradictory. For an inverted pendulum, is there an optimal balance between the two perspectives? Or is one absolutely right while the other absolutely wrong? If one is wrong, then where is the error in my thinking?
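Both observations can be reconciled with the linearized pendulum model. For a point mass $m$ at height $l$ on a massless rod (my simplifying assumption), the falling dynamics near upright are
$$
I\ddot{\theta} \approx m\,g\,l\,\theta, \qquad I = m l^2 \;\Rightarrow\; \ddot{\theta} \approx \frac{g}{l}\,\theta,
$$
so the unstable time constant is roughly $\sqrt{l/g}$: raising the centre of mass makes the pendulum fall more slowly, giving the controller and motors more time to react, which is why a higher centre of mass is generally easier to balance for a given actuator bandwidth. The "low centre of mass is more stable" intuition applies to systems supported from below over a base of support, not to an inverted pendulum balanced above its pivot.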
|
I want to implement my own pose graph SLAM following [1]. Since my vehicle is moving in 3D space, I represent my pose using a 3D translation vector and a quaternion for orientation. [1] tells me that it's necessary to adapt their algorithm 1 by using manifolds to project the poses into Euclidean space.
I also studied the approach of [2]. In section "IV.B. Nonlinear Systems" they write that their approach remains valid for nonlinear systems. I conclude that for their case it's not obligatory to make use of a manifold. But I don't understand how they avoid it. So my questions are:
Is it correct that there is an alternative to manifolds?
If yes, how does this alternative look like?
[1] Grisetti, G., Kummerle, R., Stachniss, C., & Burgard, W. (2010). A tutorial on graph-based SLAM. Intelligent Transportation Systems Magazine, IEEE, 2(4), 31-43.
[2] Kaess, M., Ranganathan, A., & Dellaert, F. (2008). iSAM: Incremental smoothing and mapping. Robotics, IEEE Transactions on, 24(6), 1365-1378.
|
To plot any curve or function on paper we need points of that curve. To draw a curve, I will store a set of points in the processor and use motors, markers and other mechanisms to draw straight lines connecting these points; the points are so close to each other that the result will look like an actual curve.
So I am going to draw the curve with a marker or a pen.
To do this project I need motors which would change the position of a marker, but which ones?
From what I know, stepper motors and servo motors seem appropriate, but I am not sure since I have never used them; will they work?
The dimension of paper on which I will be working on is 30x30 cms.
I have two ideas for this machine
a. A rectangular one as shown
I would make my marker holder movable with the help of a rack and pinion mechanism, but I am not sure that this would be precise and I may have to switch to some other mechanism; if you know of such a mechanism, that could really help me.
b. A cylindrical one
Here I would roll paper onto this cylinder; the paper gets unrolled as the cylinder rotates. The marker holder is movable only in the X direction, and the rolling of the paper provides the change in Y position.
Which one of the above two methods is good?
I know about microcontrollers and I want to control the motors using them, so I decided to go with the ATmega16 microcontroller. But here I might need microstepping of the signals; how would I be able to do that with a microcontroller?
If you know the answer to at least one of the questions, those answers are always welcome.
If you need any clarifications about these then please leave a comment.
Thank you for your time.
Yours sincerely,
Jasser
Edit: To draw lines of a particular slope I would have to know the slope between two points, and depending on the slope I would rotate the motors at particular speeds so that the marker moves in a straight line with that slope.
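A common way to turn that idea into motor commands with two steppers is Bresenham-style interleaving: step the axis with the larger move every iteration and the other axis only when an error accumulator says so, which keeps the pen on the straight line between the two points. A minimal sketch (my own example; stepMotor() and stepsPerMm are placeholders for whatever driver interface is used):

```cpp
#include <cmath>
#include <cstdlib>

// Hypothetical low-level step function: moves one motor by one step in +/- direction.
// On an ATmega this would toggle the STEP/DIR pins of a stepper driver.
void stepMotor(int axis, int direction);

// Draw a straight segment by interleaving X and Y steps so the pen tracks the slope.
void drawSegment(double x0, double y0, double x1, double y1, double stepsPerMm)
{
    long dx = std::lround((x1 - x0) * stepsPerMm);
    long dy = std::lround((y1 - y0) * stepsPerMm);
    int sx = (dx >= 0) ? 1 : -1;
    int sy = (dy >= 0) ? 1 : -1;
    long adx = std::labs(dx), ady = std::labs(dy);

    long x = 0, y = 0;                        // steps issued so far on each axis
    long err = (adx > ady ? adx : -ady) / 2;  // Bresenham error accumulator
    while (x != adx || y != ady) {
        long e2 = err;
        if (e2 > -adx) { err -= ady; ++x; stepMotor(0, sx); }
        if (e2 <  ady) { err += adx; ++y; stepMotor(1, sy); }
        // a small delay here sets the feed rate
    }
}
```

Inserting a fixed delay between iterations sets the drawing speed; microstepping is normally handled by the stepper driver chip itself, with the microcontroller only generating step and direction pulses.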
|
I am currently reviewing a path accuracy algorithm. The measured data are points in the 7-dimensional joint space (the robot under test is a 7-axis robot, but this is not of importance for the question). As far as I know, path accuracy is measured and assessed in 3D Cartesian space. Therefore I am wondering if a path accuracy definition in joint angle space has any practical value. Sure, if one looks at the joint angle space as a 7-dimensional vector space in the example (with a Euclidean distance measure), one can formally do the math. But this seems very odd to me. For instance, an angle discrepancy between measured and expected for the lowest axis is of much more significance than a discrepancy for the axis nearest the end effector.
So here is my Question: Can anyone point me to references where path accuracy in joint space and/or algorithms for its calculation is discussed ?
(I am not quite sure what tags to use. Sorry if I misused some.)
|
I've been working on Humanoid Robot, and I face the problem of finding the Center of Mass of the Robot which will help in balancing the biped. Although COM has a very simple definition, I'm unable to find a simple solution to my problem.
My view: I have already solved the Forward and Inverse Kinematics of the Robot with Torso as the base frame. So, if I can find the position(and orientation) of each joint in the base frame, I can average all of them to get the COM. Is this approach reasonable? Will it produce the correct COM?
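For reference, the usual definition, assuming each link $i$ has mass $m_i$ and its centre of mass at position $\bf{p}_{Ci}$ expressed in the torso frame (both obtainable from the forward kinematics plus the link mass properties):
$$
\bf{p}_{COM} = \frac{\sum_i m_i\,\bf{p}_{Ci}}{\sum_i m_i}
$$
That is, the robot's COM is the mass-weighted average of the link centres of mass, not an unweighted average of the joint positions; the joint poses from the forward kinematics are still the starting point, but each link's mass and the offset of its own centre of mass from the joint frame have to be included.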
Can anyone offer any series of steps that I can follow to find the COM of the biped? Any help would be appreciated.
Cheers!
|
I want to get telemetry data in my Raspberry Pi that will be connected to a CC3D board either via USB cable or Serial communication. How can I get the data? I plan to have wifi communication between the Pi and my Laptop. Also OPLink modems will be used both in the Pi and the CC3D for the telemetry. Does anyone have a python example that may help to build an interface or output in the Linux shell to get raw telemetry data in RPi?
|
I am working on a robotics application that involves moving objects (e.g. books) between several (around 10) stacks. To measure the performance, I'd like to be able to measure which book is located on each of the stacks. The order is not important I just want to know if a book is on one of the stacks.
The stacks are separated by at least one meter and the height of the stacks is less than 30cm (< 8 Books).
If have thought of putting an RFID card in every book and fixing RFID readers above (or below) the stack positions. Several readers could be attached via SPI or I2C to some arduinos or RPis.
What do you think about this approach? Is there a simpler way? Could someone maybe recommend a sensor that could solve this problem?
// Update:
I can modify the books (e.g. add a QR-Marker) to some extent, but can't guarantee that the orientation on the stack is fixed.
|
I'm currently a robotics hobbyist, I am fully fledged with Arduino, and I have used the Raspberry Pi to make some robots and PCs. Currently, I am thinking of making my own Raspberry Pi, from scratch, on a breadboard or a PCB or something. I surfed the web quite a bit and I did not get the answer I was hoping for. By making a Pi, I mean that, instead of buying an Arduino, I can make one myself by buying the ATmega328, crystal oscillators, etc. I am asking this because my school requires me to do a project in which I make a computer or a gaming console or something like that, and I would hate to look at the disappointed face of the tester all because I just bought a Pi and connected some devices to it. Thanks in advance!
|
Suppose I have a DC motor with an arm connected to it (arm length = 10cm, arm weight = 0), motor speed 10rpm.
If I connect a 1Kg weight to the very end of that arm, how much torque is needed for the motor to do a complete 360° spin, provided the motor is placed horizontally and the arm is vertical?
Is there a simple equation where I can input any weight and get the required torque (provided all other factors remain the same)?
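For reference, a worked version of the usual static calculation (ignoring friction, the arm's own mass, and the extra torque needed to accelerate the load): the worst case is when the arm is horizontal, where the full weight acts at the end of the lever.
$$
\tau = m\,g\,r = 1\,\text{kg} \times 9.81\,\text{m/s}^2 \times 0.1\,\text{m} \approx 0.98\,\text{N·m} \approx 10\,\text{kg·cm}
$$
So the general form is $\tau = m g r$: plug in any mass $m$ and arm length $r$ to get the minimum holding torque, and add a margin for acceleration and losses.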
|
I'm willing to make my first robot, and I'd like to make one similar to the Sphero.
I know I have to add 2 motors to it and make it work like a hamster ball, but I don't understand how I can make it rotate about the x axis as well and not only about the y axis, if we assume that the y axis points to the front of the robot and the x axis to its sides.
Any ideas?
|
This is not really a problem but something strange is going on.
When the Create 2 is connected to a PC via the original USB connector lead and you start up the computer, the Create 2 is activated by the Baud Rate Change (BRC) pin being pulled to ground. If I understand correctly, this is normal behaviour.
My Create 2 is connected to an XBee via a buck converter. I added a switch so the buck converter and the XBee do not drain the battery continuously, as mentioned in the specs.
I followed the Bluetooth PDF for the connections; it's working well for sending commands, but I still have a few problems with streaming the return data, though that will be resolved.
But now, with the XBee switched off, my Create 2 still activates when I start up my PC. How is that possible? How can the BRC be pulled to ground?
There can be no communication between the PC XBee and the Create 2 XBee since the Create 2 XBee is switched off; only the PC XBee is switched on when starting the computer.
It's not a problem; it's just that I am puzzled. Can anyone explain why this is happening?
|
I'm developing a robotic hand, and decided to place motors inside joints (as in picture) and I'm stuck with finding a stepper motor that can fit there. Approximate size of motor body is radius - 10mm, length - 10 mm.
Any suggestions?
|
Given a pose $x_i = (t_i, q_i)$ with translation vector $t_i$ and rotation quaternion $q_i$ and a transform between poses $x_i$ and $x_j$ as $z_{ij} = (t_{ij}, q_{ij})$ I want to compute the error function $e(x_i, x_j) = e_{ij}$, which has to be minimized like this to yield the optimal poses $X^* = \{ x_i \}$:
$$X^* = argmin_X \sum_{ij} e_{ij}^T \Sigma^{-1}_{ij} e_{ij}$$
A naive approach would look like this:
$$ e_{ij} = z_{ij} - f(x_i,x_j) $$
where $z_{ij}$ is the current measurement of the transform between $x_i$ and $x_j$ and $f$ calculates an estimate for the same transform. Thus, $e_{ij}$ simply computes the difference of translations and difference of turning angles:
$$ e_{ij} = \begin{pmatrix} t_{ij} - t_j - t_i \\\ q_{ij} (q_j q_i^{-1})^{-1} \end{pmatrix} $$
Is there anything wrong with this naive approach? Am I missing something?
|
Could someone help me understand the logic behind choosing a particular state space vector for an EKF?
Context: Say there is a 4 wheeled robot that operates only in 2D. It is equipped with an inertial unit (a/g/m) and wheel encoders (I understand that these alone might not satisfy accuracy constraints, but consider this as a hypothetical case).
Now, some literature has the state as [q, x, y, vx, vy]' while a few others as [q, q_dot, x, y, vx, vy]'. My question is, what is the advantage with having certain 'rate terms' as opposed to only the normal parameters? Also, what about including bias terms in there?
How do I go about selecting an appropriate state space vector for any use-case (in general)? Is there a set of intuitive/mathematical steps to consider/follow?
Thanks!
|
I have a robot with 3 rotational joints that I am trying to simulate in a program I am creating. So I have 4 frames, one base frame, and each joint has a frame. I have 3 transformation functions to go from frame 1 or 2 or 3 to frame 0.
By using the transformation matrices, I want to know how much each frame has been rotated (about the X, Y and Z axes) compared with the base frame. Any suggestions?
The reason I want this is because I have made some simple 3D shapes that represent each joint. Using the DH parameters I made my transformation matrices. Whenever I change my θ (it does not matter how the θ changes, it just does), I want the whole structure to update. I take the translation from the last column. Now I want to get the rotations.
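A minimal sketch of one common way to extract the angles, assuming the convention $R = R_z(\text{yaw})\,R_y(\text{pitch})\,R_x(\text{roll})$ for the 3x3 rotation block of each transform (other conventions need different formulas, and the extraction degenerates at pitch = ±90°):

```cpp
#include <cmath>

struct Rpy { double roll, pitch, yaw; };   // rotations about X, Y, Z of the base frame

// r[i][j] is row i, column j of the rotation block of the homogeneous transform.
Rpy rotationToRpy(const double r[3][3])
{
    Rpy out;
    out.yaw   = std::atan2(r[1][0], r[0][0]);
    out.pitch = std::atan2(-r[2][0], std::sqrt(r[0][0]*r[0][0] + r[1][0]*r[1][0]));
    out.roll  = std::atan2(r[2][1], r[2][2]);
    return out;
}
```

Recomputing these three angles from each of the three transforms every time θ changes gives the orientation of each joint's shape relative to the base frame.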
|
With DC motors, it is common to put a freewheel diode and/or a capacitor in order to protect the equipment as the motor can induce current into the system.
I plan to use this board to control a 24V DC motor with an Arduino-like microcontroller. In an example in their documentation, they don't include such protection, so I wanted to know: is it unsafe, or does the board already protect the system?
The example in question:
|
I need to know if the iRobot Create 2 can be controlled with a NI myRIO that has been programmed through LabVIEW.
The goal is to program an autonomous robot for real-time tracking using a Kinect sensor.
|
What is an intuitive understanding of homotopy? At what stage is homotopy (which I understand as stretching or bending of a path) used in a planning algorithm? Is homotopy involved, for example, while implementing an algorithm like RRT?
|
I'm trying to build a car that is controlled by an Arduino. I'm using the following chassis: New 2WD car chassis DC gear motor, wheels easy assembly and expansion and an L298N motor driver.
The problem is that it's hard to make the car go straight. Giving the same PWM value to the motors still makes them spin at different speeds; trying to calibrate the value is hard, and every time I recharge my batteries the value changes.
What are my options on making the car go straight when I want (well, sometimes I'll want to turn it around of course)?
I've thought about using an encoder, but I wish to avoid that since it will complicate the whole project; is there any other viable option? And even when using an encoder, does it mean I will need to keep track all the time and always adjust the motor values continuously? Is there some built-in library for that?
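For reference, if encoders are used, the continuous adjustment can be as small as the sketch below (my own example; the read/set functions are placeholders for whatever encoder and motor-driver interface is available). Accumulating the tick difference acts like integral control on the speed mismatch, so it also tracks slow changes such as battery discharge:

```cpp
long readLeftTicks();                          // placeholder encoder-count readers
long readRightTicks();
void setMotorPwm(int leftPwm, int rightPwm);   // placeholder motor-driver call

int basePwm = 200;                             // requested forward speed (0-255)
int rightTrim = 0;                             // learned correction for the right motor
const float Kp = 0.5f;                         // how aggressively to correct the tick difference

// Call this periodically (e.g. every 50 ms) while driving forward.
void driveStraightStep() {
    static long lastL = 0, lastR = 0;
    long l = readLeftTicks();  long dl = l - lastL;  lastL = l;
    long r = readRightTicks(); long dr = r - lastR;  lastR = r;
    rightTrim += (int)(Kp * (dl - dr));        // right wheel slower -> raise its PWM
    setMotorPwm(basePwm, basePwm + rightTrim);
}
```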
|
What?
Put together here a list of books (like the one for C/C++ on StackOverflow) that are spiritually similar to Sebastian Thrun's Probabilistic Robotics for robotic manipulation and mechanics.
Why?
Thrun's book is a wonderful resource for implementable algorithms while also dealing with the mathematics/theory behind them. In somewhat similar vein for robotic mechanics there is "A Mathematical Introduction to Robotic Manipulation - S.Sastry, Z.Li and R.Murray" which has a lot of mathematical/theoretical content. What is missing however in this book are the algorithms concerned with how should/would one go about implementing the theoretical stuff.
Requirements
Ideally list books dealing with diverse areas of robotics.
The books have to present algorithms like what Thrun does in his book.
Algorithms presented have to be language agnostic and as much as possible not be based on packages like MATLAB in which case they should be categorized appropriately.
|
I am working on a 6DOF robot arm project and I have one big question. When I first derived the inverse kinematics (IK) algorithm after decoupling (spherical wrist), I could easily get the equations based on nominal DH values, where alpha are either 0 or 90 degrees and there are many zeros in $a_i$ and $d_i$. However, after kinematics calibration, the identified DH parameters are no longer ideal ones with a certain small, but non-zero, bias added to the nominal values.
So my question is, can the IK algorithm still be used with the actual DH parameters? If yes, definitely there will be end-effector errors in actual operation. If not, how should I change the IK algorithm?
P.S. I am working on a modular robot arm which means the DH bias could be bigger than those of traditional robot arms.
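One common option in this situation (a sketch of the general technique, not of any particular vendor's controller): keep the nominal closed-form IK as an initial guess, then refine it numerically against the calibrated model, for example with a damped least-squares iteration
$$
q_{k+1} = q_k + J^T(q_k)\left(J(q_k)J^T(q_k) + \lambda^2 I\right)^{-1}\big(x_d - f(q_k)\big),
$$
where $f$ is the forward kinematics using the identified DH parameters, $J$ its Jacobian, $x_d$ the desired end-effector pose, and $\lambda$ a small damping factor. A few iterations are usually enough when the calibrated parameters are close to nominal, so the closed-form solution still does most of the work and the residual end-effector error is removed numerically.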
|
In CATIA, the .stl format is available only for part files, not for assembly files. Please help me figure out how to import an assembly (.CATProduct) into SimMechanics as .stl.
Or is there any other way to do it?
|
Context: I have an IMU(a/g/m) + Wheel Odometry measurement data that I'm trying to fuse in order to localize a 2D (ackermann drive) robot.
The state vector X = [x y yaw].
I'm using the odometry data to propagate the state through time (no control input).
The update step includes the measurement vector Z = [x_odo y_odo yaw_imu].
I have two questions:
1. Does it make sense to use the odometry data (v_linear, omega) in both the prediction and the update steps?
2. How do I account for the frequency difference between the odometry data (10 Hz) and the IMU data (40 Hz)? Do I run the filter at the lower frequency, do I dynamically change the matrix sizes, or is there another way?
Thanks!
|
I am programming a robot to drive over variable terrain obstacles autonomously. The variable terrain could potentially knock the robot off its initial heading, but I would like to design an autonomous sequence to correct for any change in direction. I am using a very accurate sensor with compass and yaw. What is the best way to have it correct for any changes and maintain its heading? Side-to-side motion does not have to stay perfect, but the heading needs to stay the same. We are currently correcting it by overpowering one side of the wheels (depending on the direction of correction needed) until the heading is correct again, but this seems to be a slightly antiquated method, so I'm looking for a cleaner and smoother method.
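A common cleaner alternative is a proportional correction on the heading error applied continuously to both sides, rather than an on/off overpower of one side. A minimal sketch (my own example; the sensor and drive calls are placeholders, and the sign of the correction depends on the heading convention, so it may need flipping):

```cpp
#include <cmath>

double readHeadingDeg();                              // placeholder: current heading from the sensor
void setDrive(double leftPower, double rightPower);   // placeholder: drive output

// Wrap an angle difference into [-180, 180) so the robot always turns the short way.
double wrap180(double a) {
    while (a >= 180.0) a -= 360.0;
    while (a < -180.0) a += 360.0;
    return a;
}

// Call every control cycle while driving.
void holdHeadingStep(double targetDeg, double basePower) {
    const double Kp = 0.02;                           // drive-power units per degree of error
    double error = wrap180(targetDeg - readHeadingDeg());
    double correction = Kp * error;
    setDrive(basePower - correction, basePower + correction);
}
```

Adding a derivative term on the yaw rate (which the sensor already provides) damps the correction so the robot settles onto the heading smoothly instead of oscillating.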
|
I have a robot arm in an environment. How can I check for collision between this robot arm and the environment?
|
Good day,
I have recently been reading up more on PID controllers and stumbled upon something called integral windup. I am currently working on an autonomous quadcopter, concentrating at the moment on PID tuning. I noticed that even with the setpoint of zero degrees reached in this video, the quadcopter would still occasionally overshoot a bit: https://youtu.be/XD8WgVFfEsM
Here is the corresponding data testing the roll axis:
I noticed that the I-error does not converge to zero and continues to increase:
Is this the integral wind-up?
What is the most effective way to resolve this?
I have seen many implementations mainly focusing on limiting the output of the system by means of saturation. However I do not see this bringing the integral error eventually back to zero once the system is stable.
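For reference, a minimal sketch of one common mitigation, integrator clamping with optional conditional integration, written against the variable names used in the code below; the limit value is a placeholder and the clamp assumes pitchKi is non-zero:

```cpp
// Applied where the error sum is accumulated, once per loop pass.
float integralLimit = 50.0f;    // max magnitude the Ki * errorSum contribution may reach

pitchErrorSum += pitchError * deltaTime2;
// 1) Clamp the integral state so it cannot grow without bound while the output saturates.
if (pitchErrorSum >  integralLimit / pitchKi) pitchErrorSum =  integralLimit / pitchKi;
if (pitchErrorSum < -integralLimit / pitchKi) pitchErrorSum = -integralLimit / pitchKi;

// 2) Alternatively (conditional integration): skip or undo the accumulation whenever the
//    motor outputs were saturated on the previous pass and the error would drive them
//    further into saturation, e.g.
// if (outputSaturated && (pitchError * lastPitchPID > 0)) {
//     pitchErrorSum -= pitchError * deltaTime2;
// }
```

Note that in this system the integral term legitimately has to hold a non-zero steady value to counter the front-heaviness, so the clamp needs to be set well above that steady-state contribution; windup only refers to the extra accumulation beyond what the saturated outputs can act on.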
Here is my current code implementation with the setpoint of 0 degrees:
cout << "Starting Quadcopter" << endl;
float baseThrottle = 155; //1510ms
float maxThrottle = 180; //This is the current set max throttle for the PITCH YAW and ROLL PID to give allowance to the altitude PWM. 205 is the maximum which is equivalent to 2000ms time high PWM
float baseCompensation = 0; //For the Altitude PID to be implemented later
delay(3000);
float startTime=(float)getTickCount();
deltaTimeInit=(float)getTickCount(); //Starting value for first pass
while(1){
//Read Sensor Data
readGyro(&gyroAngleArray);
readAccelMag(&accelmagAngleArray);
//Time Stamp
//The while loop is used to get a consistent dt for the proper integration to obtain the correct gyroscope angles. I found that with a variable dt, it is impossible to obtain correct angles from the gyroscope.
while( ( ((float)getTickCount()-deltaTimeInit) / ( ((float)getTickFrequency()) ) ) < 0.005){ //0.00209715|0.00419
deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed
cout << " DT endx = " << deltaTime2 << endl;
}
//deltaTime2=((float)getTickCount()-deltaTimeInit)/(((float)getTickFrequency())); //Get Time Elapsed
deltaTimeInit=(float)getTickCount(); //Start counting time elapsed
cout << " DT end = " << deltaTime2 << endl;
//Complementary Filter
float pitchAngleCF=(alpha)*(pitchAngleCF+gyroAngleArray.Pitch*deltaTime2)+(1-alpha)*(accelmagAngleArray.Pitch);
float rollAngleCF=(alpha)*(rollAngleCF+gyroAngleArray.Roll*deltaTime2)+(1-alpha)*(accelmagAngleArray.Roll);
float yawAngleCF=(alpha)*(yawAngleCF+gyroAngleArray.Yaw*deltaTime2)+(1-alpha)*(accelmagAngleArray.Yaw);
//Calculate Orientation Error (current - target)
float pitchError = pitchAngleCF - pitchTarget;
pitchErrorSum += (pitchError*deltaTime2);
float pitchErrorDiff = pitchError - pitchPrevError;
pitchPrevError = pitchError;
float rollError = rollAngleCF - rollTarget;
rollErrorSum += (rollError*deltaTime2);
float rollErrorDiff = rollError - rollPrevError;
rollPrevError = rollError;
float yawError = yawAngleCF - yawTarget;
yawErrorSum += (yawError*deltaTime2);
float yawErrorDiff = yawError - yawPrevError;
yawPrevError = yawError;
//PID controller list
float pitchPID = pitchKp*pitchError + pitchKi*pitchErrorSum + pitchKd*pitchErrorDiff/deltaTime2;
float rollPID = rollKp*rollError + rollKi*rollErrorSum + rollKd*rollErrorDiff/deltaTime2;
float yawPID = yawKp*yawError + yawKi*yawErrorSum + yawKd*yawErrorDiff/deltaTime2;
//Motor Control - Mixing
//Motor Front Left (1)
float motorPwm1 = -pitchPID + rollPID - yawPID + baseThrottle + baseCompensation;
//Motor Front Right (2)
float motorPwm2 = -pitchPID - rollPID + yawPID + baseThrottle + baseCompensation;
//Motor Back Left (3)
float motorPwm3 = pitchPID + rollPID + yawPID + baseThrottle + baseCompensation;
//Motor Back Right (4)
float motorPwm4 = pitchPID - rollPID - yawPID + baseThrottle + baseCompensation;
//Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm that gets fed into the gpioPWM() function to avoid exceeding the earlier set maximum throttle while maintaining the ratios of the 4 motor throttles.
float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float minPWM = motorPWM[0];
int i;
for(i=0; i<4; i++){ // Get minimum PWM for filling
if(motorPWM[i]<minPWM){
minPWM=motorPWM[i];
}
}
cout << " MinPWM = " << minPWM << endl;
if(minPWM<baseThrottle){
float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors
cout << " Fill = " << fillPwm << endl;
motorPwm1=motorPwm1+fillPwm;
motorPwm2=motorPwm2+fillPwm;
motorPwm3=motorPwm3+fillPwm;
motorPwm4=motorPwm4+fillPwm;
}
float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float maxPWM = motorPWM2[0];
for(i=0; i<4; i++){ // Get max PWM for trimming
if(motorPWM2[i]>maxPWM){
maxPWM=motorPWM2[i];
}
}
cout << " MaxPWM = " << maxPWM << endl;
if(maxPWM>maxThrottle){
float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors
cout << " Trim = " << trimPwm << endl;
motorPwm1=motorPwm1-trimPwm;
motorPwm2=motorPwm2-trimPwm;
motorPwm3=motorPwm3-trimPwm;
motorPwm4=motorPwm4-trimPwm;
}
//PWM Output
gpioPWM(24,motorPwm1); //1
gpioPWM(17,motorPwm2); //2
gpioPWM(22,motorPwm3); //3
gpioPWM(18,motorPwm4); //4
|
Context: I am working with the SFU Mountain Dataset [http://autonomylab.org/sfu-mountain-dataset/]
The UGV image - via the SFU Mountain Dataset website:
I have used the following state update equations (Husky A200 - differential drive)
State update, from Probabilistic Robotics (Thrun et al.); [x' y' theta'] represents the state at the next time step.
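For reference, the simple Euler form of that update, using the forward velocity $v$ and yaw rate $\omega$ that the dataset provides directly (Thrun et al. also give an exact arc-based version):
$$
\begin{aligned}
x' &= x + v\,\Delta t\,\cos\theta \\
y' &= y + v\,\Delta t\,\sin\theta \\
\theta' &= \theta + \omega\,\Delta t
\end{aligned}
$$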
After plotting the x and y positions based on just the wheel encoder data (v_fwd and w -> the dataset provides these directly, instead on the vr and vl), the curve seems to be quite weird and unexpected.
Wheel Odometry Data - http://autolab.cmpt.sfu.ca/files/datasets/sfu-mountain-workshop-version/sfu-mountain-torrent/encoder-dry-a.tgz
Blue - Wheel Odom | Red - GPS
Actual path!
Question: Is the above curve expected (considering the inaccuracy of wheel odometry) or is there something I'm missing? If the wheel encoder data is that bad, will an EKF (odom + imu) even work?
PS: I'm not worried about the EKF (update step) just as yet. What concerns me more is the horrible wheel odometry data.
|
What kind of sensor can I use to identify which fruit it is (for example, a mango or an apple)? Moreover, is there any sensor that can identify different varieties of apples or mangoes?
|
I am trying to make custom parts that fit directly onto a servo. Doing this has proved more difficult than I expected so far.
I was hoping to avoid incorporating the provided servo horns into the 3D printed part, so I've been trying this method out. Below are images of my current test - a 3D printed attachment to the servo, with an indentation for an M3 nut (the servo accepts an M3 bolt) for attachment to the servo. The plastic ring doesn't have the spline (I don't think I can print that level of detail) but is tight around it. The top piece attaches to a 3/8" nut for use with the 3/8" threaded rod I had lying around.
So far, I'm having difficulty getting this setup to work at any level of torque; it just spins in place.
So... is this the correct approach? Am I going to have to design a piece with the servo horn inside of it to get the servo to connect? Are there better approaches I haven't considered?
|
I've recently come across the concept of using information gain (or a mutual information criterion) as a metric for minimizing entropy on a map to aid robotic exploration. I have a somewhat basic question about it.
A lot of papers that talk about minimizing entropy consider an example case of something like a laser scanner and try to compute the 'next best pose' so that the maximum entropy reduction is achieved. Usually this is phrased as "information gain based approaches help find the best spot to move the robot such that the most entropy is removed, using raycasting techniques, as opposed to frontier based exploration, which is greedy", etc. But I don't understand what the underlying reason is for information gain/entropy based exploration being better.
Let's say a robot is in a room with three walls and open space in front. Because of range limitations, it can only see two walls, so in frontier based exploration the robot has two choices: move towards the third wall and realize it's an obstacle, or move towards the open space and keep going. How does an information gain based method magically pick the open space frontier over the wall frontier? When we have no idea what's beyond our frontiers, how can raycasting even help?
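To make the comparison concrete, here is a minimal sketch of how such an information-gain score could be evaluated for a candidate pose on an occupancy grid (the grid structure and all parameters are made up for the sketch, not taken from any of the papers):

#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Occupancy probability per cell: 0.5 = unknown, ~0 free, ~1 occupied.
struct Grid {
    int w, h;
    std::vector<double> p;                       // row-major occupancy probabilities
    double at(int x, int y) const { return p[y * w + x]; }
    bool inside(int x, int y) const { return x >= 0 && x < w && y >= 0 && y < h; }
};

// Shannon entropy of one binary cell (in bits): maximal at p = 0.5 (unknown).
double cellEntropy(double p) {
    if (p <= 0.0 || p >= 1.0) return 0.0;
    return -p * std::log2(p) - (1.0 - p) * std::log2(1.0 - p);
}

// Expected information gain of sensing from cell (cx, cy): cast beams outward and
// sum the entropy of every cell a beam would traverse, stopping at likely obstacles.
double expectedGain(const Grid& g, int cx, int cy, double maxRange, int nBeams = 36) {
    double gain = 0.0;
    for (int b = 0; b < nBeams; ++b) {
        double ang = 2.0 * kPi * b / nBeams;
        for (double r = 1.0; r <= maxRange; r += 1.0) {
            int x = cx + static_cast<int>(std::round(r * std::cos(ang)));
            int y = cy + static_cast<int>(std::round(r * std::sin(ang)));
            if (!g.inside(x, y)) break;
            double occ = g.at(x, y);
            gain += cellEntropy(occ);            // unknown cells (p near 0.5) contribute most
            if (occ > 0.7) break;                // beam is blocked by a likely obstacle
        }
    }
    return gain;
}

In a sketch like this, a beam pointing at the nearby wall is cut short after a few cells, while a beam into open unknown space keeps accumulating high-entropy cells, which is where the scoring differs from a purely frontier-based choice.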
|
The prediction of a new point landmark is commonly expressed as:
$$
X_m = X_r + r\cos(\phi + \theta_r), \qquad Y_m = Y_r + r\sin(\phi + \theta_r)
$$
However, this is only true for point landmarks. What if I am extracting line features?
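For comparison, if a line is stored in the world frame in Hessian normal form $(\rho_w, \alpha_w)$ (perpendicular distance and normal angle), one common measurement prediction of that same line as seen from a robot at $(X_r, Y_r, \theta_r)$ is (this parameterization is my assumption, not necessarily the one intended here):
$$
\rho_m = \rho_w - \left(X_r\cos\alpha_w + Y_r\sin\alpha_w\right) \\
\alpha_m = \alpha_w - \theta_r
$$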
|
I've read some papers on controlling nonlinear systems (e.g. a nonlinear pendulum). There are several approaches for targeting nonlinear systems; the most common ones are feedback linearization, backstepping, and sliding mode controllers.
In my case, I've done the theoretical and practical parts of controlling the nonlinear model of a simple pendulum, plus other manipulator problems, in C++. For the pendulum, I've utilized a backstepping controller to solve the tracking task for the angular displacement and velocity. The pendulum model is
$$
\ddot{\theta} + (k/m) \dot{\theta} + (g/L) \sin\theta= u
$$
where $m=0.5, k=0.0001, L=.2$ and $g=9.81$.
The results are good. However, tuning the controller is time-consuming. The majority of papers use genetic algorithms for tuning their controllers, such as PD, PID, and backstepping controllers. I'm clueless in this field, and I hope someone can shed some light on this concept, preferably with a MATLAB sample for at least controlling a simple pendulum.
So far I've designed a simple GUI in C++/Qt in order to tune the controller manually. The picture below shows the response of the controller to a step input.
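As a rough illustration of the genetic-algorithm idea (a sketch only: the PID structure, cost function, and all GA parameters below are assumptions, not taken from any particular paper), the tuning loop just wraps a closed-loop simulation of the pendulum above in a cost function and evolves the gain vector:

#include <algorithm>
#include <array>
#include <cmath>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

// Plant parameters from the question.
const double m = 0.5, k = 0.0001, L = 0.2, g = 9.81;

using Gains = std::array<double, 3>;   // Kp, Ki, Kd of the example PID controller

// Closed-loop simulation of the pendulum tracking a step to theta_ref.
// Cost = integral of squared error plus a small control-effort penalty.
double cost(const Gains& gn, double theta_ref = 1.0, double dt = 0.001, double T = 5.0) {
    double th = 0.0, w = 0.0, ierr = 0.0, prev_e = theta_ref, J = 0.0;
    for (double t = 0.0; t < T; t += dt) {
        double e = theta_ref - th;
        ierr += e * dt;
        double u = gn[0] * e + gn[1] * ierr + gn[2] * (e - prev_e) / dt;
        prev_e = e;
        double acc = u - (k / m) * w - (g / L) * std::sin(th);  // pendulum dynamics
        w  += acc * dt;
        th += w * dt;
        J  += (e * e + 1e-4 * u * u) * dt;
    }
    return J;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> init(0.0, 50.0);
    std::normal_distribution<double> mut(0.0, 1.0);
    const int pop = 30, gens = 60;

    std::vector<Gains> P(pop);
    for (auto& gn : P) gn = {init(rng), init(rng), init(rng)};

    for (int gen = 0; gen < gens; ++gen) {
        // Rank the population by cost.
        std::vector<std::pair<double, Gains>> scored;
        for (const auto& gn : P) scored.push_back({cost(gn), gn});
        std::sort(scored.begin(), scored.end(),
                  [](const std::pair<double, Gains>& a, const std::pair<double, Gains>& b) {
                      return a.first < b.first;
                  });
        // Elitism: keep the best third, refill the rest by crossover + mutation.
        std::uniform_int_distribution<int> pick(0, pop / 3 - 1);
        for (int i = 0; i < pop; ++i) {
            if (i < pop / 3) { P[i] = scored[i].second; continue; }
            const Gains& pa = scored[pick(rng)].second;
            const Gains& pb = scored[pick(rng)].second;
            for (int j = 0; j < 3; ++j)
                P[i][j] = std::max(0.0, 0.5 * (pa[j] + pb[j]) + mut(rng));
        }
    }
    std::cout << "best gains (Kp, Ki, Kd): "
              << P[0][0] << " " << P[0][1] << " " << P[0][2] << "\n";
}

The same wrapper would work for backstepping gains; only the controller line inside cost() changes.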
|
The goal is to have a non-invasive flow meter that I can clamp over hydraulic lines.
As a student of hydraulics, I ended up looking and poking around for a good way to make an ultrasonic flow sensor with an Arduino and possibly the HC-SR04. I'm not married to either idea.
So, I admit, I know nothing, but is it possible to do this?
Is there an easier way?
|
I have created a robot model in SolidWorks and exported it using the SolidWorks-to-URDF plug-in. When exporting, the coordinates of the model are misaligned, which causes problems when using it in ROS.
As you can see in the picture, the Z-axis is horizontal in the exported assembly whereas it is vertical in SolidWorks. How do I align these coordinates? The generated coordinate system must match SolidWorks' coordinates.
PS: I have mated the assembly origin and the base_link origin.
|
I am trying to implement the Vector Field Histogram as described by Borenstein, Koren, 1991 in Python 2.7 using the SciPy stack.
I have already been able to calculate the polar histogram, as described in the paper, as well as the smoothing function to eliminate noise. This variable is stored in a numpy array, named self.Hist.
However, the function computeTheta, pasted below, which computes the steering direction, is only able to compute the proper direction if the valleys (i.e. consecutive sectors in the polar histogram whose obstacle density is below a certain threshold) do not contain the section where a full circle is completed, i.e. the sector corresponding to 360º.
To make things clearer, consider these two examples:
If the histogram contains a peak in the angles between, say, 330º and 30º, with the rest of the histogram being a valley, then the steering direction will be computed correctly.
If, however, the peak is contained between, say, 30º and 60º, then the valley will start at 60º, go all the way past 360º and end in 30º, and the steering direction will be computed incorrectly, since this single valley will be considered two valleys, one between 0º and 30º, and another between 60º and 360º.
def computeTheta(self, goal):
    thrs = 2.
    s_max = 18
    #We start by calculating the sector corresponding to the direction of the target.
    target_sector = int((180./np.pi)*np.arctan2(goal[1] - self.VCP[1], goal[0] - self.VCP[0]))
    if target_sector < 0:
        target_sector += 360
    target_sector /= 5
    #Next, we assume there is no best sector.
    best_sector = -1
    dist_best_and_target = abs(target_sector - best_sector)
    #Then, we find the sector within a valley that is closest to the target sector.
    for k in range(self.Hist.shape[0]):
        if self.Hist[k] < thrs and abs(target_sector - k) < dist_best_and_target:
            best_sector = k
            dist_best_and_target = abs(target_sector - k)
    #If the sector is still -1, we return it as an error.
    print (target_sector, best_sector)
    if best_sector == -1:
        return -1
    #If not, we can proceed...
    elif best_sector > -1:
        #... by deciding whether the valley to which the best sector belongs is a "wide" or a "narrow" one.
        #Assume it's wide.
        type_of_valley = "Wide"
        #If we find a sector that contradicts our assumption, we change our minds.
        for sector in range(best_sector, best_sector + s_max + 1):
            if sector < self.Hist.shape[0]:
                if self.Hist[sector] > thrs:
                    type_of_valley = "Narrow"
        #If it is indeed a wide valley, we return the angle corresponding to the sector (k_n + s_max)/2.
        if type_of_valley == "Wide":
            theta = 5*(best_sector + s_max)/2
            return theta
        #Otherwise, we find the far border of the valley and return the angle corresponding
        #to the mean value between the best sector and the far border.
        elif type_of_valley == "Narrow":
            for sector in range(best_sector, best_sector + s_max):
                if self.Hist[sector] < thrs:
                    far_border = sector
            theta = 5*(best_sector + far_border)/2
            return theta
How can I address this issue? Is there a way to treat the histogram as circular? Is there maybe a better way to write this function?
Thank you for your time.
|
I've hacked an RC helicopter, and I am able to control it by running a program on my computer. I am interested in writing algorithms that will stabilize the helicopter. For instance, if the helicopter is hovering and then gets shoved off balance, it should return to its previous position in a stable state. Any help on an algorithm would be awesome.
|
My iRobot Create is playing a tune about every 30 seconds and continuously flashing a red light when I attempt to charge it. What is the issue?
|
I am working with a sampling-based planning library. When I looked into the implementation, I found that for the kinematic car an SE2 state space (x, y, yaw) is used, for the dynamic car an SE2 compound state space (a space allowing composition of state spaces), and for the blimp and quadrotor an SE3 compound state space. I can understand the design of the SE2 and SE3 state spaces, but I cannot comprehend or differentiate the compound state spaces of the dynamic car, blimp, and quadrotor.
What is the difference in terms of state space for motion planning for kinematic car, dynamic car, blimp, and quadrotor?
|
I have to program an autonomous bot (using an ATmega2560). It has a 4-digit seven-segment display attached to it. I have to make the bot traverse the arena while continuously displaying the time in seconds on the seven-segment display.
I can't put the code that drives the seven-segment display in my main() function.
Any help?
|
I have a few robotic manipulators from NakkaNippon Electric and I'm trying to communicate with them over RS232, without success. The robot models are microrobots 88-4 or 88-5.
I'm sending commands via the COM port, but I can't receive anything from the box. I'm using a USB-to-DB9 converter (FTDI) with a DB9-DB25 cable.
On the net, the only reference I have for the robot is an old post from the year 2000 by user @peterkneale.
If it can help, here is the link to the scanned PDF manual.
You can see the commands on page 23-24 of the pdf (page 21-22 in document).
Any advice would be greatly appreciated.
|
I'm trying to determine the wind flow diagram around a quadcopter when it is in action. I looked it up on the internet but couldn't find any reliable source.
By wind flow diagram, I mean: when my quadcopter is in mid-air, hovering at some fixed position, how is the air moving around it? All directions need to be kept in mind, from top to bottom (vertical) and also horizontal.
|
Is there a way to charge a Li-Po battery using solar panels to increase the flight time of a quadcopter during its flight?
|
I am working with the Baxter robot. I have a first arm configuration and a bunch of other arm configurations, and I want to find which of the other configurations is closest to the first. The trick here is that the end effector location/orientation is exactly the same for all the arm configurations; they are just different IK solutions. Can anyone point me in the right direction on this? Thank you.
|
Background: I'm using the L3GD20H MEMS gyroscope with an Arduino through a library (Pololu L3G) that in turn relies on interrupt-driven I2C (Wire.h); I'd like to be able to handle each new reading from the sensor to update the calculated angle in the background using the data ready line (DRDY). Currently, I poll the STATUS register's ZYXDA bit (which is what the DRDY line outputs) as needed.
General question: With some digital output sensors (I2C, SPI, etc.), their datasheets and application notes describe using a separate (out-of-band) hardware line to interrupt the microcontroller and have it handle new sets of data. But on many microcontrollers, retrieving data (let alone clearing the flag raising the interrupt line) requires using the normally interrupt-driven I2C subsystem of a microcontroller. How can new sensor data be retrieved from the ISR for the interrupt line when also using the I2C subsystem in an interrupt-driven manner?
Possible workarounds:
Use nested interrupts (as @hauptmech mentioned): re-enable the I2C interrupt inside the ISR. Isn't this approach discouraged?
Use non-interrupt-driven I2C (polling) - supposedly a dangerous approach inside ISRs. The sensor library used depends on the interrupt-driven Wire library.
[Edit: professors' suggestion] Use a timer interrupt set to the sample rate of the sensor (which is settable and constant, although we measure it to be e.g. 183.3Hz rather than 189.4Hz per the datasheet). Handling the I2C transaction still requires re-enabling interrupts, i.e. nested interrupts or performing I2C reads from the main program.
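For illustration, the "do the I2C read from the main program" variant might look like the sketch below (the pin number and the Pololu L3G calls are assumptions); the ISR only sets a flag, so no I2C traffic happens with interrupts disabled:

#include <Wire.h>
#include <L3G.h>                  // Pololu L3G library (init(), read(), g.x/y/z assumed)

L3G gyro;
const byte DRDY_PIN = 2;          // assumption: DRDY wired to an external-interrupt pin
volatile bool dataReady = false;  // shared between the ISR and loop()

// The ISR does no I2C at all; it only records that a new sample is waiting.
void onDrdy() { dataReady = true; }

void setup() {
  Wire.begin();
  gyro.init();
  gyro.enableDefault();           // note: enabling the DRDY output itself (CTRL3) is omitted here
  pinMode(DRDY_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(DRDY_PIN), onDrdy, RISING);
}

void loop() {
  if (dataReady) {                // deferred handling: interrupts are enabled here,
    dataReady = false;            // so the interrupt-driven Wire transfer can complete
    gyro.read();                  // reading the output registers also clears DRDY
    // ... integrate gyro.g.x / g.y / g.z into the angle estimate ...
  }
  // other non-time-critical work
}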
[Edit:] Here's a comment I found elsewhere on a similar issue that led me to believe that the hang encountered was from I2C reads failing inside an interrupt handler: https://www.sparkfun.com/tutorials/326#comment-4f4430c9ce395fc40d000000
…during the ISR (Interrupt Service Routine) I was trying to read the
device to determine which bit changed. Bad idea, this chip uses the
I2C communications which require interrupts, but interrupts are turned
off during an ISR and everything goes kinda south.
|
I need some help here because I can't figure out how the Unscented Kalman Filter works.
I've searched for examples, but all of them are too hard to understand.
Could someone please explain how it works, step by step, with a trivial example like position estimation, sensor fusion, or something else?
|
I am not sure if this is the best place to ask this question, but hopefully someone here can give me some advice. I have a device hooked up to a data acquisition system that can provide a sync-out signal and record sync-in signals. I need to synchronize my recordings from this device with a video feed. I am having trouble finding a camera that can provide a sync signal, or any other good way to accomplish this. Thanks for your help.
|
I have been working with the KUKA LBR iiwa 7 R800 robot and KUKA's IDE, Sunrise.Workbench. Since it does not provide any virtual platform to verify (simulate) the code, it's been quite difficult, as I need to test every change by deploying it to the robot.
Can anyone suggest whether there is any simulation software available where I can test code written using the Robotics API in Sunrise.Workbench?
I came across the V-REP simulation software, but I'm not sure whether I can use my Workbench code with it.
Appreciate if anyone can shed some light on it.
Thanks in advance.
|
The MATLAB Compiler SDK allows you to create a wrapper for a MATLAB function that can be accessed by Java software. Based on my understanding, KUKA's Sunrise.Workbench IDE uses most of the standard Java functions.
I was trying to load the package generated using the MATLAB Compiler SDK (the new version of MATLAB Builder JA) into the Workbench platform. I could successfully load the package into the Eclipse IDE, but not into Workbench.
The reason for using the Compiler SDK is that I have some functions in MATLAB, and I want to use them in the Workbench programming.
Does anyone have experience with the same? Appreciate any help.
|
Has anyone ever run into a case where a fresh install of ROS cannot run its tutorial packages?
I am running ROS Indigo on an nVidia Jetson TK1, using the nVidia-supplied Ubuntu image. I just did a fresh install, Ubuntu and ROS, just to keep things clean for this project. I am building a kind of 'demo-bot' for some students I will be teaching; it will use both the demo files and some of my own code. Now, after setting things up, I try to run the talker tutorial just to check to make sure that everything is running, and rospack is pretty sure that the tutorials don't exist.
For example, inputting this into the terminal
rosrun rospy_tutorials talker
Outputs
[rospack] Error: package 'rospy_tutorials' not found
This is the case for every tutorial file; python and C++. Now, I am sure the tutorials are installed. I am looking right at them in the file system, installed from the latest versions on github. So I think it is something on ROS' side of things.
Has anyone ever bumped into something similar before, where a ROS package that supposedly was installed correctly isn't found by ROS itself? I would rather not have to reinstall again if I can avoid it.
EDIT #1
After playing with it some more, I discovered that multiple packages were not running. All of them - some turtlebot code, and some of my own packages - returned the same error as above. So I suspect something got messed up during the install of ROS.
roswtf was able to run, but it did not detect any problems.
EDIT #2
I double checked the bashrc file. One export was missing, for the ROS directory I was trying to work in. Adding it did not solve the problem.
I am still looking for a solution, that hopefully does not involve reflashing the TK1.
EDIT #3
Alright, so I've been poking at this for a few days now and pretty much gave up trying to get ROS to work correctly, and decided a re-flash was necessary. But I think I found something when I booted up my host machine. In my downloads folder, I have the v2.0 and the v1.2 JetPack. I know I used v2.0 for this latest install, and it has been the only time I have used it (it provides some useful updates for OpenCV and bug fixes, among other things). I'm going to re-flash using the v1.2 JetPack this time and see if things behave better with ROS under that version. It's a long shot, but it is all I have to work with at the moment, and it shouldn't lose any ROS capabilities (aside from some of the stuff I wanted to do with OpenCV). I'll update everyone if that seems to work.
EDIT #4
Ok, everything seems to be working now. The problem does seem to be an issue with JetPack v2.0. I suspect that some change, somewhere between v1.2 and v2.0 (made to accommodate the new TX1 board), messes with running ROS Indigo on a TK1. I'm going to give a more detailed explanation in an answer to this question.
|
I am designing a new mechanism similar to a robot arm. It would be a 6- or 7-axis arm, but not the same as traditional articulated arms. As a result, a new DH matrix and new inverse kinematics are involved. I would like to ask the robot professionals in this forum: do you suggest any simulation tool for this mechanism?
I plan to start with start and end points. Then I will do a trapezoidal velocity plan and take sample points along the path at the sampling time. After that, I would like to transform these sample points into motor joint values via the DH matrix and inverse kinematics. Finally, I would do some basic 3D animation to visualize the movement over time. I do not plan to simulate controller behavior, because in my application the motor drivers deal with it; I only need to focus on sending reasonable commands to the motor drivers.
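For the velocity plan, a minimal trapezoidal-profile sketch (my own helper, independent of any toolbox) could look like this:

#include <cmath>

// Trapezoidal velocity profile along a path of length D; falls back to a
// triangular profile when D is too short to reach vmax. s(t) is the arc length.
struct TrapezoidProfile {
    double D, vmax, amax;   // path length, velocity limit, acceleration limit
    double ta, tc, T;       // acceleration time, cruise time, total time

    TrapezoidProfile(double D_, double vmax_, double amax_)
        : D(D_), vmax(vmax_), amax(amax_) {
        ta = vmax / amax;
        if (amax * ta * ta > D) {       // no room for a cruise phase
            ta = std::sqrt(D / amax);
            vmax = amax * ta;
            tc = 0.0;
        } else {
            tc = (D - amax * ta * ta) / vmax;
        }
        T = 2.0 * ta + tc;
    }

    double s(double t) const {          // arc length travelled at time t
        if (t < 0.0) t = 0.0;
        if (t > T)   t = T;
        if (t < ta)      return 0.5 * amax * t * t;                     // accelerate
        if (t < ta + tc) return 0.5 * amax * ta * ta + vmax * (t - ta); // cruise
        double td = T - t;                                              // decelerate
        return D - 0.5 * amax * td * td;
    }
};

Sampling s(k·Δt) along the Cartesian path and pushing each sample through the inverse kinematics then gives the joint setpoints to send to the drivers.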
In my opinion, MATLAB, Octave, VC++, and some third-party tools are candidates. Starting from scratch would be time-consuming. I would appreciate it if any experts could share a tool or open-source code from their experience. I did some searching on the MATLAB Robotics Toolbox, but I am not sure it fits my need because it is expensive and optimized for ROS. In Octave there are also some robotics toolboxes, but I am not sure what they can and cannot do.
|
Good day,
I am currently creating an autonomous quadcopter using a cascaded PID controller, specifically a P-PID controller with angles as setpoints for the outer loop and angular velocities for the inner loop. I just finished tuning the roll PID last week with only ±5 degrees of error; it is very stable and is able to withstand disturbances by hand. I was able to tune it quickly over two nights; however, the pitch axis is a different story.
Introduction to the Problem:
The pitch is asymmetrical in weight (front heavy due to the stereo vision cameras placed in front). I have tried to move the battery backwards to compensate; however, due to the constraints of the DJI F450 frame, it is still front heavy.
In a PID controller for an asymmetrical quadcopter, the I-gain is responsible for compensating as it is the one able to "remember" the accumulating error.
Problem at Hand
While tuning the pitch gains, I could not tune them further due to irregular oscillations, which made it hard for me to pinpoint whether they come from too high a P, I, or D gain. The quadcopter pitch PID settings are currently Prate=0.0475, Irate=0.03, Drate=0.000180, Pstab=3, giving an error of ±10 degrees around the 15-degree angle setpoint. Here is the data with the corresponding video.
RATE Kp = 0.0475, Ki = 0.03, Kd = 0.000180 STAB Kp=3
Video: https://youtu.be/NmbldHrzp3E
Plot:
Analysis of Results
It can be seen that the controller is saturating.
The motor controller is currently set to limit the PWM pulse used to control the ESC throttle to only 1800 µs, or 180 in the code (the maximum is 2000 µs, or 205), with the minimum set at 155, or 1115 µs (enough for the quad to lift itself up and feel weightless). I did this to make room for tuning the altitude/height PID controller while maintaining the throttle ratios of the 4 motors from their PID controllers.
Is there something wrong with my implementation of limiting the maximum throttle?
Here is the implementation:
//Check if PWM is Saturating - This method is used to fill then trim the outputs of the pwm
//that gets fed into the gpioPWM() function to avoid exceeding the earlier set maximum
//throttle while maintaining the ratios of the 4 motor throttles.
float motorPWM[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float minPWM = motorPWM[0];
int i;
for(i=0; i<4; i++){ // Get minimum PWM for filling
    if(motorPWM[i]<minPWM){
        minPWM=motorPWM[i];
    }
}
cout << " MinPWM = " << minPWM << endl;

if(minPWM<baseThrottle){
    float fillPwm=baseThrottle-minPWM; //Get deficiency and use this to fill all 4 motors
    cout << " Fill = " << fillPwm << endl;
    motorPwm1=motorPwm1+fillPwm;
    motorPwm2=motorPwm2+fillPwm;
    motorPwm3=motorPwm3+fillPwm;
    motorPwm4=motorPwm4+fillPwm;
}

float motorPWM2[4] = {motorPwm1, motorPwm2, motorPwm3, motorPwm4};
float maxPWM = motorPWM2[0];
for(i=0; i<4; i++){ // Get max PWM for trimming
    if(motorPWM2[i]>maxPWM){
        maxPWM=motorPWM2[i];
    }
}
cout << " MaxPWM = " << maxPWM << endl;

if(maxPWM>maxThrottle){
    float trimPwm=maxPWM-maxThrottle; //Get excess and use this to trim all 4 motors
    cout << " Trim = " << trimPwm << endl;
    motorPwm1=motorPwm1-trimPwm;
    motorPwm2=motorPwm2-trimPwm;
    motorPwm3=motorPwm3-trimPwm;
    motorPwm4=motorPwm4-trimPwm;
}
Possible solutions
I have two possible solutions in mind:
I could redesign the camera mount to be lighter by 20-30 grams, so the quad is less front heavy.
I could increase the maximum throttle, but that possibly leaves less room for the altitude/throttle control.
Does anyone know the optimum solution for this problem?
Additional information
The quadcopter weighs about 1.35 kg, and the motor/ESC set from DJI (E310) is rated up to 2.5 kg, with the recommended thrust per motor at 350 g (1.4 kg total). A real-world test here showed that it is capable of 400 g per motor with a setup at 1600 g take-off weight.
How I tune the roll PID gains
I first set the rate PID gains, at a setpoint of zero deg/s:
Set all gains to zero.
Increase P gain until response of the system to disturbances is in steady oscillation.
Increase D gain to remove the oscillations.
Increase I gain to correct long term errors or to bring oscillations to a setpoint (DC gain).
Repeat until desired system response is achieved
When I was using the single-loop PID controller, I checked the data plots during testing and made adjustments such as increasing Kd to minimize oscillations and increasing Ki to bring the oscillations to the setpoint. I follow a similar process with the cascaded PID controller.
The reason the rate PID gains are small is that a rate Kp of 0.1, with the other gains at zero, already started to oscillate wildly (a characteristic of too high a P gain). https://youtu.be/SCd0HDA0FtY
I set the rate PIDs such that the quad would maintain the angle I physically placed it at (setpoint of 0 degrees per second).
I then used only a P gain in the outer stabilize loop to translate the angle setpoint into a rate setpoint for the inner rate PID controller.
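In other words, the cascade I am describing is (my shorthand notation):
$$
\dot\theta_{sp} = K_{p,\text{stab}}\,(\theta_{sp} - \theta), \qquad
u = K_{p,\text{rate}}\,e + K_{i,\text{rate}}\int e\,dt + K_{d,\text{rate}}\,\dot e, \qquad
e = \dot\theta_{sp} - \dot\theta
$$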
Here is the roll axis at 15 degrees set point https://youtu.be/VOAA4ctC5RU
Rate Kp = 0.07, Ki = 0.035, Kd = 0.0002 and Stabilize Kp = 2
It is very stable; however, the reaction time/rise time is too slow, as is evident in the video.
|
I recently bought a set of ESCs, brushless outrunner motors, and propellers. I'm trying to perform a calibration on the ESCs, but I can't find out how to do that without using components other than the Arduino Uno itself. The setup I've managed to make is the one shown in the picture. The ESCs are a mystery, as there is no manual to be found. If it helps, the purchase link is this: http://www.ebay.co.uk/itm/4x-A2212-1000KV-Outrunner-Motor-4x-HP-30A-ESC-4x-1045-prop-B-Quad-Rotor-/111282436897
There might also be a problem with the battery (LiPo 3.7V, 2500mAh).
Can anybody figure out what I'm doing wrong?
The sample arduino code I found was this:
#include <Servo.h>
#define MAX_SIGNAL 2000
#define MIN_SIGNAL 700
#define MOTOR_PIN 9
Servo motor;
void setup() {
Serial.begin(9600);
Serial.println("Program begin...");
Serial.println("This program will calibrate the ESC.");
motor.attach(MOTOR_PIN);
Serial.println("Now writing maximum output.");
Serial.println("Turn on power source, then wait 2 seconds and press any key.");
motor.writeMicroseconds(MAX_SIGNAL);
// Wait for input
while (!Serial.available());
Serial.read();
// Send min output
Serial.println("Sending minimum output");
motor.writeMicroseconds(MIN_SIGNAL);
}
void loop() {
}
|
I am working on a micro dispensing system, using a syringe pump. The design involves a syringe on top moved by a stepper motor. There would be one liquid reservoir from which the syringe pulls liquid, and the liquid is then ejected from the other end.
When we pull the syringe, the liquid is sucked into the syringe while the other opening is shut. When the syringe is pushed, the liquid is ejected from the other end.
The quantity of liquid to be dispensed would be very small (400 mg), so I am using a small syringe of 1 or 2 ml. As per my measurement, after every 100 dispensing operations the 1 ml syringe would be empty, and we would need to pull liquid from the reservoir into the syringe and do the dispensing again.
My question is about the check valve here, which I am unsure about. Is there a single check valve available which would allow this kind of flow to happen?
|
I want to build a low-cost robot running ROS, for educational purposes. It could be a simple line follower using a Raspberry Pi and an IR sensor. Is that overambitious as a beginner project? How difficult is it to get ROS running on custom hardware?
P.S. I am a newbie in both robotics and programming, and I am more interested in building actual robots than running simulations. Also, I can't afford to buy ROS-compatible robots.
|
I started to use ROS Hydro (Robot Operating System) on Ubuntu, with the Gazebo simulator and the roscpp library, in order to program some robots.
For the case of picking up and placing known objects with a robot, what are the object-perception topics for the PR2 in ROS?
|
Some questions about this; my friends and I have been arguing over this problem.
Are operational space and joint space dependent on each other?
I know that $x_e$ (the end effector's position) and $q$ (the joint variables) can be related by an equation with a non-linear function $k$:
$x_e = k(q)$
But I don't think that tells us the operational space and joint space are dependent.
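For context, the other standard relation between the two spaces is the differential map through the Jacobian (textbook form, added only as a reference point):
$$
\dot{x}_e = J(q)\,\dot{q}, \qquad J(q) = \frac{\partial k(q)}{\partial q}
$$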
|
I started to use ROS Hydro (Robot Operating System) and roscpp.
I tested some examples to move the PR2's gripper in Gazebo (especially the code in: http://wiki.ros.org/pr2_gripper_sensor_action/Tutorials/Grab%20and%20Release%20an%20Object%20Using%20pr2_gripper_sensor_action ) with a catkin package.
I launch: roslaunch pr2_gazebo pr2_empty_world.launch
and when I run the node with: rosrun pack_name node_name, I get: "Waiting for the r_gripper_sensor_controller/gripper_action action server to come up ..." repeatedly.
I want to know the cause of those lines so that I can see the results. What should I do?
It is also notable that when I launch: roslaunch pr2_gripper_sensor_action pr2_gripper_sensor_actions.launch
from the previous link, I get:
[pr2_gripper_sensor_actions.launch] is neither a launch file in package [pr2_gripper_sensor_action] nor is [pr2_gripper_sensor_action] a launch file name
|
I have to know where a multi-rotor is, in a rectangular room, via 6 lasers, 2 on each axis.
The problem is like this:
Inputs :
Room : square => 10 meters by 10 meters
6 positions of the lasers : Fixed on the frame
6 orientations of the lasers : Fixed on the frame
The 6 measurements of the lasers
The quaternion from the IMU of my flight controller (PixHawk).
The origin is centered on the gravity center of the multi-rotor and defined such that the walls are perpendicular to the axes (the normal of the wall in X is (-1,0,0)).
Output :
Position in 3D (X,Y,Z)
Angular position (quaternion)
Since I have the angular position of the multi-rotor, I rotated the laser positions and orientations via the quaternion, then extrapolated via the 6 measurements and obtained the 3 walls (the orientations of the walls are trivial, so only one point is enough to determine each wall's position).
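For concreteness, that "rotate the lasers by the IMU quaternion, then extend by the measured range" step looks roughly like this (Eigen types; the laser table and wall normals are placeholders):

#include <Eigen/Dense>
#include <array>

struct Laser {
    Eigen::Vector3d origin;     // mounting position in the body frame
    Eigen::Vector3d dir;        // unit pointing direction in the body frame
};

// Point each laser hits, expressed in the body-centred but wall-aligned frame.
// q is the attitude from the flight controller, ranges are the 6 measurements.
std::array<Eigen::Vector3d, 6> hitPoints(const Eigen::Quaterniond& q,
                                         const std::array<Laser, 6>& lasers,
                                         const std::array<double, 6>& ranges) {
    std::array<Eigen::Vector3d, 6> pts;
    for (int i = 0; i < 6; ++i) {
        Eigen::Vector3d o = q * lasers[i].origin;              // rotate the mounting point
        Eigen::Vector3d d = (q * lasers[i].dir).normalized();  // rotate the beam direction
        pts[i] = o + ranges[i] * d;                            // point on the wall
    }
    return pts;
}

// With a wall of known unit normal n (e.g. (-1, 0, 0) for the X wall) and a hit
// point p on it, the signed distance from the vehicle centre to that wall is n.dot(p),
// which gives the per-axis position estimate described above.
double distanceToWall(const Eigen::Vector3d& n, const Eigen::Vector3d& p) {
    return n.dot(p);
}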
Unfortunately, I noticed that the yaw (rotation about z) measurement from the PixHawk is unreliable, so I should measure the yaw from the lasers, but I have not succeeded in doing so. Even though the 2D problem is easy, I am lost in 3D.
Does someone know if an algorithm to obtain the XYZ position and quaternion from 6 measurements exists somewhere? Or what is the right way to approach this problem?
The question: how can I get the yaw from the measurements of 2 lasers whose original position and orientation I know, together with the pitch and roll?
NOTE: Green pointers are the original position; red pointers are the "final" position, which could be rotated around the red circle (due to yaw).
|
UPDATE
I have added a 50-point bounty for this question on Stack Overflow.
I am trying to implement object tracking from the camera (just one camera, no Z info). The camera has 720*1280 resolution, but I usually rescale it to 360*640 for faster processing.
The tracking is done from the robot's camera, and I want a system that is as robust as possible.
I will list what I did so far and what the results were.
I tried to do colour tracking: I would convert the image to the HSV colour space, do thresholding and some morphological transformations, and then find the object with the biggest area. This approach gave fair tracking of the object, unless there were other objects with the same colour. As I was looking for the maximum area, if any other object of that colour was bigger than the one I need, the robot would go towards the bigger one.
Then, I decided to track circular objects of a specific colour. However, they were difficult to detect under different angles.
Then, I decided to track square objects of a specific colour. I used this:
// Approximate contour with accuracy proportional
// to the contour perimeter
cv::approxPolyDP(
cv::Mat(contours[i]),
approx,
cv::arcLength(cv::Mat(contours[i]), true) * 0.02,
true
);
and then I checked this condition
if (approx.size() >= 4 && approx.size() <= 6)
and afterwards I checked for
solidity > 0.85 and aspect ratio between 0.85 and 1.15
But the result is still not as robust as I would expect, especially regarding size. If there are several squares, it does not find the needed one.
So now I need some suggestions on what other features of the object I could use to improve tracking, and how. As I mentioned above several times, one of the main problems is size. I know the size of the object; however, I am not sure how I can make use of it, because I do not know the distance of the object from the camera, and that is why I am not sure how to express its size in pixels so that I can eliminate any other blobs that do not fall into that range.
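For reference, the colour-based stage from the first approach boils down to something like this (a sketch with placeholder threshold values, not the exact code running on the robot):

#include <opencv2/opencv.hpp>
#include <vector>

// Returns the bounding box of the largest blob inside the given HSV range,
// or an empty Rect if nothing passes the threshold.
cv::Rect trackLargestBlob(const cv::Mat& bgr,
                          const cv::Scalar& hsvLo, const cv::Scalar& hsvHi) {
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, hsvLo, hsvHi, mask);

    // Morphological closing (twice), as in the description above.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    double bestArea = 0.0;
    cv::Rect best;
    for (size_t i = 0; i < contours.size(); ++i) {
        double a = cv::contourArea(contours[i]);
        if (a > bestArea) {          // "biggest area wins" - exactly the failure mode seen
            bestArea = a;            // when a larger blob of the same colour is in view
            best = cv::boundingRect(contours[i]);
        }
    }
    return best;
}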
UPDATE
In the third step, I described how I am going to detect squares of a specific colour. Below are examples of what I am getting.
I used this HSV range for the red colour:
Scalar(250, 129, 0), Scalar(255, 255, 255), params to OpenCV's inRange function
HMIN = 250, HMAX = 255; SMIN = 129, SMAX = 255; VMIN = 0, VMAX = 255;
(I would like to see your suggestions on tweaking these values as well.)
So, in this picture you can see the processing: Gaussian blurring (5*5) and
morphological closing two times (5*5). The image with the label "result" shows the tracked object (please look at the green square).
On the second frame, you can see that it cannot detect the "red square". The only main difference between these two pictures is that I bent down the lid of the laptop (please look closely if you cannot notice it). I suppose this happens because of the illumination, which causes the thresholding to give undesired results.
The only way I can think of is doing two separate processing passes on the image: first, do thresholding based on the colour as above; then, if I find the object, move to the next frame; if not, use OpenCV's find-squares method.
However, this method would involve too much processing of the image.
|
What action cost should be used to get a smooth path? For example, we use distance traversed to get the shortest path. Will the cost to get a smooth path be something related to the rate of change of the slope of the path?
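One concrete reading of "rate of change of slope" would be a discrete second-difference (curvature-like) penalty added to the usual length term, for example:
$$
J = \sum_i \|p_{i+1} - p_i\| + \lambda \sum_i \|p_{i+1} - 2p_i + p_{i-1}\|^2
$$
(just an illustration of the idea, not a claim about any particular planner).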
|
I'm a CS student and I need to give a 30-minute lecture about 1-2 papers describing 1-2 algorithms for any of the main problems in Robotics (navigation, coverage, patrolling, etc.).
I have no background in Robotics specifically, but I did take classes such as Algorithms and AI (including some basic AI algorithms such as A*, IDS, UCS, and subjects such as decision trees, game theory, etc.).
The difference from simply describing one of the above is that I need the paper to refer to actual physical robots and their algorithms, with real problems in the physical world, as opposed to AI "agents" with more theoretical algorithms.
I am required to lecture on 1-2 academic papers, published from 2012 onward, with a "respectable" amount of citations. Any suggestions of such papers would be greatly appreciated!
|
Let's say I have a 6-DOF flying camera and I want to make it move through a circular tube autonomously, and let's suppose that the camera and the system that makes it fly are considered to be just a point in space. Which feature of the image I get from the camera can I use to move the camera appropriately, that is, to get in one end of the tube and out the other?
For example, I thought I could use edge detection. As the camera moves forward through the tube, due to the fact that its far plane is not infinitely away, there is a dark circle forming where the camera sees nothing surrounded by the walls of the tube. I think that "preserving" this circle might be the way to go (for example if it becomes an ellipse I have to move the camera accordingly for it to become a circle again), but what are the features that will help me "preserve" the circle?
I would like to use image-based visual servoing to do that. However, what troubles me is the following. In most visual-servoing applications I have seen, the control objective is to make some features "look" a certain way from the camera's point of view. For example, we have the projections of 4 points and we want the camera to move so that the projections' coordinates take some specific values. But the features correspond to the same physical points throughout.
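For reference, in the standard image-based visual servoing notation, with feature vector $s$, desired value $s^*$, error $e = s - s^*$ and interaction matrix $L_s$, the usual control law is:
$$
\dot{e} = L_s v_c, \qquad v_c = -\lambda\,L_s^{+}\,e
$$
where $v_c$ is the commanded camera velocity (twist).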
In my case I thought that, for example, I could say I want the projections of the 4 "edge points" of the circle/ellipse to take specific values so that they define a circle centered in the camera's field of view. But if the camera moves to achieve this configuration of features, then the 4 new "edge points" will correspond to the projections of 4 different real points of the pipe, and the theory collapses. Am I right to think that? Is there any way to get past it?
Any other ideas or relevant literature?
|
For my robotics project I would like to utilise readily available mobile phone 'power banks' to simplify the power system for my robot. However, such power banks output 5V, great for the logic systems but not for the motors.
I was wondering if I could wire the outputs of two power banks in series and get 10V, or is this a very bad idea? Should I wire them in parallel and use a boost converter? Is a custom solution using 'ordinary' Li-Po batteries and an associated charging circuit the best answer?
Additional Information:
This will be a two wheeled robot.
5V Logic
7+V Motor driver
Power Banks: 5V 2.1Amp 2100mAh
|
I have built an R/C Lawnmower. I call it the Honey Badger, because it tears stuff up (that's a good thing). Well, I used used batteries to get the project going and now it's long past time to get the Honey Badger going again.
The Honey Badger is built on an electric wheelchair frame, and originally used wheelchair deep-cycle batteries, U1 if I recall. There are 4 of them, wired as 2 series banks in parallel to give 24V for the 24V motors.
Going down to the used wheelchair parts place is about an hour drive and requires a weekend visit and will get me used batteries of unknown condition.
Contrast that with Harbor Freight, which is 20 minutes away and has solar batteries of the same physical dimensions and comparable (?) electrical characteristics. I think with coupons, tax, and after playing the game, I can get a battery for ~\$50, about the same price as a used U1.
I found that Amazon also has U1 batteries, and they can be had for ~\$120 for 2 with shipping.
Batteries Plus will sell me some deep-cycle auto batteries of greater Ah capacity for ~\$100 each.
The gross cost for each solution winds up being around the same: ~\$240 to ~\$300.
Is there a difference in technology between a "solar battery" and a "wheelchair battery"? Is that difference substantial? Given that I'm pretty rough with this thing, is any particular technology any better suited to these tasks? Is there a benefit or drawback to using an automotive battery?
I have the charger from the original wheelchair and if I recall, it's good for the capacity and has room to spare. I think it can put out 5 amps.
|
I'm seeing a behavior in my RoboClaw 2x7 that I can't explain. I've been trying to manually tune the velocity PID settings (I don't have a Windows box, so I can't use IonMC's tuning tool) by using command 28 to set the velocity PID gains, then command 55 to verify that they're set correctly, then 35 to spin the wheel at half of its maximum speed. The problem is that no combination of PID gains seems to make any difference at all. I've set it to 0,0,0 and the motor still spins at roughly the setpoint.
I must be doing something wrong, but I'm poring over the datasheet and I just don't see what it is. By all rights the motor shouldn't spin when I use 0,0,0! Any ideas?
|
I am trying to use quaternions for robotics, and there is one thing I don't understand about them, most likely because I don't understand how to define position with quaternions and how to define rotation with quaternions, if there is any difference.
Please follow my "understanding steps" and correct me if I am wrong somewhere.
Let's assume we have 2 vehicle positions described by 2 rotation quaternions:
$$
q_1 = w_1 + x_1i + y_1j +z_1k = \cos(\pi/4) + \sin(\pi/4)i
$$
This quaternion is normalized and represents rotation over the $x$ axis for $\pi/2$ angle as I understand it.
$$
q_2 = w_2 + x_2i + y_2j + z_2k = \cos(\pi/4) + \sin(\pi/4)j
$$
And this one represents rotation for the same angle $\pi/2$ over the $y$ axis.
$q_1*q_2 = q_3$ which would be the same rotation as if we made $q_1$ first and $q_2$ second.
$$q_3 = \frac{1}{2} + \frac{i}{2} +\frac{j}{2} +\frac{k}{2}$$
QUESTION 1: Given $q_2$ and $q_3$ how can I find $q_1$?
QUESTION 2: How do I find the rotation angle about some other vector, given a rotation quaternion? For example, I want to find through what angle $q_3$ rotates about the vector $2i+j-k$.
|
I am new to working with robotic arms, and I am having trouble finding the correct servo for the base of the arm.
It is a 2-link robot; each link weighs 1.2 kg and is 40 cm long. I have a gripper of 10 centimeters. The servo in the gripper can hold a maximum of 4 kg. The whole robotic arm, including the maximum load it will carry, the servos, and other accessories, is 8.3 kg. The maximum load it needs to carry is 4 kg at the end of the arm, at 90 cm.
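A rough static torque budget for the lifting joint at the base, using the numbers above, with link centres of mass at mid-length and ignoring the servo masses themselves (so the real requirement is higher), is:
$$
\tau \approx 1.2\,g\,(0.2) + 1.2\,g\,(0.6) + 4\,g\,(0.9) \approx 2.4 + 7.1 + 35.3 \approx 45\ \text{N·m}
$$
with $g = 9.81\ \mathrm{m/s^2}$. The rotary base turns about a vertical axis, so it mostly fights inertia and friction rather than gravity; the lift servo is the one that has to be sized against a figure like this.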
What servo could I use to rotate the base, and what servo could I use to lift the arm at the base? The latter one also has to move the link, so it would be preferable to have a 2-axis servo.
The only specification I need right now is which servo to use. My power supply is two 12-volt DC batteries connected in series, rated at 18 Ah. I need the servo to be DC. The other things can be designed around the servo that can best do the work.
|
I was reading these papers on visual inertial odometry from IROS 15:
Semi-Direct EKF-based Monocular Visual-Inertial Odometry
Robust Visual Inertial Odometry Using a Direct EKF-Based Approach
I would appreciate it if someone could explain how exactly semi-direct and direct methods differ. As far as I understand, direct methods use pixel intensities in their framework. However, both of the papers listed above use photometric/pixel intensity values, and yet one is semi-direct and the other is direct.
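My understanding is that "direct" means the pose is obtained by minimising a photometric error of roughly this form (standard direct-alignment notation, written from memory):
$$
T^{*} = \arg\min_T \sum_i \left\| I_k\big(\pi(T\,\pi^{-1}(u_i, d_i))\big) - I_{k-1}(u_i) \right\|^2
$$
rather than a reprojection error over matched keypoints.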
|
Good day. I would like to ask how it is possible to use an ultrasonic sensor for altitude hold on a quadcopter, given that the sampling rate of the ultrasonic sensor (HC-SR04) is only 20 Hz before incurring errors through polling, as I found when I tested it. I have seen this sensor used in other projects; however, I could not find any papers that explain its use in better detail. I have seen two possible solutions on the Raspberry Pi: one using interrupts and the other using Linux multithreading.
If my understanding is right, to use interrupts I need some sort of data-ready signal from the ultrasonic sensor; however, this is not available on this particular sensor. Is it possible to use the echo pin as the rising-edge trigger for the interrupt service routine (the read-sonar/calculate-distance function)? But would this not introduce inconsistent loop execution times, which is bad for a consistent PID loop?
Another approach is to use multithreading via the wiringPi library, which enables me to run a function, say one that triggers the sonar and calculates the distance, alongside the PID control loop. How would this affect the PID control loop rate?
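A sketch of the threaded variant (using plain std::thread/std::atomic rather than the wiringPi helpers, and a made-up readSonarOnce() stub): the PID loop never blocks on the ping, it just consumes the latest sample.

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<float> latestAltitude(0.0f);   // metres, written by the sonar thread
std::atomic<bool>  running(true);

// Stub for the trigger/echo measurement; the real version toggles the trigger pin,
// times the echo pulse and converts it to metres. A ping can block for tens of ms,
// which is exactly why it is kept out of the control loop.
float readSonarOnce() { return 1.0f; }

void sonarThread() {
    while (running) {
        latestAltitude = readSonarOnce();
        std::this_thread::sleep_for(std::chrono::milliseconds(50));   // ~20 Hz
    }
}

int main() {
    std::thread sonar(sonarThread);

    // 200 Hz control loop: it never waits on the sensor, it just uses the most
    // recent altitude sample, so the PID loop period stays constant.
    while (running) {
        float alt = latestAltitude;
        // ... altitude PID using 'alt', attitude PIDs, motor mixing ...
        (void)alt;
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }

    sonar.join();
    return 0;
}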
Which is the best way to implement sonar sensor based altitude hold?
|
I am working on a project with the Create 2. Just recently I have run into a problem with the battery state. The Create 2 has been charging all night so its clean light shows green. However, when I unplug it and press the clean button, it shows red and will not consistently run commands from my Arduino that I have hooked up to it.
What could be the problem?
|
I have a chance to develop a user interface program that lets the user control a KUKA robot from a computer. I know how to program stuff with the KUKA utilities, like OrangeEdit, but I don't know how to do what I want to do. I don't even know what the "best" language is to talk to the robot.
My idea is to control the robot with the arrow buttons, so that up/down controls the Z axis and left/right controls the X/Y axes.
Can someone help me here? I know there are a lot of libraries to control the robot, even with an Xbox controller, but if I limit the robot to 3 axes I might be able to control it with simple buttons.
Edit: Now imagine that I have a routine that consists of going from P1 to P2 and then to P3. I know I can "touch up" the points to refresh their coordinates using the console, but can I do it from a .NET application, for example by modifying the src/dat files?
|
I am trying to understand the effect of drift in Simultaneous Localization and Mapping (SLAM). My understanding is that drift occurs because the robot tracks its position relative to a set of landmarks it is storing, but each landmark has a small error in its location. Therefore, an accumulation of these small errors over a long trajectory causes a large error by the end of the trajectory.
However, what I am confused about is what would happen when the robot tracks its way back to its starting positions. Suppose the robot starts in position A, and then starts to move along a path, mapping the environment as it does so, until it reaches position B. Now, the robot will have some error in its stored position of B, due to the drift during tracking. But then suppose the robot makes its way back to A, by tracking relative to all the landmarks it created during the first path. When it reaches A, will it be back at the true position of A, i.e. where it started the first path? Or will it have drifted away from A?
My intuition is that it will end up at the true position of A, because even though the landmarks have errors in them, as long as the error is not too large then the robot will eventually get back to the position where it stored the landmarks for A. And once it is there, those landmarks are definitely correct, without error, because they were initialized before any drift errors had started to accumulate.
Any help? Thanks!
|
With the introduction of incremental sampling algorithms like PRM and RRT, planning in higher-dimensional spaces in reasonable computation time has become possible, even though the problem is PSPACE-hard. But why is the quadrotor motion planning problem still difficult, even with a simplified quadrotor model?
I was solving a dynamic car problem with OMPL, which produced a solution within 10 s; for the quadrotor I set a planning time of 100 s, but it still does not find a solution.
|
I am reading this paper on visual odometry, where they use a bearing vector to parameterize the features. I am having a hard time understanding what the state propagation equation for the bearing-vector term means:
The vector N is not explained alongside the equations, so it's not very clear what it does. I would really appreciate it if someone could help me understand it :)
|