I am new to robotics; however, I am designing a PID controller to control the angular velocity (azimuth, elevation) of a couple of Faulhaber motors.
The input to the PID controller is the actual angular velocity, which is not directly observed: it is derived from the position of each motor at time $t$.
The PID sample period is approximately 30 ms, whereas the input data rate from the joystick is approximately 300 samples/s, corresponding to a sample period of 3.33 ms. The joystick input gets transformed into the desired angular speed that the PID will control.
I was initially filtering the position data using a normal 2D linear Kalman filter, but the angular velocity is not a linear function of the measurements (by formula), hence I switched to extended Kalman filtering.
My questions are the following:
Is this latter approach, using an EKF, correct?
Which parameters do I have to check in order to properly set the update rate of the PID loop?
Thanks in advance!
|
I am trying to exert a desired load of 0.07 N·m on a BLDC motor shaft whose length is 0.750 in and diameter is 0.3125 in (0.008 m). I can go to a machine shop and get a small adjustable cylindrical coupling made for my shaft. But I need it to exert close to the desired torque at a speed of 2100 rpm (220 rad/s). I tried doing some calculations, according to the formula
Torque = speed * mass * (radius)^2
If I solve this equation with T = 0.07 N·m, speed = 220 rad/s, and radius = 0.004 m, I get around 20 kg for the mass, which is huge! It is more than the mass of the motor. Can you please suggest a convenient way to load the motor? Thank you.
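For what it's worth, here is the arithmetic written out, using the formula exactly as given above:
$$m = \frac{T}{\omega\, r^{2}} = \frac{0.07}{220 \times (0.004)^{2}} = \frac{0.07}{3.52\times10^{-3}} \approx 19.9\ \text{kg}$$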
|
Is it possible to transmit live audio/video feed and at the same time, receive commands through UART using only 1 RF transceiver connected to the Arduino board?
I want to control the Arduino through serial communication (UART) which can be accomplished by using RF connection to control it from a remote. I also want to transmit live audio and video feed from the Arduino using the same RF transceiver. Is this possible?
I found AVCTP, but I'm not sure whether it enables serial communication. Also, I would rather not use Bluetooth, for several reasons.
Thanks in advance!
|
I have an STM32F3 Discovery board. I want to take the next step and try to use the timers in a few configurations.
How can I calculate the timer variables (such as prescaler and period)? I looked in all the datasheets and manuals and didn't find anything that describes how to choose these values for the various modes (input capture, OP, PWM, etc.).
I think the prescaler divides the clock frequency by a factor in the range 1-65536.
So if I have fcpu = 72 MHz and want to generate a signal of frequency 40 kHz, am I supposed to compute 72 MHz / 40 kHz = 1800?
And should I then subtract 1 from this value before writing it to the prescaler register?
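For reference, my understanding of how the output frequency is derived (PSC and ARR are the prescaler and auto-reload registers, and both hold their value minus 1, which is where the subtraction comes from):
$$f_{\text{out}} = \frac{f_{\text{cpu}}}{(\mathrm{PSC}+1)(\mathrm{ARR}+1)}, \qquad (\mathrm{PSC}+1)(\mathrm{ARR}+1) = \frac{72\ \text{MHz}}{40\ \text{kHz}} = 1800$$
so one possible choice is $\mathrm{PSC} = 17$ and $\mathrm{ARR} = 99$ (since $18 \times 100 = 1800$).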
|
I am trying to use C++ to talk to the Create 2 robot. Does anyone have basic code to write to and read from the Create 2 using C++ or C?
I am having trouble converting Create 2 commands (like 145) into a single char.
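Here is a minimal sketch of what I mean (the opcode and payload bytes follow the Open Interface spec for Drive Direct; writing to stdout stands in for the serial port, which would be opened separately):
#include <cstdint>
#include <cstdio>
int main() {
    // A command like 145 (Drive Direct) is one raw unsigned byte on the
    // wire, not the three ASCII characters '1', '4', '5'.
    uint8_t cmd[] = { 145,          // Drive Direct opcode
                      0x00, 0xC8,   // right wheel velocity: 200 mm/s
                      0x00, 0xC8 }; // left wheel velocity: 200 mm/s
    // Write the raw bytes; in the real program this would go to the
    // opened serial port instead of stdout.
    fwrite(cmd, 1, sizeof(cmd), stdout);
    return 0;
}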
|
Do structured light camera sensors like the structure.io, Intel RealSense or Microsoft Kinect work outdoors?
I read that these sensors won't work outdoors because of ambient IR light. Can someone back this up with proper references/tests? I mean, what degree of IR illumination is needed for the sensor to stop working, etc.?
There are videos on YouTube that show Microsoft Kinect working outdoors:
Prairie Dog II: UGV Kinect Sensor Outside - limited outdoors range
Outdoor Kinect Data Collection - heavy interference with direct sunlight
However, the (not yet released) new Intel RealSense R200 specification says "range up to 3-4 meters indoors, longer range outdoors", while the older F200 says "0.2 meters - 1.2 meters, indoors only". I am really interested in seeing whether the R200 will really work outdoors.
|
I'm trying to use a dual quaternion Hand Eye Calibration Algorithm Header and Implementation, and I'm getting values that are way off. I'm using a robot arm and an optical tracker, aka camera, plus a fiducial attached to the end effector. In my case the camera is not on the hand, but instead sitting off to the side looking at the arm.
The transforms I have are:
Robot Base -> End Effector
Optical Tracker Base -> Fiducial
The transform I need is:
Fiducial -> End Effector
I'm moving the arm to a series of 36 points on a path (blue line), and near each point I'm taking a position (xyz) and orientation (angle axis with theta magnitude) of Camera->Fiducial and Base->EndEffector, and putting them in the vectors required by the HandEyeCalibration Algorithm. I also make sure to vary the orientation by about +-30 degrees or so in roll pitch yaw.
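For reference, my understanding of the formulation such hand-eye algorithms solve (the exact frame conventions may differ in this library) is
$$A_{i}\,X = X\,B_{i}$$
where $A_{i}$ are the relative end-effector motions between poses, built from Robot Base -> End Effector, $B_{i}$ are the corresponding relative fiducial motions, built from Optical Tracker Base -> Fiducial, and $X$ is the constant unknown hand-fiducial transform (the Fiducial -> End Effector transform I am after, up to the library's choice of direction).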
I then run estimateHandEyeScrew, and I get the following results, and as you can see the position is off by an order of magnitude.
[-0.0583, 0.0387, -0.0373] Real
[-0.185, -0.404, -0.59] Estimated with HandEyeCalib
Here is the full transforms and debug output:
# INFO: Before refinement: H_12 =
-0.443021 -0.223478 -0.86821 0.321341
0.856051 -0.393099 -0.335633 0.470857
-0.266286 -0.891925 0.36546 2.07762
0 0 0 1
Ceres Solver Report: Iterations: 140, Initial cost: 2.128370e+03, Final cost: 6.715033e+00, Termination: FUNCTION_TOLERANCE.
# INFO: After refinement: H_12 =
0.896005 0.154992 -0.416117 -0.185496
-0.436281 0.13281 -0.889955 -0.404254
-0.0826716 0.978948 0.186618 -0.590227
0 0 0 1
expected RobotTipToFiducial (simulation only): 0.168 -0.861 0.481 -0.0583
expected RobotTipToFiducial (simulation only): 0.461 -0.362 -0.81 0.0387
expected RobotTipToFiducial (simulation only): 0.871 0.358 0.336 -0.0373
expected RobotTipToFiducial (simulation only): 0 0 0 1
estimated RobotTipToFiducial: 0.896 0.155 -0.416 -0.185
estimated RobotTipToFiducial: -0.436 0.133 -0.89 -0.404
estimated RobotTipToFiducial: -0.0827 0.979 0.187 -0.59
estimated RobotTipToFiducial: 0 0 0 1
Am I perhaps using it in the wrong way? Is there any advice you can give?
|
I'm wondering: PID control is a linear control technique and the robot manipulator is a nonlinear system, so how is it possible to apply PID control in this case? I found a paper named "PID control dynamics of a robotic arm manipulator with two degrees of freedom" on SlideShare. Is this how we use PID control for a robotic arm? Is there a name for this approach? And how can one resolve the apparent contradiction that PID is a linear control technique while the robot is a nonlinear system? Any suggestions?
|
I am working on a project that involves speed regulation of a BLDC motor under no-load and load conditions. I wish to use another machine operated as generator, acting as load on the motor, as shown in this video.
The coupling used in this motor/generator arrangement looks handmade out of a rubber tube or something. I am considering using it as an alternative to a flexible coupling: purchasing an actual flexible coupling is not an option for me, and I need the coupling urgently.
My question is: can this arrangement (or something similar) be used to couple a 15 W motor to a machine of similar rating, if the rated torque does not exceed 0.1 N·m?
|
I am very new to robotics, but I will be writing the algorithms for my robot to move around, gather information from its surroundings, and process it. It will also process audio-visual signals. I am unsure which microcontroller to use so that it is performance-efficient and consumes little power.
The controller should also be capable of communicating with a wireless network (internet through Wi-Fi) and should support memory integration.
I know how to program in Java and C; please suggest which would be the best language to use for programming.
Thanks.
P.S. I would really like to use a microprocessor, as it is highly customizable. Please suggest the best one to use.
|
I have recently been asked to review a Raspberry Pi HAT (from a programming view) that allows PWM control of up to 16 servos. However, I am hoping to use this time to work on a hexapod idea I have been thinking about for a while, which requires a minimum of 18 servos, and preferably 20 (camera/sensor pan and tilt).
My question is:
What is a relatively cheap and uncomplicated way of extending my control to those extra 4 servos?
It would appear most servo controller HATs/shields for Arduino and Raspberry Pi handle up to 16 servos and can be extended by buying another board. Are there any other options?
Any advice on this subject would be greatly appreciated, preferably dumbed down a bit, as I don't know a great deal about microcontroller hardware (more of a software guy).
|
How can I test whether my Gazebo installation works properly? I'm trying the "save my world" and "save as" options, but no window is shown.
|
Do we have to use ROS for robotics research/applications? What is the main advantage? When, or in which situations, is ROS mandatory?
|
Let's say my redundant robot is at an operational position $x$. Is the set of all possible joint configurations "continuous", which would mean that it is possible to explore all the possible configurations without moving the end effector? Is there a way to show that this is true or false? I am using a KUKA LBR robot with 7 DOF, so maybe there is a specific answer for this one.
I have been searching and did not find any result, but I will gladly accept any link or answer that you may have.
|
I'm looking for a way to operate a small servo using a 4-20 mA linear analog signal generated by a PLC in an industrial setting.
The purpose of this is to allow for automation for a task that currently is done by manually turning and adjusting a potentiometer with removable dial. Basically, I'm trying to ghetto together an oldschool Motor Operated Potentiometer (MOP) so it can be removed quickly and easily without affecting the operation of the original process.
I've spent hours looking for servo controllers/encoders that are capable of this, but I haven't been able to find any. Any way I could get pointed in the right direction would be fantastic. Surely such a thing must exist!
Thanks so much!
|
I have a 2-link, 2-degree-of-freedom robotic arm that only measures linear acceleration at each link (through an accelerometer) and rotational velocity at each joint (through a gyroscope).
I know that, using the Jacobian matrix, I can compute link velocities and accelerations from the joint angles and their rates, and through the inverse of the matrix I can compute joint velocities from the joint angles and link velocities.
However, I am not sure whether I can compute the joint angles using only the linear and rotational accelerations of the links. I am aware that the joint angles could be estimated by integrating the joint velocities (and applying some sort of filter), but is there an algebraic way they can be computed? It doesn't seem likely to me.
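For reference, the standard differential-kinematics relations I am referring to are
$$\dot{x} = J(q)\,\dot{q}, \qquad \ddot{x} = J(q)\,\ddot{q} + \dot{J}(q)\,\dot{q}$$
and both involve the joint angles $q$ themselves through $J(q)$, which is what makes recovering $q$ purely algebraically from the accelerations look unlikely to me.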
|
I noticed that the Create 2 does not always provide sensor data while it's moving. Am I supposed to stop the robot, request sensor data, then start it again? Or am I missing something? It seems to work most of the time, but once in a while I get no data back. I am trying to make it move from one point to another by starting it and then reading the distance every 0.1 seconds to see how far it has traveled, but sometimes I just keep getting no data.
I noticed this using Python as well as C code.
I am using the USB port at the recommended bit rate (115200).
|
I'm searching for a cheap (under $100) and efficient 3D sensor that detects obstacles and moving objects, for robot applications like quadrotor navigation, swarm robotics, etc. Can you suggest a sensor that is either a commercial product or a "do it yourself" project?
|
I'm a complete beginner in robotics with a background in programming.
I started thinking about a robot project yesterday and want to order some motors to test with. I saw the specs for the motors (torque, etc.), and I think I remember enough physics from high school to do simple calculations. For example, if the motor rotates an arm, then given the torque and the length of the arm, how much could it lift? Also, if it doesn't lift straight up but at an angle, I could with a bit of thinking tweak the calculations. If there were several joints attached to each other, the calculations would be more complex, but I could then probably write a program, in node.js for example, to experiment with different values.
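For example, the static version of that calculation, as I understand it:
$$F = \frac{\tau}{L}, \qquad \tau_{\text{required}} = m\,g\,L\cos\theta$$
where $L$ is the arm length and $\theta$ is the arm's angle above horizontal, so the worst case is the horizontal arm ($\theta = 0$).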
However, I would assume that these kinds of calculations are very common when designing robots, so I would assume programs for them already exist. I don't know exactly what I mean by "these types of calculations", because I don't yet know what I don't know, so I would like to ask: which programs do you use for making calculations when you design your robots?
Preferably they should be open source.
|
I am planning a tank-like robot for hobby purposes. I have a control engineering background; however, I have never applied it to robotics.
I would like to test different control approaches, namely MPC. I have seen a lot of publications regarding the kinematics and inverse kinematics of such a robot; however, I am wondering if somebody can point me to material on the dynamics modelling of such a system, taking into account the forces, masses, etc.
|
Given my control system (shown in the figure, omitted here),
I have found the region of the complex plane that satisfies my specifications, placing the poles at $0.5 \pm 0.2i$.
Now I want to find the gains that achieve the desired poles (with MATLAB), but I have not understood well how to do it: can anyone suggest an example of how to do that, with or without MATLAB?
Thanks
Edit: in the first image the sum blocks are +-, not ++
|
I teach FTC robotics to high school students, and while I'm a proficient programmer and can teach them coding fairly well, my mechanical skills are a bit soft. I'm looking for good sources for myself and the students to go through that gets a little more in depth than "this is a gear, this is a chain, this is gear ratio, etc.," but maybe not quite the level of building professional / industrial robots.
I've used the Vex Robotics Curriculum as a starting reference - http://curriculum.vexrobotics.com/curriculum - but it doesn't go through some more advanced topics (for example, how to drive a single gear / drive shaft with multiple motors to achieve more power without having to gear down and lose speed.)
Are there any good intermediate sources like this? Or do I need to just bite the bullet and get a college-level mechanics text?
|
I am trying to build a quadcopter from scratch. I have selected few parts but I have no idea whether the quadcopter will come together and fly.
I would appreciate your feedback on whether the parts I have selected are compatible (UBEC, Motor). If not, I would appreciate suggestions.
The frame for my quadcopter is in the X configuration and I am making my own. I am expecting the average weight of the quad to be around 800g. I hope the motors and prop combination can hover it well.
|
I saw a video of a robot used in special education with children on the autism spectrum (https://www.youtube.com/watch?v=FQcjfebQXgQ). My son isn't autistic; he has Tourette Syndrome, ADHD, executive function problems, and OCD. A robot could be quite helpful for him. Where can I buy one? I don't need it to look like a human being. It just needs to be interactive and reasonably cute. As my son is getting ready for bed, he needs someone to talk him through his steps, give him positive feedback, and ask questions like "Okay, you're in your pajamas. Great! What else do you need to do to get ready for bed?" And the robot would have a mental list (preprogrammed) of everything that's needed (brush teeth, wash face, put on eczema ointment, put dirty clothes in hamper). My son is 12 and would like to get ready by himself -- without Mama or Papa -- but he gets sidetracked when he's in his room on his own. The robot doesn't need to be able to "see" him brushing his teeth. It just needs to be able to hear my son saying, "I brushed my teeth." Because when the two of them together decide he has made it through his routine, they can call me in, and I'll check, and then we'll do our bedtime reading.
That's an example of what I have in mind. There are other situations where I could imagine a robot being helpful for him.
|
I have a stepper motor which has an internal controller.
I would like to identify a model of both together, but I don't know how I should approach the problem.
The system receives an input velocity and position, and moves toward that position using that velocity. The input could also be just a velocity.
The plant is a pan-and-tilt unit, which has 2 stepper motors. I tried with ident but only got a fit of 5%. My input was a noisy signal, and the output was the position it reports.
|
Over the past few weeks, I have been attempting to interface the iRobot Create 2 with an Arduino Uno. As of yet, I have been unable to read sensor values back to the Arduino. I will describe my hardware setup and my Arduino code, then ask several questions; hopefully, answers to these questions will be helpful for future work with the Create 2.
Hardware:
The iRobot Create 2 is connected to the Arduino Uno according to the suggestions given by iRobot. Instead of the diodes, a DC buck converter is used, and the transistor is omitted because a software serial port is used in place of the UART port.
Software:
The following is the code that I am implementing on the Arduino. The overall function is to stop spinning the robot once the angle of the robot exceeds some threshold. A software serial port is used, which runs at the default Create 2 Baud rate.
#include <SoftwareSerial.h>
int rxPin = 3;
int txPin = 4;
int ddPin = 5; // device detect
int ledPin = 13; // status LED (assumed pin; the declaration was missing from the sketch as posted)
int sensorbytes[2]; // array to store encoder counts
int angle;
const float pi = 3.1415926;
#define left_encoder (sensorbytes[0])
#define right_encoder (sensorbytes[1])
SoftwareSerial Roomba(rxPin, txPin);
void setup() {
pinMode(rxPin, INPUT);
pinMode(txPin, OUTPUT);
pinMode(ddPin, OUTPUT);
pinMode(ledPin, OUTPUT);
Roomba.begin(19200);
// wake up the robot
digitalWrite(ddPin, HIGH);
delay(100);
digitalWrite(ddPin, LOW);
delay(500);
digitalWrite(ddPin, HIGH);
delay(2000);
Roomba.write(byte(128)); //Start
Roomba.write(byte(131)); //Safe mode
updateSensors();
// Spin slowly
Roomba.write(byte(145));
Roomba.write(byte(0x00));
Roomba.write(byte(0x0B));
Roomba.write(byte(0xFF));
Roomba.write(byte(0xF5));
}
void loop() {
updateSensors();
// stop if angle is greater than 360 degrees
if(abs(angle)>2*pi){
Roomba.write(173);
delay(100);
}
}
void updateSensors() {
// call for the left and right encoder counts
Roomba.write(byte(148));
Roomba.write(byte(2));
Roomba.write(byte(43));
Roomba.write(byte(44));
delay(100);
// load encoder counts into an array
int i = 0;
while(Roomba.available()) {
int c = Roomba.read();
sensorbytes[i] = c;
i++;
}
angle=((right_encoder*72*pi/508.8)-(left_encoder*72*pi/508.8))/235;
}
Questions:
Am I loading the sensor values into the array correctly? This same code works when a bump-and-run program is implemented, but that requires knowing only one bit rather than two bytes.
How many bytes can be read over the serial connection at a time? A previous post (Help sending serial command to Roomba) highlights that one byte can be sent at a time. Does this imply that the reverse is true? If so, would a solution be to use a char array to read the values instead and then append two chars to form a signed int? (See the sketch after these questions.)
Is serial communication synchronization a problem? I am assuming that it is not, but is it possible for the bytes to be split on nibble boundaries? This would present a problem because there is no nibble datatype.
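Regarding the second question, this is a minimal sketch of the byte-combining idea (it assumes the high byte of each encoder packet arrives first, as described in the Open Interface spec):
int high = Roomba.read(); // first byte of packet 43
int low = Roomba.read();  // second byte of packet 43
int16_t leftEncoderCount = (int16_t)((high << 8) | (low & 0xFF)); // signed 16-bit count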
|
I want to move a robot a certain distance, say 1 meter.
What are the different ways I can implement this?
For example, I can measure the circumference of the wheel and command a corresponding duration of rotation. What other techniques can achieve this?
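For reference, the wheel-circumference version of that calculation (with a hypothetical 10 cm diameter wheel):
$$n_{\text{rev}} = \frac{d_{\text{target}}}{\pi D} = \frac{1\ \text{m}}{\pi \times 0.1\ \text{m}} \approx 3.18\ \text{revolutions}$$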
|
I was having problems reading sensor information from my iRobot Create 2 and sent an email asking for help from the iRobot staff. They were super helpful and gave me an answer (the next day!) that helped push my project along. I was requesting data from the Create 2 and printing it to the screen so I could figure out how to write code that would read the data. I started with this section of code, which was not working for me (I trimmed off some of the code that controlled other functions):
from Tkinter import *
from subprocess import call
import datetime
import serial
import ttk
import struct
import thread
port = '/dev/ttyUSB0' #sets the com port for Atlas
baud = 115200 #sets the baud rate
connection = serial.Serial(port, baud) #starts the serial communication
#program to read communication from create2
def program2(threadName):
    while True:
        x = connection.read()
        print x
#program to write to create2
def program1(threadName):
    atlas = Tk() #starts a new GUI for atlas control
    atlas.geometry('1000x500') #sets the size of the control window
    atlas.title('Atlas Control Panal') #sets the name of the control window
    def sendCommandASCII(command): #used to send a command to the create2
        cmd = ""
        for v in command.split():
            cmd += chr(int(v))
        sendCommandRaw(cmd)
    def sendCommandRaw(command): #used to send a command to the create2
        global connection
        try:
            if connection is not None:
                connection.write(command)
            else:
                tkMessageBox.showerror('Not connected!', 'Not connected to a robot!')
                print "Not connected."
        except serial.SerialException:
            print "Lost connection"
            tkMessageBox.showinfo('Uh-oh', "Lost connection to the robot!")
            connection = None
    def test(): #sets a test command up to check connection
        global buttonpress
        buttonpress = 'test'
        sendCommandASCII('142 7')
    #makes a button on the GUI that starts the test command
    button1 = Button(atlas, text='test mode', command=test)
    button1.place(x=600, y=400)
    button1.config(width=10, height=5)
    atlas.mainloop() #runs the GUI as a loop to keep the window open.
#runs the read and the write program at the same time
try:
    thread.start_new_thread(program1, ("program1",))
    thread.start_new_thread(program2, ("program2",))
except Exception, e:
    print str(e)
They told me that the code was actually working fine but I was trying to print out the value of the sensor packet without parsing it in any way. They then recommended I change the code in program2 to this:
while True:
    def toHexFromByte(val):
        return hex(ord(val))[2:].rjust(2, '0').upper()
    x = connection.read()
    for b in x:
        print toHexFromByte(b)
This works beautifully and prints to the screen if the bumper is pressed or a wheel drops. My question is how to deal with responses that are longer than one byte (e.g., Packet ID 22, for voltage).
When I try Packet ID 22, it prints a high byte of 3F and a low byte of D7 to the screen. I can manually combine them to get 3FD7 and convert that to a decimal value of 16343 mV (16.343 volts), but I would like my program to print the voltage to the screen for me. Many of the sensors send data in 2 bytes, and I need a way to make sure they are combined automatically.
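For reference, the combination I am doing by hand, written out:
$$V = 256 \times \mathtt{0x3F} + \mathtt{0xD7} = 256 \times 63 + 215 = 16343\ \text{mV} \approx 16.343\ \text{V}$$
i.e., value = high byte × 256 + low byte (for unsigned packets like voltage; signed packets would additionally need a two's-complement interpretation).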
Robb
|
I would like to know the differences between positioning and localization systems. In most review papers they are used interchangeably. Are they the same?
For example:
GPS (Global Positioning System): gives the coordinates of the receiver, and
SLAM (simultaneous localization and mapping): constructs or updates a map of an unknown environment.
Is the difference:
Positioning: only gives information about the receiver's coordinates, with no information about the environment.
Localization: gives information about the receiver's coordinates and also the environment; positioning is a subtopic of localization.
|
I have a requirement for a motor that pulls a piece of rope until the rope is taut. However, I'm at a loss as to how to achieve this. I'm sure it must have been done before, but I'm not sure how best to describe it in a way that would get me more search results. I wondered if there are any sensors or pre-established methods for sensing resistance to motion in electric motors.
|
I am at the moment trying to identify a system using a frequency sweep. I have been using Mathematica and have created a frequency sweep as follows.
g[t_] := 0.799760*Sin[2 Pi (3 t/333.3 + 1) t];
Plot[g[t], {t, 0, 1000000000000000000000000000000}]
g[#] & /@ Range[0, 5, 0.001]
The maximum frequency is 10 Hz, and I sample the data at 1000 Hz. But at what rate should I input the sweep to the system, and at what rate should I read from it?
|
This post follows from an earlier post (iRobot Create 2: Angle Measurement). I have been trying to use the wheel encoders to calculate the angle of the Create 2. I am using an Arduino Uno to interface with the robot.
I use the following code to obtain the encoder values. A serial monitor is used to view the encoder counts.
void updateSensors() {
Roomba.write(byte(149)); // request encoder counts
Roomba.write(byte(2));
Roomba.write(byte(43));
Roomba.write(byte(44));
delay(100); // wait for sensors
int i=0;
while(Roomba.available()) {
sensorbytes[i++] = Roomba.read(); //read values into signed char array
}
//merge upper and lower bytes
right_encoder=(int)(sensorbytes[2] << 8)|(int)(sensorbytes[3]&0xFF);
left_encoder=int((sensorbytes[0] << 8))|(int(sensorbytes[1])&0xFF);
angle=((right_encoder*72*3.14/508.8)-(left_encoder*72*3.14/508.8))/235;
}
The code above prints out the encoder counts; however, when the wheels are spun backwards, the count still increases and never decrements. A tethered connection to the Create 2 using RealTerm exhibits the same behavior; this suggests that the encoders do not keep track of the direction of spin. Is this true?
|
Lately I've been interested in comparing the energy density of model rocket engines to lithium polymer batteries (attached to motors and propellers) for propelling things upwards.
To get a feel for this, I decided to compare an Estes C6-5 motor to a 3DR Iris + quadcopter.
The Estes C6-5 has an initial mass of 25.8 g and produces 10 N·s of total impulse. So, the "impulse density" is about 10 N·s / 25.8 g = 0.38 N·s/g.
The 3DR Iris+ weighs 1282 g without battery. A 3.5 Ah battery weighs 250 g and will power a hover for about 20 minutes (so about a 10.5 A draw). The thrust produced to hover on Earth is 9.8 N/kg × 1.532 kg = 14.7 N. The "impulse density" is 14.7 N × 1200 s / 250 g = 70.6 N·s/g.
So, according to my math here, the LiPo is about 70.6 / 0.38 ≈ 186 times more "impulse dense" than the model rocket engine.
Of course, the model rocket engine will lose 12.48 g of propellant by the end of the flight, so it will effectively be a little lighter, but that's not going to affect things by a factor of 100.
Does this seem right to you? Am I missing anything?
|
The quadrotor system is described by multiple ODEs. The linearized model is usually used, especially for position tracking, so one can determine the desired x-y positions based on the roll and pitch angles. As a result, one nested loop, with inner and outer controllers, is needed for controlling the quadrotor. For the implementation, do I have to put a while-loop inside ode45 for the inner attitude controller? I'm asking this because I've read in a paper that the inner attitude controller must run faster (e.g., 1 kHz) than the position controller (e.g., 100-200 Hz). In my code, both loops run at 1 kHz, so inside ode45 there is no while-loop. Is this correct for position tracking? If not, do I have to insert a while-loop inside ode45 to run the inner loop? Could you please suggest pseudocode for position tracking?
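This is the sort of two-rate structure I have in mind, sketched in C-style pseudocode with placeholder functions (the 10:1 ratio, i.e., a 1 kHz inner loop and a 100 Hz outer loop, is an assumption taken from the paper's numbers):
#include <cstdio>
// Placeholder updates; in the real simulation these would compute U1..U4
// and advance the quadrotor ODEs by one Euler step.
void positionController() { /* update U1 and the desired phi/theta */ }
void attitudeController() { /* update U2, U3, U4 */ }
void integrateDynamics(double dt) { /* one Euler step of the model */ }
int main() {
    const double dtInner = 0.001;  // inner attitude loop: 1 kHz
    const int innerPerOuter = 10;  // outer position loop: 100 Hz
    for (long k = 0; k < 6000; ++k) {
        if (k % innerPerOuter == 0)
            positionController();  // runs every 10th inner step
        attitudeController();      // runs every inner step
        integrateDynamics(dtInner);
    }
    return 0;
}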
To be more thorough, the dynamics equations of the nonlinear model of the quadrotor are provided here; if we assume small angles, the model reduces to the following equations:
$$
\begin{align}
\ddot{x} &= \frac{U_{1}}{m} ( \theta \cos\psi + \phi \sin\psi) \\
\ddot{y} &= \frac{U_{1}}{m} ( \theta \sin\psi - \phi \cos\psi) \\
\ddot{z} &= \frac{U_{1}}{m} - g \\
\ddot{\phi} &= \frac{l}{I_{x}} U_{2} \\
\ddot{\theta} &= \frac{l}{I_{y}} U_{3} \\
\ddot{\psi} &= \frac{1}{I_{z}} U_{4} \\
\end{align}
$$
The aforementioned equations are linear. For position tracking, we need to control $x,y,$ and $z$, therefore we choose the desired roll and pitch (i.e. $\phi^{d} \ \text{and} \ \theta^{d}$)
$$
\begin{align}
\ddot{x}^{d} &= \frac{U_{1}}{m} ( \theta^{d} \cos\psi + \phi^{d} \sin\psi) \\
\ddot{y}^{d} &= \frac{U_{1}}{m} ( \theta^{d} \sin\psi - \phi^{d} \cos\psi) \\
\end{align}
$$
Therefore, the closed form for the desired angles can be obtained as follows
$$
\begin{bmatrix}
\phi_{d} \\
\theta_{d}
\end{bmatrix}
=
\begin{bmatrix}
\sin\psi & \cos\psi \\
-\cos\psi & \sin\psi
\end{bmatrix}^{-1}
\left( \frac{m}{U_{1}}\right)
\begin{bmatrix}
\ddot{x}^{d} \\
\ddot{y}^{d}
\end{bmatrix}
$$
My desired trajectory, the resulting tracking errors, and the actual trajectory vs. the desired one are shown in the figures (images omitted here).
My code for this experiment is
%%
%######################( Position Controller )%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear all;
clc;
dt = 0.001;
t = 0;
% initial values of the system
x = 0;
dx = 0;
y = 0;
dy = 0;
z = 0;
dz = 0;
Phi = 0;
dPhi = 0;
Theta = 0;
dTheta = 0;
Psi = pi/3;
dPsi = 0;
%System Parameters:
m = 0.75; % mass (Kg)
L = 0.25; % arm length (m)
Jx = 0.019688; % inertia seen at the rotation axis. (Kg.m^2)
Jy = 0.019688; % inertia seen at the rotation axis. (Kg.m^2)
Jz = 0.039380; % inertia seen at the rotation axis. (Kg.m^2)
g = 9.81; % acceleration due to gravity m/s^2
errorSumX = 0;
errorSumY = 0;
errorSumZ = 0;
errorSumPhi = 0;
errorSumTheta = 0;
pose = load('xyTrajectory.txt');
DesiredX = pose(:,1);
DesiredY = pose(:,2);
DesiredZ = pose(:,3);
dDesiredX = 0;
dDesiredY = 0;
dDesiredZ = 0;
DesiredXpre = 0;
DesiredYpre = 0;
DesiredZpre = 0;
dDesiredPhi = 0;
dDesiredTheta = 0;
DesiredPhipre = 0;
DesiredThetapre = 0;
for i = 1:6000
% torque input
%&&&&&&&&&&&&( Ux )&&&&&&&&&&&&&&&&&&
Kpx = 50; Kdx = 8; Kix = 0;
Ux = Kpx*( DesiredX(i) - x ) + Kdx*( dDesiredX - dx ) + Kix*errorSumX;
errorSumX = errorSumX + ( DesiredX(i) - x );
dDesiredX = ( DesiredX(i) - DesiredXpre ) / dt;
DesiredXpre = DesiredX(i);
%&&&&&&&&&&&&( Uy )&&&&&&&&&&&&&&&&&&
Kpy = 100; Kdy = 10; Kiy = 0;
Uy = Kpy*( DesiredY(i) - y ) + Kdy*( dDesiredY - dy ) + Kiy*errorSumY;
errorSumY = errorSumY + ( DesiredY(i) - y );
dDesiredY = ( DesiredY(i) - DesiredYpre ) / dt;
DesiredYpre = DesiredY(i);
%&&&&&&&&&&&&( U1 )&&&&&&&&&&&&&&&&&&
Kpz = 100; Kdz = 20; Kiz = 0;
U1 = Kpz*( DesiredZ(i) - z ) + Kdz*( dDesiredZ - dz ) + Kiz*errorSumZ;
errorSumZ = errorSumZ + ( DesiredZ(i) - z );
dDesiredZ = ( DesiredZ(i) - DesiredZpre ) / dt;
DesiredZpre = DesiredZ(i);
%#######################################################################
%#######################################################################
%#######################################################################
% Desired Phi and Theta
R = [ sin(Psi),cos(Psi);
-cos(Psi),sin(Psi)];
DAngles = R\( (m/U1)*[Ux; Uy]);
%Wrap angles
DesiredPhi = wrapToPi( DAngles(1) ) /2;
DesiredTheta = wrapToPi( DAngles(2) );
%&&&&&&&&&&&&( U2 )&&&&&&&&&&&&&&&&&&
KpP = 100; KdP = 10; KiP = 0;
U2 = KpP*( DesiredPhi - Phi ) + KdP*( dDesiredPhi - dPhi ) + KiP*errorSumPhi;
errorSumPhi = errorSumPhi + ( DesiredPhi - Phi );
dDesiredPhi = ( DesiredPhi - DesiredPhipre ) / dt;
DesiredPhipre = DesiredPhi;
%--------------------------------------
%&&&&&&&&&&&&( U3 )&&&&&&&&&&&&&&&&&&
KpT = 100; KdT = 10; KiT = 0;
U3 = KpT*( DesiredTheta - Theta ) + KdT*( dDesiredTheta - dTheta ) + KiT*errorSumTheta;
errorSumTheta = errorSumTheta + ( DesiredTheta - Theta );
dDesiredTheta = ( DesiredTheta - DesiredThetapre ) / dt;
DesiredThetapre = DesiredTheta;
%--------------------------------------
%&&&&&&&&&&&&( U4 )&&&&&&&&&&&&&&&&&&
KpS = 80; KdS = 20.0; KiS = 0.08;
U4 = KpS*( 0 - Psi ) + KdS*( 0 - dPsi );
%###################( ODE Equations of Quadrotor )###################
%===================( X )=====================
ddx = (U1/m)*( Theta*cos(Psi) + Phi*sin(Psi) );
dx = dx + ddx*dt;
x = x + dx*dt;
%===================( Y )=====================
ddy = (U1/m)*( Theta*sin(Psi) - Phi*cos(Psi) );
dy = dy + ddy*dt;
y = y + dy*dt;
%===================( Z )=====================
ddz = (U1/m) - g;
dz = dz + ddz*dt;
z = z + dz*dt;
%===================( Phi )=====================
ddPhi = ( L/Jx )*U2;
dPhi = dPhi + ddPhi*dt;
Phi = Phi + dPhi*dt;
%===================( Theta )=====================
ddTheta = ( L/Jy )*U3;
dTheta = dTheta + ddTheta*dt;
Theta = Theta + dTheta*dt;
%===================( Psi )=====================
ddPsi = (1/Jz)*U4;
dPsi = dPsi + ddPsi*dt;
Psi = Psi + dPsi*dt;
%store the error
ErrorX(i) = ( x - DesiredX(i) );
ErrorY(i) = ( y - DesiredY(i) );
ErrorZ(i) = ( z - DesiredZ(i) );
% ErrorPhi(i) = ( Phi - pi/4 );
% ErrorTheta(i) = ( Theta - pi/4 );
ErrorPsi(i) = ( Psi - 0 );
X(i) = x;
Y(i) = y;
Z(i) = z;
T(i) = t;
% drawnow
% plot3(DesiredX, DesiredY, DesiredZ, 'r')
% hold on
% plot3(X, Y, Z, 'b')
t = t + dt;
end
Figure1 = figure(1);
set(Figure1,'defaulttextinterpreter','latex');
%set(Figure1,'units','normalized','outerposition',[0 0 1 1]);
subplot(2,2,1)
plot(T, ErrorX, 'LineWidth', 2)
title('Error in $x$-axis Position (m)')
xlabel('time (sec)')
ylabel('$x_{d}(t) - x(t)$', 'LineWidth', 2)
subplot(2,2,2)
plot(T, ErrorY, 'LineWidth', 2)
title('Error in $y$-axis Position (m)')
xlabel('time (sec)')
ylabel('$y_{d}(t) - y(t)$', 'LineWidth', 2)
subplot(2,2,3)
plot(T, ErrorZ, 'LineWidth', 2)
title('Error in $z$-axis Position (m)')
xlabel('time (sec)')
ylabel('$z_{d} - z(t)$', 'LineWidth', 2)
subplot(2,2,4)
plot(T, ErrorPsi, 'LineWidth', 2)
title('Error in $\psi$ (m)')
xlabel('time (sec)')
ylabel('$\psi_{d} - \psi(t)$','FontSize',12);
grid on
Figure2 = figure(2);
set(Figure2,'units','normalized','outerposition',[0 0 1 1]);
figure(2)
plot3(X,Y,Z, 'b')
grid on
hold on
plot3(DesiredX, DesiredY, DesiredZ, 'r')
pos = get(Figure2,'Position');
set(Figure2,'PaperPositionMode','Auto','PaperUnits','Inches','PaperSize',[pos(3),pos(4)]);
print(Figure2,'output2','-dpdf','-r0');
legend('actual', 'desired')
The code of the desired trajectory is
clear all; clc;
fileID = fopen('xyTrajectory.txt','w');
angle = -pi; radius = 5; z = 0; t = 0;
for i = 1:6000
if ( z < 2 )
z = z + 0.1;
x = 0;
y = 0;
end
if ( z >= 2 )
angle = angle + 0.1;
angle = wrapToPi(angle);
x = radius * cos(angle);
y = radius * sin(angle);
z = 2;
end
X(i) = x;
Y(i) = y;
Z(i) = z;
fprintf(fileID,'%f \t %f \t %f\n',x, y, z);
end
fclose(fileID);
plot3(X,Y,Z)
grid on
|
How would you guys recommend making a variable ballast system for an underwater robot? I was thinking about this problem earlier and I was trying to figure out if there was a way to make one that didn't require a tank of compressed air.
|
I am looking for a library for disparity map / stereo vision computation. These are my requirements:
C++
Multi-platform (Linux, Windows, OSX)
(preferable but not mandatory) not CUDA-based
Suited for robotics (e.g. it should work even if the images are not perfectly rectified and the cameras are not perfectly calibrated)
Suitable for tracking purposes (20fps or more)
Performing even with low-res images (e.g. 320x240px)
Open Source
|
I have been looking at CCD and CMOS sensors and cameras to decide which one to use for automatic control of a printing process. By now I have a grip on almost all the essential numbers and abbreviations, but one problem remains: shutters.
I understand that there are different types of shutters, both mechanical and electronic, and I understand how they work. My problem concerns shutter speed. If I use a mechanical shutter, the maximum shutter speed depends on that particular element in the assembly, but how does it work for electronic shutters? I have never seen "max shutter speed" in any specs. The only thing I usually see floating around is frames per second, but those usually do not exceed about 120 fps. Depending on how the sensor is built, one could conclude that the maximum shutter speed is therefore 1/120 s, or 1/240 s if it uses half frames.
Can this be right? It seems really slow. I will be faced with the task of recording crisp and clear images of paper moving at about 17 m/s. That is never possible with shutter speeds that slow. Will I be forced to use a mechanical shutter, or am I misunderstanding something?
|
I am investigating a possible business opportunity in which quadcopters perform high-precision nutritional delivery via a burrito medium. I have never used a burrito, but I have read on the internet that they typically weigh 600-700 grams (1). This is much too heavy for commercially available platforms.
How many quadcopters would it take to lift a single burrito?
(1): https://www.facebook.com/chipotle/posts/390817319252
|
iPhone contains
Gyroscope
GPS
Two photo and video cameras
Self-sufficient battery that outlives the motor battery
Wifi
Backup connectivity (cellular, bluetooth)
Programmable computer
Real-time image processing capabilities and face detection
General purpose IO (with something like this)
and old models are available very cheaply.
What is the main benefit of having a separate dedicated flight controller and camera on hobbyist rotorcraft rather than a general purpose device like the iPhone?
|
I think I have just found another bug; there was one mentioned in another post about the angle and distance. This one is about reading the encoder counts. I was using them as a workaround for the other bugs, but in one instance I found that the counts I was reading from the right encoder were incorrect. I was reading in a loop, sleeping for 100 ms, while turning the Create 2. Here is part of the counts, which definitely shows a problem:
32767
-32763
32766
-32768
This kept on going until I stopped. It seems that there is a problem when the count reaches its maximum.
Has anyone else run into this, or can anyone explain it or provide another workaround?
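Here is a sketch of one possible workaround, assuming the counts wrap around as 16-bit signed values (32767 rolls over to -32768): take differences in int16_t arithmetic so the wrap cancels out, and accumulate the deltas instead of using the raw counts.
#include <cstdint>
#include <cstdio>
int main() {
    int16_t previous = 32767;   // reading just before the rollover
    int16_t current  = -32763;  // reading just after the rollover
    // The subtraction wraps modulo 2^16, so the delta comes out right:
    int16_t delta = (int16_t)(current - previous);
    printf("delta = %d counts\n", delta);  // prints 6, despite the wrap
    return 0;
}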
|
I'm a robotics student but very new to this field.
Can you suggest any websites which provide projects/helpful info that I can learn from?
Thanks
|
I'm looking to build a sensor which will detect the level of liquid in a tube.
It does not have to be precisely accurate; it just needs to detect whether the level is approximately above a certain height.
The liquid level can be seen in the red oval
I thought about monitoring this with a webcam and using OpenCV to detect the liquid level, but this seemed a bit overkill, especially if I have to have a dedicated PC to process the images.
Surely there's a simpler solution.
Perhaps a component I can attach to a raspberry PI or arduino board ...
I'm not very familiar with laser sensors so I don't know what is suitable.
As long as it's reliable ...
EDIT
I should add that the tube contains toluene, which is flammable, and it is vacuum-sealed, so we can't just drill into it. Some kind of optical/laser sensor might be OK, as long as it can recognise a clear liquid.
|
I'm in charge of a module to control the smoothness with which a platform moves; the platform already implements a closed-loop control of its own, but this firmware is closed and I do not have access to the source code.
I am therefore asked to implement a closed-loop control on top of that PID, in a higher layer, above a module that already implements a closed loop. So I have several questions:
Is it conceptually correct to implement a PID controller in a layer above a system that already implements its own closed-loop control?
What behavior might be lost in the lower closed loop?
Could the lower closed loop be negatively influenced by the PID implemented in the top layer?
Could estimating the angular speed, yaw, and pitch from the motor positions using Kalman filters generate values too far from the actual reported values?
|
I have a quadcopter built, and I need to be able to make it to autonomously follow a route and avoid obstacles where possible.
My general plan is to have an array of sensors on a pre-defined "front"; the quadcopter will only go forward. Generally, I'd like to make it so that if the sensors pointing at a higher angle detect something getting closer as the bot moves forward, the quadcopter will stop and descend until the distance to that detected object increases again, and then continue forward. Similarly, I'd like the opposite to happen if the sensors pointing at a lower angle detect something getting closer to the quadcopter.
I'm thinking of having something like 9 small infrared distance detectors (pointing up, forward, down || left, forward, right), basically a 3x3 matrix.
Would anyone have any ideas on the feasibility of this? I'd like to use a Raspberry Pi, but it will probably also need an additional board to read the values from its sensors. In addition, I have no idea which sensors to use, or whether infrared can even work. Any suggestions are more than welcome.
I was also thinking about ultrasonic sensors, but having 9 of them could get cluttered, and I'd worry about their short range, since a crash means death for the quadcopter. I also fear they would interfere with each other.
|
These are the specifications of the motor:
25,000 RPM no-load speed at 12 V
No-load current: 1 A; stall current: 10 A
Torque: 0.36 kg·cm
What is the definition of load current and load speed? Which battery would be most suitable to power this motor?
|
As the title states, is there any way to make a following drone that tracks a GPS unit, and follows/orients camera to that? Similar to this
|
Let's assume I have the following situation, and need to find (x,y).
Is it possible? There does not appear to be more than one solution to the system, but my trigonometry is a bit rusty.
I feel like I need one more distance.
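For reference, assuming the figure shows two known points with measured distances $d_1$ and $d_2$ to the unknown $(x, y)$, the constraints are
$$(x-x_1)^2+(y-y_1)^2=d_1^2, \qquad (x-x_2)^2+(y-y_2)^2=d_2^2$$
and two circles generically intersect in two points, which would be why one more distance (or some other constraint) feels necessary.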
|
I'm working on a project where I need to model a system that is essentially comprised of a series of ball-and-socket joints attached to a base, which is attached in turn to a prismatic joint (rail).
I've read Roy Featherstone's Rigid Body Dynamics Algorithms cover-to-cover, and I've also read the Dynamics section from the Springer Handbook of Robotics (also written by Featherstone).
It took me a long time to get acclimated to using his "spatial vector" and "spatial matrix" notation, but after re-creating all of his notation by hand as an exercise it works out to just be a nice way of concatenating 3x3 and 3x1 matrices and vectors into 6x6 and 6x1 matrices and vectors. The maths he invents to perform operations can be a bit tedious to read as he hijacks some standard notation, but overall everything is very compact, very easy to implement in MATLAB.
My problem is this: How do I add actuators to the model? He walks through explicitly configuring the joint definitions, link definitions, etc., but when it comes to actuators or applied forces he says something like, "Just add a $\tau_a$ here and Bob's your uncle!" - it's not discussed at all. In the Handbook of Robotics he suggests introducing a false acceleration to the fixed base to add the gravitational force term, but doesn't show how to add it in local coordinates nor does he mention how to add the actuator input.
Any help would be greatly appreciated. I've considered starting over with a different book, but it's going to be a great expense of my time to re-acclimate myself to a different set of notation. I'd like to move forward with this, but I feel like I'm just a few inches shy of the finish line.
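For reference, my understanding of the book's own notation is that applied joint forces enter forward dynamics simply as the $\tau$ argument:
$$\ddot{q} = \mathrm{FD}(\text{model},\, q,\, \dot{q},\, \tau)$$
so an actuator at joint $i$ would contribute its torque $\tau_{a}$ to the $i$-th entry of $\tau$; but it is exactly this step, in local coordinates and together with the gravity trick, that I would like to see spelled out.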
|
In terms of robotics, what are the differences between odometry and dead-reckoning?
I read that odometry uses wheel sensors to estimate position, and dead-reckoning also uses wheel sensors, but "heading sensors" as well. Can someone please elaborate on this point for me?
|
I've posted a question regarding this matter that I couldn't solve. I'm reading this paper, in which the authors state:
Linear $x$ and $y$ Motion Control: From the mathematical model one can see
that the motion through the axes $x$ and $y$ depends on $U_{1}$. In fact $U_{1}$ is
the total thrust vector oriented to obtain the desired linear motion.
If we consider $U_{x}$ and $U_{y}$ the orientations of $U_{1}$ responsible for the
motion through x and y axis respectively, we can then extract from
formula (18) the roll and pitch angles necessary to compute the
controls $U_{x}$ and $U_{y}$ ensuring the Lyapunov function to be negative
semi-definite ( see Fig. 2).
The paper is very clear except in the linear motion control. They didn't explicitly state the equations for extracting the angles. The confusing part is when they say
we can then extract from
formula (18) the roll and pitch angles necessary to compute the
controls $U_{x}$ and $U_{y}$
where formula (18) is
$$
U_{x} = \frac{m}{U_{1}} (\cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi) \\
U_{y} = \frac{m}{U_{1}} (\cos\phi \sin\theta \sin\psi - \cos\phi \cos\psi) \\
$$
It seems to me that the roll and pitch angles depend on $U_{x}$ and $U_{y}$; therefore, we compute the roll and pitch angles based on $U_{x}$ and $U_{y}$ to control the linear motion.
|
I am working on a project about robot soccer vision.
How can I use two webcams as a stereo vision system in MATLAB for robot soccer?
|
I've been toying around with the idea of automating the process of testing aquarium water for certain chemicals. Very briefly, salt water aquariums (reefs, specifically) require almost-daily testing for 3-4 chemicals (calcium, alkalinity, ammonia, phosphate). This is typically done by hand, using various kits. There are two main types:
you combine several powders with a fixed amount of aquarium water, and then compare the color the mixture turns with a chart
you combine several liquids together with the aquarium water, and then add another liquid until the mixture turns a color. you then record how much of the final liquid you had to add for the color change to occur (titration).
Both methods are straightforward but tedious. To maintain an aquarium well, you really do need daily readings of all of those metrics, which easily adds up to 30+ minutes daily.
So I'd like to automate the process. The biggest question is: how do I reliably dispense the materials needed? We're talking gram and milliliter units of measure here. The kits come with plastic syringes and spoons of the correct volume for the powders. I need a way to measure out and dispense both, and a way to queue up several days' worth (refilling daily defeats the purpose).
Any ideas?
Edit: this is different from "How to measure and dispense a finite amount of powder or liquid" because of the units of measure involved. I need to be able to reliably dispense ~1 g ±5% of a powder, or 1 ml ±5% of a liquid.
|
Well, I will get straight to my problem. I'm working on a project and I only have 10 days left.
The idea is simple: a wheeled robot with 3 ultrasonic sensors to avoid obstacles.
I've developed the code and it was working fine.
I'm using an Arduino Uno, an L293D driver for the 2 DC motors, 3 HC-SR04 ultrasonic sensors, and the NewPing library.
I've made a kind of shield where I soldered common points for GND and 5 V in order to connect the L293D IC and the sensor pins easily. The problem is that the ultrasonic sensors only showed the expected behavior once! After that, they always returned zero, and sometimes a number appears when I disconnect a sensor!
Is it a power problem? I'm using the USB cable to power the Arduino and the sensors (the motors are powered using 2 LiPo batteries).
Kindly provide me with guidance.
|
Suppose I have a perfect AI to control a robotic arm.
What characteristics should it fulfill to be able to take such common tools as a screwdriver and lineman's pliers and disassemble and then reassemble a conventional notebook computer?
Are there such models available?
It seems to me that arms such as the OWI-535 are only toys, i.e., they can just relocate lightweight objects and that's all. Am I right?
UPDATE
Also suppose that my AI can view the assembly area with multiple HD cameras and can perfectly "understand" what it sees.
I have 2 of these 12 V motors and a 12 V battery:
http://www.enigmaindustries.com/Motors/Bosch_EV_Warrior.htm
I would like to know the best solution for controlling this motor with an Arduino Uno.
Does the motor controller need to have a maximum current rating of 100 A?
I thought of a 100 A transistor connected to a PWM pin of the Arduino, and then controlling the motor with PWM.
Is a voltage regulator better than PWM?
|
I want to make a compact soft robot (actuators, motors, and sensors all in one). The actuators can be pneumatic or dielectric. I need suggestions about manufacturing. I'm open to new ideas.
|
I am currently designing a spherical wrist with which I want to manipulate a 300 g payload.
The design has a 200 mm span, so I'm guessing at about 1.1 N·m (considering the weight of the structure and motors).
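As a sanity check on that guess, the payload alone at full span needs roughly
$$\tau \approx m\,g\,L = 0.3\ \text{kg} \times 9.81\ \text{m/s}^2 \times 0.2\ \text{m} \approx 0.59\ \text{N·m}$$
so 1.1 N·m with the structure and motors included seems plausible.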
I've looked at Maxon and Faulhaber, but can't find any motor + gearbox + encoder combination under 100 g.
Any suggestions?
|
I'm looking into CCTV and am interested in minimising costs by having a motion-tracking camera cover an area that would otherwise require multiple cameras.
There is already something like this on the market, manufactured by a company called NightWatcher (I think). However, it does not track; it merely senses using 3 PIRs and points the camera at 1 of 3 positions, ping-ponging between them if the subject is between sensors.
I like the idea of this, but not the drawbacks, and was wondering whether there is anything I could do with an Arduino or similar to achieve a better result.
I stumbled across this, but am not entirely sure about it. Also, this is for an outdoor application, and that thread is for indoors (if that makes a difference).
https://robotics.stackexchange.com/a/1397/9751
Edit: just in case I have misled you, I want a unit where sensors detect movement and a camera then turns to face that position.
|
I'd like to buy a small vacuum lifter so that I can move playing cards around with robotics. But my "google-fu" is failing me: I don't really know what search terms to use, or which websites to look at to find this kind of component.
In essence, I want an electronic version of a Vacuum Pen.
I don't really know where to search for this kind of component. I've found pneumatic valves and other complicated machinery, but ideally I'd want a self-contained electronic vacuum pen, since I'm only planning to move playing cards around.
Does anyone have an idea where to look for something like this? Thanks.
|
I want to estimate 3D position and position change from photos taken by a flying robot. I need suggestions for fast photo matching.
|
Folks at Programmers Stack Exchange asked me to ask here:
I want to communicate with an Arduino and send integers to it. I am coding this program in C++. I initially used bulk_transfer(), but it sends only char data.
This in the API reference for libusb:
http://libusb.org/static/api-1.0/group__syncio.html
Here is the prototype of bulk_transfer()
int libusb_bulk_transfer (struct libusb_device_handle *dev_handle, unsigned char endpoint, unsigned char *data, int length, int *transferred, unsigned int timeout)
As you can see, data is an unsigned char pointer, that is, a pointer to a buffer containing length unsigned chars. I can successfully send and receive strings. How do I transfer integers with a sign?
Currently I am thinking about a scheme in which the Arduino asks for each digit by sending a character and my program sends the digit as a reply, followed by the sign, which is requested next. Is this solution viable? Or should I transfer the integer as a string? Is there a better way?
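For comparison, this is a minimal sketch of the alternative I am considering: since the unsigned char* that libusb_bulk_transfer() takes is just a raw byte buffer, a signed integer can be sent by copying its bytes directly (little-endian on AVR and x86; both ends must agree on the byte order):
#include <cstdint>
#include <cstdio>
#include <cstring>
int main() {
    int32_t value = -12345;               // sign is preserved in the bytes
    unsigned char buf[sizeof(value)];
    memcpy(buf, &value, sizeof(value));   // 4 raw bytes, two's complement
    // libusb_bulk_transfer(handle, endpoint, buf, sizeof(buf), &sent, 0);
    for (size_t i = 0; i < sizeof(buf); ++i)
        printf("%02X ", buf[i]);          // prints: C7 CF FF FF
    return 0;
}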
|
Lately, as you may have noticed, I have posted some questions regarding position tracking for the nonlinear model. I couldn't get it to work, so I've switched to the linear model in the hope that I can. For the regulation problem, the position control seems to work, but once I switch to tracking, the system starts oscillating, and I don't know why. I have described below what I've done; I hope someone can guide me onto the correct path.
The linear model of the quadrotor, provided here, is
$$
\begin{align}
\ddot{x} &= g \theta \ \ \ \ \ \ \ \ \ \ (1)\\
\ddot{y} &= - g \phi \ \ \ \ \ \ \ \ \ \ (2)\\
\ddot{z} &= \frac{U_{1}}{m} - g \\
\ddot{\phi} &= \frac{L}{J_{x}} U_{2} \\
\ddot{\theta} &= \frac{L}{J_{y}} U_{3} \\
\ddot{\psi} &= \frac{1}{J_{z}} U_{4} \\
\end{align}
$$
In this paper, a position control based on PD is provided. In the aforementioned paper, the desired angles $\phi^{d}$ and $\theta^{d}$ are obtained from (1) and (2); therefore,
$$
\begin{align}
\theta^{d} &= \frac{\ddot{x}^{d}}{g} \\
\phi^{d} &= - \frac{\ddot{y}^{d}}{g}
\end{align}
$$
where
$$
\begin{align}
\ddot{x}^{d} &= Kp(x^{d} - x) + Kd( \dot{x}^{d} - \dot{x} ) \\
\ddot{y}^{d} &= Kp(y^{d} - y) + Kd( \dot{y}^{d} - \dot{y} ) \\
U_{1} &= Kp(z^{d} - z) + Kd( \dot{z}^{d} - \dot{z} ) \\
U_{2} &= Kp(\phi^{d} - \phi) + Kd( \dot{\phi}^{d} - \dot{\phi} ) \\
U_{3} &= Kp(\theta^{d} - \theta) + Kd( \dot{\theta}^{d} - \dot{\theta} ) \\
U_{4} &= Kp(\psi^{d} - \psi) + Kd( \dot{\psi}^{d} - \dot{\psi} ) \\
\end{align}
$$
With the regulation problem, where $x^{d} = 2.5\ \text{m}$, $y^{d} = 3.5\ \text{m}$, and $z^{d} = 4.5\ \text{m}$, the results are shown in the figure (image omitted here).
Now if I change the problem to the tracking one, the results are messed up.
In the last paper, they state
A saturation function is needed to ensure that the reference roll and
pitch angles are within specified limits
Unfortunately, the max values for $\phi$ and $\theta$ are not stated in the paper, but since they use Euler angles, I believe $\phi$ is in the range $(-\frac{\pi}{2},\frac{\pi}{2})$ and $\theta$ in the range $[-\pi, \pi]$.
I'm using the Euler method as the ODE solver because the step size is fixed. The derivatives are also computed by the Euler (finite-difference) method.
This is my code
%######################( PD Controller & Atittude )%%%%%%%%%%%%%%%%%%%%
clear all;
clc;
dt = 0.001;
t = 0;
% initial values of the system
x = 0;
dx = 0;
y = 0;
dy = 0;
z = 0;
dz = 0;
Phi = 0;
dPhi = 0;
Theta = 0;
dTheta = 0;
Psi = pi/3;
dPsi = 0;
%System Parameters:
m = 0.75; % mass (Kg)
L = 0.25; % arm length (m)
Jx = 0.019688; % inertia seen at the rotation axis. (Kg.m^2)
Jy = 0.019688; % inertia seen at the rotation axis. (Kg.m^2)
Jz = 0.039380; % inertia seen at the rotation axis. (Kg.m^2)
g = 9.81; % acceleration due to gravity m/s^2
errorSumX = 0;
errorSumY = 0;
errorSumZ = 0;
errorSumPhi = 0;
errorSumTheta = 0;
pose = load('xyTrajectory.txt');
% Set desired position for tracking task
DesiredX = pose(:,1);
DesiredY = pose(:,2);
DesiredZ = pose(:,3);
% Set desired position for regulation task
% DesiredX(:,1) = 2.5;
% DesiredY(:,1) = 5;
% DesiredZ(:,1) = 7.2;
dDesiredX = 0;
dDesiredY = 0;
dDesiredZ = 0;
DesiredXpre = 0;
DesiredYpre = 0;
DesiredZpre = 0;
dDesiredPhi = 0;
dDesiredTheta = 0;
DesiredPhipre = 0;
DesiredThetapre = 0;
for i = 1:6000
% torque input
%&&&&&&&&&&&&( Ux )&&&&&&&&&&&&&&&&&&
Kpx = 90; Kdx = 25; Kix = 0.0001;
errorSumX = errorSumX + ( DesiredX(i) - x );
% Euler Method Derivative
dDesiredX = ( DesiredX(i) - DesiredXpre ) / dt;
DesiredXpre = DesiredX(i);
Ux = Kpx*( DesiredX(i) - x ) + Kdx*( dDesiredX - dx ) + Kix*errorSumX;
%&&&&&&&&&&&&( Uy )&&&&&&&&&&&&&&&&&&
Kpy = 90; Kdy = 25; Kiy = 0.0001;
errorSumY = errorSumY + ( DesiredY(i) - y );
% Euler Method Derivative
dDesiredY = ( DesiredY(i) - DesiredYpre ) / dt;
DesiredYpre = DesiredY(i);
Uy = Kpy*( DesiredY(i) - y ) + Kdy*( dDesiredY - dy ) + Kiy*errorSumY;
%&&&&&&&&&&&&( U1 )&&&&&&&&&&&&&&&&&&
Kpz = 90; Kdz = 25; Kiz = 0;
errorSumZ = errorSumZ + ( DesiredZ(i) - z );
dDesiredZ = ( DesiredZ(i) - DesiredZpre ) / dt;
DesiredZpre = DesiredZ(i);
U1 = Kpz*( DesiredZ(i) - z ) + Kdz*( dDesiredZ - dz ) + Kiz*errorSumZ;
%#######################################################################
%#######################################################################
%#######################################################################
% Desired Phi and Theta
%disp('before')
DesiredPhi = -Uy/g;
DesiredTheta = Ux/g;
%&&&&&&&&&&&&( U2 )&&&&&&&&&&&&&&&&&&
KpP = 20; KdP = 5; KiP = 0.001;
errorSumPhi = errorSumPhi + ( DesiredPhi - Phi );
% Euler Method Derivative
dDesiredPhi = ( DesiredPhi - DesiredPhipre ) / dt;
DesiredPhipre = DesiredPhi;
U2 = KpP*( DesiredPhi - Phi ) + KdP*( dDesiredPhi - dPhi ) + KiP*errorSumPhi;
%--------------------------------------
%&&&&&&&&&&&&( U3 )&&&&&&&&&&&&&&&&&&
KpT = 90; KdT = 10; KiT = 0.001;
errorSumTheta = errorSumTheta + ( DesiredTheta - Theta );
% Euler Method Derivative
dDesiredTheta = ( DesiredTheta - DesiredThetapre ) / dt;
DesiredThetapre = DesiredTheta;
U3 = KpT*( DesiredTheta - Theta ) + KdT*( dDesiredTheta - dTheta ) + KiT*errorSumTheta;
%--------------------------------------
%&&&&&&&&&&&&( U4 )&&&&&&&&&&&&&&&&&&
KpS = 90; KdS = 10; KiS = 0; DesiredPsi = 0; dDesiredPsi = 0;
U4 = KpS*( DesiredPsi - Psi ) + KdS*( dDesiredPsi - dPsi );
%###################( ODE Equations of Quadrotor )###################
ddx = g * Theta;
dx = dx + ddx*dt;
x = x + dx*dt;
%=======================================================================
ddy = -g * Phi;
dy = dy + ddy*dt;
y = y + dy*dt;
%=======================================================================
ddz = (U1/m) - g;
dz = dz + ddz*dt;
z = z + dz*dt;
%=======================================================================
ddPhi = ( L/Jx )*U2;
dPhi = dPhi + ddPhi*dt;
Phi = Phi + dPhi*dt;
%=======================================================================
ddTheta = ( L/Jy )*U3;
dTheta = dTheta + ddTheta*dt;
Theta = Theta + dTheta*dt;
%=======================================================================
ddPsi = (1/Jz)*U4;
dPsi = dPsi + ddPsi*dt;
Psi = Psi + dPsi*dt;
%=======================================================================
%store the error
ErrorX(i) = ( x - DesiredX(i) );
ErrorY(i) = ( y - DesiredY(i) );
ErrorZ(i) = ( z - DesiredZ(i) );
ErrorPsi(i) = ( Psi - 0 );
X(i) = x;
Y(i) = y;
Z(i) = z;
T(i) = t;
t = t + dt;
end
Figure1 = figure(1);
set(Figure1,'defaulttextinterpreter','latex');
subplot(2,2,1)
plot(T, ErrorX, 'LineWidth', 2)
title('Error in $x$-axis Position (m)')
xlabel('time (sec)')
ylabel('$x_{d}(t) - x(t)$', 'LineWidth', 2)
subplot(2,2,2)
plot(T, ErrorY, 'LineWidth', 2)
title('Error in $y$-axis Position (m)')
xlabel('time (sec)')
ylabel('$y_{d}(t) - y(t)$', 'LineWidth', 2)
subplot(2,2,3)
plot(T, ErrorZ, 'LineWidth', 2)
title('Error in $z$-axis Position (m)')
xlabel('time (sec)')
ylabel('$z_{d} - z(t)$', 'LineWidth', 2)
subplot(2,2,4)
plot(T, ErrorPsi, 'LineWidth', 2)
title('Error in $\psi$ (rad)')
xlabel('time (sec)')
ylabel('$\psi_{d} - \psi(t)$','FontSize',12);
grid on
Figure2 = figure(2);
set(Figure2,'units','normalized','outerposition',[0 0 1 1]);
figure(2)
plot3(X,Y,Z, 'b')
grid on
hold on
plot3(DesiredX, DesiredY, DesiredZ, 'r')
pos = get(Figure2,'Position');
set(Figure2,'PaperPositionMode','Auto','PaperUnits','Inches','PaperSize',[pos(3),pos(4)]);
print(Figure2,'output2','-dpdf','-r0');
For the trajectory, this is the code:
clear all;
clc;
fileID = fopen('xyTrajectory.txt','w');
angle = -pi;
radius = 3;
z = 0;
t = 0;
for i = 1:6000
if ( z < 2 )
z = z + 0.1;
x = 0;
y = 0;
end
if ( z >= 2 )
angle = angle + 0.1;
angle = wrapToPi(angle);
x = radius * cos(angle);
y = radius * sin(angle);
z = 2;
end
X(i) = x;
Y(i) = y;
Z(i) = z;
fprintf(fileID,'%f \t %f \t %f\n',x, y, z);
end
fclose(fileID);
plot3(X,Y,Z)
grid on
|
I'm new to robotics and this is my first time building a quadcopter. I'm unable to work out why I keep losing ESCs.
Most recently in testing, I've managed to calibrate all 4 ESCs and accurately control the speed of all 4 motors. But after neatly securing them to the frame, 1 motor didn't work. I recalibrated the ESCs again and, when running them again, the motor still didn't work. However, the other 3 motors continued to run at first, but also suddenly just stopped altogether.
Research suggested that ESCs have a cut-off voltage, indicating that my battery might be too flat, so I immediately looked to recharging it. To my surprise, the (still very new) battery appeared to have bulged out, indicating that it had been damaged.
Further research suggested that the size of the battery I was using is insufficient for the amount of current drawn by the motors. So, without any PWM applied, I reconnected a new fully charged battery in the hope of listening for any beeps to diagnose, and one ESC immediately coughed up a huge puff of smoke.
Before all of this happened, I only managed to get 2 of the ESCs to run their motors. Despite several attempts at tweaking PWM signals and calibrating them, I ended up replacing the other 2.
Unless there's some obvious reason for my ESCs to keep dying on me, I can only assume that these specific ESCs are badly made and I should ask for my money back.
These are the components I'm using:
Raspberry Pi 2 Model B
Adafruit 16-Channel 12-bit PWM/Servo Driver - I2C interface - PCA9685
RCTimer Mini ESC 40A OPTO BLHeli Firmware (Oneshot125, Support 2-6S)
RCTimer 2208-8 2600kV Outrunner Brushless Motor
Gens ace 2200mAh 11.1V 25C 3S1P Lipo Battery Pack
The Raspberry Pi is powered through its micro USB interface by a 5V step-up voltage regulator connected to a 5000mAh 3.7V LiPo battery. The PWM controller's Vcc pin is powered by the GPIO1 (3.3V) pin of the Raspberry Pi, which also happens to power other sensors.
At the time (when all 4 motors worked), I was able to accurately control them at either 50Hz or 400Hz with 1-2 millisecond duty cycles.
|
I am using DC motors to build a robotic arm. I want to make the base shoulder (which rotates and lifts) more stable and stronger. How should I design this using DC motors?
Also I would like to put the motor for the elbow in the base for efficiency. Which design best suits this?
UPDATE
I am building a robotic arm for a payload of approx. 1-2 kg, using high-torque DC motors. In this model, I am using only a shoulder with a gripper. The gripper is self-made and weighs approximately 400 grams. I want a proper design and material choice so that the shoulder part remains light and stable.
In addition to this, I want to drive the movement of the gripper, i.e. the up and down motion, using a motor located in the base. What should my design be, and is there a better alternative?
|
I'd like to create a camera slider similar to this one.
The only part I'm not sure about is how to set up the camera drive. It looks like I can buy a similar timing belt here, but I'm not sure how to set up the servo to drive the slider, particularly how to keep the belt in contact with the drive pulley.
My fabrication skills are very limited so I need a simple or out of the box solution.
|
There are many 2D position estimation applications that use one camera. Is there any 3D position estimation application or technique that works with a single camera? If there is none, why not?
|
I've already made an Arduino device which detects the trigger event, but now I want it to trigger the recording and storage of video when this event occurs. If the camera could be wirelessly triggered a few feet away from the Arduino unit, that would be optimal, but I can settle for running wires if need be.
I'm looking for suggestions because I'm on a limited budget for this project. I want to avoid reinventing the wheel and ordering parts which I can't get to work with an Arduino.
I'm considering the use of this camera.
https://www.sparkfun.com/products/11418
This is my first Arduino project. Any help is very welcome.
|
Are there good low cost cameras that are frequently used in robotics?
I am assuming there are cameras that are a good fit for robotics ...
Works well with OpenCV
PC Windows support - USB2/USB3 (GigE, USB3 vision cameras seem pricey)
Good image sensing performance
Adjustable focus - manual or motorized (fine focus control would be great)
Do IP cameras make good cameras for robotic vision projects?
|
I would like to make a robotic system which takes as input a video feed, runs some GPU-based image recognition on the video, and outputs commands to a set of motors. The goal is to have the motors react to the video with as little latency as possible, hopefully of the order of 10s of ms. Currently I have a GTX 770m on a laptop running Ubuntu 14.04, which is connected to the camera and doing the heavy image processing. This takes frames at 30Hz and will output motor commands at the same frequency.
After a few days of looking around on the web for how to design such a system, I'm still at a loose end whether (a) it is even feasible (b) if so, what the best approach is to interface the laptop with the motors? The image processing must run on Linux, so there is no leeway to change that part of things.
|
I am interested in robotics, but I also need to update my web development skills, so the question is: is there an idea for a good web application that could be connected with robotics (service robots, industrial robots, etc.)? Maybe there are already some ongoing open-source web application projects for robotics to which I can contribute.
Thanks!
|
What is the difference between multiple robots and swarm robots? What is the key point? Also, what are multi-agent systems? Do multi-agent systems work only in computer simulations or games? These terms are used in similar applications.
|
Are there electric motors, which apply force not in rotational motion, but in longitudinal motion?
They should have an electromagnetic design, with no gears or worm drives.
Such motors would be great for linear actuators. They could transfer both force and feedback.
What is the name of such devices?
|
I'm working with a 4DOF Parallel-Mechanism arm. I'm interested in writing planners for this arm (PRM or RRT) in the configuration space, but I'm not sure how to identify obstacles/collisions.
When writing planners for mobile robots in a 2d workspace, it was easy to define and visualize the workspace and obstacles in which the planner/robot was operating. This website (link) shows a great example of visualizing the workspace and configuration space for a 2DOF arm, but how can I do this for higher dimensions?
|
I'm curious about this alloy and how they say it can be used as an alternative to a traditional compressor. Can anyone explain how this would work?
My goal is to understand that use case so I can adapt alloys in other robotic projects. My gut tells me this is perfect for some kinematics, or other mechanisms, but I'm missing some pieces in this puzzle (how would it work?)
|
123D software can construct a 3D model from photos taken with your phone. It doesn't process the photos on your phone; instead, it sends them to the cloud to create the 3D model. How can I construct a 3D model like this (with only one camera)? I searched, but I can only find information on laser/projector scanners (simple and desktop use only). I think 123D uses only IMU sensors and the camera, so why do they use the cloud? Can a BeagleBone or Raspberry Pi create 3D models like this?
|
I am reading the book "Introduction to Robotics Mechanics & Control", John J Craig., 3rd Ed., Forward transformation problem Examples 2.2 and 2.4.
Ex. 2.2 (Page 29): Frame {B} is rotated relative to frame {A} about the X axis by 60 degrees clockwise, translated 20 units along the Y axis and 15 units along the Z axis. Find P in frame {A}, where P in frame {B} = [0 8 7].
The book's answer is [0 18.062 3.572].
But my answer is [0 30.062 11.572].
Ex. 2.4 (Page 33): Vector P1 has to be rotated by 60 degrees clockwise about the X axis and translated 20 units along the Y axis and 15 units along the Z axis. If P1 is given by [0 8 7], find P2.
Essentially, Ex. 2.2 and 2.4 are the same problem. However, the transformation matrix for Ex. 2.4 has [0 8 7] as the translation vector (the 4th column of T) instead of [0 20 15], and the given answer is again [0 18.062 3.572].
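Working the numbers out explicitly (my own computation, using $R_x$ for a 60-degree clockwise rotation):
$$R_x(-60^\circ)\begin{bmatrix}0\\8\\7\end{bmatrix}=\begin{bmatrix}1&0&0\\0&0.5&0.866\\0&-0.866&0.5\end{bmatrix}\begin{bmatrix}0\\8\\7\end{bmatrix}=\begin{bmatrix}0\\10.062\\-3.428\end{bmatrix}$$
Adding the stated translation $[0\;20\;15]^T$ gives $[0\;30.062\;11.572]^T$ (my answer), whereas adding $[0\;8\;7]^T$ gives $[0\;18.062\;3.572]^T$ (the book's answer). So the two results differ only in which translation vector is used.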
I am not sure if it is just typo, or I am missing some genuine operation. Please let me know your opinion.
Thanks.
|
I'm trying to understand the scan-matching part of Hector SLAM (PPT summary). It seems a little difficult to understand, in some cases, how it is possible to actually perform the alignment of the scans. Can anyone explain this?
In my case, I'm working with a simulation. I'm moving my robot in a corridor-like featureless environment (only two walls) and I don't get a map. Nevertheless, if I move in a sinewave motion, I'm able to get a map. Moreover, if I have an additional feature, the algorithm even shows the real path as long as this feature is seen (right part of the image); otherwise it shows a very weird-looking oscillatory path which does not resemble a sinewave at all. Something important to notice is that the width of the map is pretty accurate (real = 4 m, map's = 4.014 m), and the length of the movement is also somewhat accurate (real = 15 m, map's = 15.47 m). I'm using a Hokuyo URG-04LX laser range finder, no odometry, no IMU. I'm running Ubuntu 14.04 and using ROS Indigo.
I more or less understand how Hector works, but I have no idea why I'm getting this map and especially this trajectory.
Thank you.
|
This paper mentioned the fingerprinting/model-matching case, but I could not find an image-based algorithm. Any suggestions about image-based localization?
|
I wish to build a chess-playing robot with a robot arm as shown on YouTube. Can anyone please tell me which robot arm would suit my purpose and whether it can be bought second-hand? Alternatively, is anybody willing to sell a used chess-arm robot? Please help out.
|
I'm relatively new to robotics, and I'm building a project for which I need a simple wireless connection between two circuits such that when the first circuit is switched on, the other circuit gets switched on too. I'm looking to preferably build something like this on my own, but I have no idea about wireless connections. I only know basic wired robotics. I also know C++ programming if that helps. Apologies if such a question has already been asked.
Regards,
Hanit Banga
|
I'm looking for some cheap hardware that would offer me results decent enough to continue my experimentation.
I've been looking into how to obtain hardware for learning about stereo vision and 3D reconstruction, and I found two basic ways:
- buy 2 cheap webcams and DIY
- buy a stereo camera
From what I understand, small variations in distance and inclination can easily compromise the disparity map, so the DIY version might end up requiring constant recalibration; on the other hand, "professional" stereo cameras range from 500 euro to infinity.
For the moment I'm trying something in between, like the Minoru 3D; however, the overall performance of that camera looks a bit poor, partly because it's a 2009 product, and I can't find any more recent product offering a similar solution.
Can you suggest the best way/product/guide to achieve decent results without spending a fortune?
Thank you very much :)
|
Does anybody know where I can get a MATLAB toolbox or functions to work with a SICK laser scanner (Windows OS)? I'm using a SICK-LDRS2110 with an Ethernet cable, but the SOPAS software does not allow me to program recording times and other specific tasks. Any tips are more than welcome!
Thanks!
|
After having determined the control loops for my quadcopter project, I'm going to determine the motor commands (PWM duty cycles) from the motor forces/torques. I was following the guidelines of this document, but when I tried to take the inverse of the matrix M (page 17), its determinant was equal to 0. Is the procedure correct? Can anyone suggest some other link for doing this conversion? I have searched the Internet but haven't found much about it. Thanks
The part of the document that I'm referring to is the following:
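In case it helps, this is my understanding of a standard "plus"-configuration mixing (a minimal MATLAB sketch with placeholder coefficients; I don't know whether it matches the document's M). If M comes out singular, its rows must be linearly dependent, e.g. because of a sign error in one row.
k = 1e-5;   % rotor thrust coefficient (placeholder value)
b = 1e-7;   % rotor drag/torque coefficient (placeholder value)
L = 0.25;   % arm length in m (placeholder value)
% [U1; U2; U3; U4] = M * [w1^2; w2^2; w3^2; w4^2]
M = [   k     k     k     k ;    % total thrust
        0   -L*k    0    L*k;    % roll torque
      -L*k    0    L*k    0 ;    % pitch torque
       -b     b    -b     b ];   % yaw torque (alternating spin directions)
det(M)                           % nonzero here, so M is invertible
w_sq = M \ [7; 0.1; -0.1; 0.02]  % squared rotor speeds for example inputs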
|
I just un-boxed and set the Create 2 to charge overnight.
How do I program it? Where is the software?
|
I am building a line-following robot based on OpenCV. My onboard computer (an old PandaBoard) runs OpenCV. It calculates the offset from the required path and communicates it to the Arduino via USB. The Arduino then runs PID optimisation on the data and adjusts the speed of the left and right motors.
To my dismay, the communication part is not working, and I've tried hard for a day to fix it with no result. Here is the relevant code running on the PandaBoard:
while (1)
{
    // read 1 byte from IN endpoint 131; the last argument is the timeout in ms
    // (0 means wait forever)
    r = libusb_bulk_transfer(dev_handle, 131, recieve, 1, &actual, 0);
    cout << "r=" << r << endl;
    int a;
    cin >> a;                        // debug pause for keyboard input
    imgvalue = krish.calc_offset();  // offset computed from the camera image
    send[0] = imgvalue & 0xff;       // low byte
    send[1] = imgvalue >> 8;         // high byte
    // make write
    cout << "Data to send->" << imgvalue << "<-" << endl; // print the data we want to write
    cout << "Writing Data..." << endl;
    r = libusb_bulk_transfer(dev_handle, (4 | LIBUSB_ENDPOINT_OUT), send, 2, &actual, 0); // my device's OUT endpoint was 2, found by trial; the device had 2 endpoints: 2 and 129
    if (r == 0 && actual == 2)       // we wrote the 2 bytes successfully
        cout << "Writing Successful!" << endl;
    else
        cout << "Write Error" << endl;
}
where imgvalue is the data to be sent. This is the code running on the Arduino:
void loop()
{
    Serial.write('s');                 // send the start byte
    if (Serial.available() > 0)
        Input_tmp = Serial.read();     // low byte
    if (Serial.available() > 0)
        Input_tmp = Input_tmp | (Serial.read() << 8);  // high byte
    Input = Input_tmp;
    myPID.Compute();
    // adjust the motor speed
}
What happens when I run it is that it pauses at the libusb read operation, as the timeout is zero (infinite). At this point I've tried resetting the Arduino, but this doesn't help. So how do I make my program respond to this start byte sent by the Arduino? Where did I go wrong?
|
I want to detect and identify each of the vehicles passing through a gate.
I have the live video feed of the gate, which I initially thought to process in order to detect the number plates with the help of OpenCV or any other freely available graphics library. The problem is that the size of the number plates may vary very widely, and the language the number plates are written in (Bengali) does not have good OCR performance at all.
The next idea was to put a QR code on the windshield of the vehicles. (Yes, the vehicles supposed to enter the area are private, enlisted vehicles.) But I am not confident that I will be able to detect and identify all the QR codes in real time with 100% accuracy, as the QR codes might get pixelated due to the low resolution of the video.
So can anyone suggest any other cheap way we can adopt to detect and identify the vehicles? Can NFC or any other cheap sensors be used for this purpose?
|
I'm trying to design two PD controllers to control the roll and pitch angles of my quadcopter and a P controller to control the yaw rate. I give the system the reference roll, pitch and yaw rate from a smartphone controller (over WiFi). In the case of roll and pitch, the feedback for the outer 'P' loop is given by my attitude estimation algorithm, while in the inner 'D' loop there is no reference angular rate, and the feedback is provided by a filtered version of the gyroscope data.
As far as the yaw rate is concerned, it is only a P controller: the reference yaw rate is given by the smartphone, and the feedback of the only loop is provided by the smartphone. This is to illustrate the situation. My sampling frequency is 100 Hz (imposed by the attitude estimation algorithm, a Kalman filter, that I'm using). I have tuned my controller gains with MATLAB, imposing a rise time of 0.1 seconds and a maximum percent overshoot of 2% with the root locus. MATLAB is able to find a solution, but with very large gains (like 8000 for P and 100 for D). I did the tuning using a quadcopter model (for each Euler angle) based on the linearized quadcopter model, for instance $$\tau_\Phi = I_x\ddot \Phi \;\Rightarrow\; G_\Phi(s) = \frac{\Phi(s)}{\tau_\Phi(s)} = \frac{1}{I_x s^2}$$ only in order to have a 'reasoned' starting point for my gains, and then re-tune in reality. (The transfer function above is continuous; in my model I have obviously used its discrete version at a 100 Hz sampling rate.)
This is to do a premise of my following questions.
Now, I have to map my controller outputs to duty cycles. Since I'm using PWM at a 25 kHz frequency, my period (in the TIM channel configuration) is 2879.
I have checked the activation threshold (after which the motor starts to move) and the threshold after which it stops increasing its speed: the first is 202 and the second is 2389.
I was following the very good answer of Quadcopter PID output but I still have some questions.
1) As far as the throttle mapping is concerned, do I have to map it in such a way that the values coming from my smartphone controller (in the interval [0, 100]) are not mapped onto the whole [202, 2389] interval, i.e. do I have to 'reserve' some speed so that the quadcopter can still produce angular movement by exploiting differences in the 4 motor speeds even at 100% throttle?
2) Coming back to the fact that MATLAB proposes huge gains for my controllers: this means I cannot directly add the controller output to the duty cycle as stated in the mentioned answer (because I will certainly go out of the [202, 2389] bounds of my TIM pulse). Scaling the output proportionally amounts to altering the gains of the system, and thus moving its poles somewhere else, so the procedure done with MATLAB becomes useless, right? So, what am I doing wrong? I have tried to force MATLAB to bound the gains, for instance to the [0, 100] interval, but in that case it cannot find gains such that my constraints are satisfied.
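To make question 1 concrete, this is the kind of mapping I have in mind (a minimal MATLAB sketch; the 0.8 headroom factor and the example controller outputs are just assumed values):
PULSE_MIN = 202;  PULSE_MAX = 2389;  % measured ESC thresholds
HEADROOM  = 0.8;                     % assumed: keep 20% margin for attitude control
throttle  = 75;                      % example stick value in [0, 100]
base = PULSE_MIN + (throttle/100)*HEADROOM*(PULSE_MAX - PULSE_MIN);
U2 = 30; U3 = -10; U4 = 5;           % example attitude controller outputs
% per-motor command = base plus/minus controller outputs, saturated to the valid band
motor1 = min(max(base + U3 - U4, PULSE_MIN), PULSE_MAX);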
Thank you
|
Currently using Windows 8: what software packages for artificial intelligence programming (robotics branch) are used as standard in today's professional environment? There are lots of internet suggestions, but companies seem to keep this a closely guarded secret. And are the internet rumors true? Would switching to Ubuntu offer me more in terms of depth?
Context: Educational field: Computer Science and Artificial Intelligence; current focus (though obviously with experience in others) in programming languages stands at C++, C and Python. Looking to build, program and develop a human-like bot (NOT aiming for singularity at this point ;)) and am asking this question in order to build my toolbox a little.
|
I have an application that requires data to be streamed from multiple Bluetooth modules to one host controller. Somewhat like multiple Clients and one Server.
The throughput I am looking at is around 1920 bits per second per module.
The SPBT2632C2A.AT2 module only supports the SPP profile, in which I can have a single link (one client, one server). My application needs multiple modules (max 5) to send information to one server.
Is there a way to have one receiving station and multiple transmitting modules using SPP (all modules being the SPBT2632C2A), or do I need a different, higher-end module on the server side which supports multiple SPP links?
Is it advisable to look into a module like the BCM2070 and have a driver-run system?
|
I know this is a question that has been asked many times, but it still isn't clear to me. I read online that it isn't, but some people say that they control their robots under ROS in applications with hard real-time constraints. So, because I need some technical arguments (rather than a plain "ROS is not real time"), I will be more specific (suppose we have ROS under an RTOS):
I read that ROS uses TCP/IP-based communication for ROS topics, and I know that TCP/IP gives no timing guarantees. Does that mean I cannot use topics in a real-time loop? For instance, sending a control signal to my system by publishing it to a topic, with the system sending me feedback via a topic?
If I have an RTOS (e.g. Linux + Xenomai), can I build a real-time control loop for a robot using ROS, or will ROS be a bottleneck?
Maybe the above are naive or I lack some knowledge, so please enlighten me!
Note: I define a hard real-time system (e.g. at 1 kHz) as a system that can guarantee that we will not miss a deadline (if the control loop fails to run every 1 ms, the system fails).
|
EDIT: I realised I missed the point of the paper completely (thanks to very skim reading ;) ). The part of it I was relating to is about how much damping (not how much stiffness) we should display to obtain stability, given a structural stiffness. I changed the question accordingly: what is the achievable stiffness of an impedance/admittance-controlled robot, given its structural and control stiffnesses? (Stiffness/compliance is, of course, mathematically just one of the terms in the total impedance/admittance.)
Let us consider a haptic device with mechanical and control parts, where the mechanical part is not infinitely rigid (it is compliant). Basically, it would be a robot with impedance or admittance control. I thought the perceivable stiffness could be as simple as the series connection of the two stiffnesses, so that the stiffer the mechanical structure is, the better it can display the control stiffness:
$k = \frac{k_e k_c}{k_e + k_c}$
where $k_c$ is the control stiffness. In particular, this would imply $k < k_e$ for any finite $k_c$, i.e. the structural stiffness caps the displayable stiffness. Still, I cannot find any confirmation of this, although something very similar is stated in Samur's "Performance Metrics for Haptic Interfaces". I would be very grateful if you could refer me to some sources or just plain prove it wrong or right (:
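If the series model were right, a quick numeric check (with assumed values) would show the saturation I have in mind, i.e. the displayed stiffness never exceeding the structural one:
k_e = 30e3;                        % structural stiffness in N/m (assumed)
k_c = [1e3 1e4 1e5 1e6];           % control stiffness sweep in N/m (assumed)
k   = (k_e .* k_c) ./ (k_e + k_c)  % tends to k_e as k_c grows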
In a paper (here, p. 728) I only found stability condition for virtual damping value in relation to virtual stiffness, given structural stiffness.
|
I'm working on a robot that would be able to navigate through a maze, avoid obstacles and identify some of the objects in it. I have a monochromatic bitmap of the maze that is supposed to be used for the robot's navigation.
Up till now, I have converted/read the bitmap image of the maze into a 2D array of bits. However, now I need guidance on how to use that array to plan the path for the robot. I would appreciate it if you could share any links as well, because I am new to all this stuff (I am just a 1st year BS electrical engineering student) and would be happy to have a more detailed explanation.
If you need me to elaborate on anything kindly say so.
I would be grateful!
Here's the image of the maze.
This is just a sample image; the robot should be able to work with any maze (image) with similar dimensions. And you are welcome!
Thank you Chuck!
UPDATE
Here's the code for sub2ind in C++. Kindly see if the output is correct:
ofstream subtoind;
subtoind.open("sub2ind.txt");
int sub2ind[96][64] = { 0 };
int ind2subROW[6144] = { 0 };
int ind2subCOL[6144] = { 0 };
int linearIndex = 0;
int j = 0;  // column index (declared here so the snippet compiles on its own)
int z = 0;  // row index
// build the zero-based, column-major linear index of a 96x64 grid
for (j = 1; j <= 64; j++)
{
    for (z = 1; z <= 96; z++)
    {
        linearIndex = z + (j - 1) * 96;          // 1-based, like MATLAB's sub2ind
        sub2ind[z - 1][j - 1] = linearIndex - 1; // stored zero-based
        //ind2subROW[linearIndex-1] = j-1;
        //ind2subCOL[linearIndex-1] = z-1;
    }
}
// write the table to file (note that here j runs over rows and z over columns)
for (j = 0; j < 96; j++)
{
    subtoind << endl; // newline before each row
    cout << endl;
    for (z = 0; z < 64; z++)
    {
        subtoind << sub2ind[j][z] << " ";
    }
}
subtoind.close();
Here's the link to the output file:
https://drive.google.com/file/d/0BwUKS98DxycUSk5Fbnk1dDJnQ00/view?usp=sharing
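For comparison, these are the MATLAB indices I am trying to reproduce (assuming a 96-row by 64-column grid, converted to zero-based):
[r, c] = ndgrid(1:96, 1:64);        % all (row, column) subscript pairs
idx = sub2ind([96 64], r, c) - 1;   % idx(r, c) == (r-1) + (c-1)*96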
|
I'm using the Control System Toolbox provided by MATLAB to estimate the gains of my controller: using root locus design, I get a graph like this one.
My question is: what is the x on the x-axis? Maybe a pole position at a previous iteration of the optimization procedure that I ran to find a gain value that satisfies my requirements? It shouldn't be the open-loop pole position, because my system is formed by two integrators multiplied by a constant (1/inertia). Thanks
Edit: I add the requested details. I start from the following Simulink diagram:
my transfer function is $$G_\Theta(s) = \frac{Y(s)}{U(s)} = \frac{\Theta(s)}{\tau_\Theta(s)} = \frac{1}{I_y s^2}$$ with
Iy = 0.0054 (another little question: is the point at which I'm taking out the torque correct?)
and then I select Analysis > Control Design > Compensator Design. I select Kp and Kd as the gains to be tuned, and I use the root locus for specifying the constraints. Then, in the SISO Design Task, I click Automated Tuning > Optimize Compensator, which automatically tries to find gain values that satisfy my constraints. The white area is the region that satisfies the constraints, and I think the pink squares are my pole positions after the optimization procedure has completed. Is this correct? But in that case, what is the x (pole) that is shown? Thanks
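For reference, a minimal MATLAB sketch of the plant as I model it (Control System Toolbox, same Iy as above):
Iy = 0.0054;
G  = tf(1, [Iy 0 0]);  % Theta(s)/tau(s) = 1/(Iy*s^2), a scaled double integrator
rlocus(G); grid on     % both open-loop poles sit at the origin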
|
My question is: in a lot of cases on the Internet, one finds PD (instead of PID) controllers for the Euler angles of a quadcopter. Why is the integral part so often neglected in this kind of application? Thanks
|
I've seen that it is possible to use a microcontroller to send commands to the Roomba through the SCI, but I am more interested in changing the behavior of the Roomba's operation (e.g. changing the priority of the behaviours).
Is there some IDE for the Roomba?
|
Let me know if this should be on Academia instead, but I posted it here to get responses specifically from people active in robotics development.
I'm currently an undergraduate student completing majors in both mechanical engineering and computer science. I'm still fairly new to the field, but my interest is firmly in electronic and mechanical systems. Next year I can take one of the courses below:
1. Multivariable Calc.
2. Linear Algebra
3. Differential Equations
I want to take all three and likely will eventually, but for the time being my schedule only allows for one. Therefore, I was wondering if you could explain a little bit about how each is applied to the robotics field and which you believe will be most helpful for me to learn now. Thanks in advance!
|
There's an accelerometer in the IMU. The output can then be integrated to estimate the position, at least in theory.
But in practice, there's a huge acceleration from gravity, which varies slightly across locations. Vibrations etc. can be filtered out with low-pass filters, but how do you filter out gravity? Is it simply the case that the vertical vector is ignored when doing any calculations?
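For concreteness, the kind of compensation I imagine is something like this (a MATLAB sketch; it assumes a Z-up world frame and a body-to-world rotation matrix R coming from some attitude estimate):
R      = eye(3);              % placeholder attitude (level); would come from the estimator
f_body = [0.1; -0.2; 9.75];   % hypothetical accelerometer sample (specific force, m/s^2)
g      = [0; 0; -9.81];       % gravity in the world frame
a_lin  = R*f_body + g;        % linear acceleration with gravity removed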
My application is, I want to build a quadcopter that could hover in one place even in the presence of (reasonable) winds: the quadcopter ideally would tilt towards random gusts to maintain a certain position. Every single tutorial I could find on the Internet only uses the accelerometer to estimate where down is when stationary, and simply assumes that using the gyroscope to hold the quadcopter level is enough.
I also want to use the IMU to estimate altitude if possible, of course as an input to something like a Kalman filter in conjunction with a sonar system.
Obviously, for my application GPS is far too slow.
|
Hi, I want to implement a human-arm robot and a task such as moving a glass between two points using the Robotics Toolbox for MATLAB by Peter Corke. I'm a student and a newbie at this kind of thing, so I would like to find a good reference for solving the inverse kinematics of the human arm and an algorithm that implements some kind of obstacle avoidance exploiting the redundancy of the manipulator (7 DOF) using null-space motion. Can anyone suggest a good reference to follow for this implementation with the toolbox? Thanks
|
I couldn't find a Stack Exchange site for artificial intelligence, but I think Robotics comes close, and so I'm posting here.
I recently saw TED talks on AI and the Google car, with these being the most interesting to me:
Hod Lipson - Building "self-aware" robots
Juan Enriquez - The next species of human
Ray Kurzweil - Get ready for hybrid thinking
The third one led me to the 'criticism' section (labeled 'Analysis' on that wiki article, though it certainly at least partially reads as a criticism section as well) of Kurzweil's theory of the brain, namely the "Pattern Recognition Theory of Mind" (PRTM). After some link surfing on the people who have performed analyses of PRTM and their respective academic contributions, I came to learn about cognitive architecture:
"A cognitive architecture can refer to a theory about the structure of
the human mind. One of the main goals of a cognitive architecture is
to summarize the various results of cognitive psychology in a
comprehensive computer model. However, the results need to be in a
formalized form so far that they can be the basis of a computer
program. By combining the individual results are so for a
comprehensive theory of cognition and the other a commercially usable
model arise. Successful cognitive architectures include ACT-R
(Adaptive Control of Thought, ACT), SOAR and OpenCog."
It appears that there are several interesting architectures, including the 3 mentioned above. I read a bit about ACT-R, SOAR, OpenCog, DUAL, CHREST, and CLARION; the list is not comprehensive. It also appears that there are two main types of such architectures: connectionist and symbolic.
Though I have many questions, my main question is this:
What are some quantitative metrics and qualitative properties to measure and compare between the two architecture types?
Other questions
Can all architectures be categorized as one, the other, or some combination of the two, or is there a third, fourth, etc.?
How are the two main types alike? How are they different?
What are some recommended further readings on this topic?
What centres and organizations are leading development in this?
What are some of the computer programming languages, related skill sets, and cross-domain knowledge sets utilized in R&D and product offerings of such systems?
|
I'm looking for an equation (or set of equations) that would allow me to predict (with fair accuracy) how heavy a payload a quadcopter is capable of lifting.
I assume the main variables would be the weight of the copter as well as the size + power of the 4 rotors. What general approach can one use to make such a determination?
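For instance, the first-order sizing rule I have seen used (an assumption on my part, not a formula from a cited source) starts from a desired thrust-to-weight ratio $r$, often taken as $r \approx 2$ so that some control authority remains:
$$m_{\text{payload}} \le \frac{N\,T_{\max}}{r\,g} - m_{\text{copter}}$$
where $N$ is the number of rotors, $T_{\max}$ is the maximum thrust per rotor in newtons, and $g$ is the gravitational acceleration.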
|