I'm following this getting started tutorial. I connected the board to USB and it's detected as mass storage, and I got the driver installed (Win 64), but at the third step I wasn't able to connect to the BeagleBone web server at 192.168.7.2. Did I do anything wrong? Please help.
Here is some troubleshooting info from Getting Started page, I've followed them all. I'm using Chrome, tried the node-webkit based application, not in a Virtual Machine, not using SSH just trying to access the webserver.
Troubleshooting
Do not use Internet Explorer.
One option to browse your board is to use this node-webkit based
application (currently limited to Windows machines):
beaglebone-getting-started.zip.
Virtual machines are not recommended when using the direct USB
connection. It is recommended you use only network connections to your
board if you are using a virtual machine.
When using 'ssh' with the provided image, the username is 'root' and
the password is blank.
Visit beagleboard.org/support for additional debugging tips.
UPDATE:
- I tried installing Ubuntu on my machine and connecting the BeagleBone: it needs no driver, and I can access the web server immediately after ejecting the mass storage and enabling the 'USB-to-Ethernet Interface'. In Windows, however, ejecting the mass storage still does nothing. I'm still trying to make it connect in Windows.
|
I plan to use P8.13 and P8.15 of the BeagleBone in I2C bit-bang mode.
Do I need to use external pull-up resistors in my circuit, or can I use the internal pull-ups available on the BeagleBone Black itself?
|
I'm looking to build a new (first) quadcopter without the conventional flight controller and radio, with an onboard RPi and applying some newfound knowledge on autonomous control to improve my coding skills.
However, since I've never actually built a quadcopter, I don't have any experience with brushless motors.
I'll be using a RPi B+, so controlling them over I2C was something I looked into. The B+ though only has two I2C interfaces. It also only has two hardware PWM pins and I'm unsure whether software PWM would be enough. I found the Afro SimonK-based ESCs from HobbyKing which have I2C (Intended for the MikroKopter).
I've looked around and people have used the Adafruit 16-channel PWM/Servo drivers to control them. Is this an option to look into? Or is there perhaps a better way?
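For what it's worth, that Adafruit board is a PCA9685 and is commonly driven from Python on the Pi. Below is a minimal sketch of sending a standard 1-2 ms servo-style pulse to one ESC channel; it assumes the legacy Adafruit_PCA9685 library and a 50 Hz update rate, and the channel number and pulse widths are placeholder values you would calibrate for your own ESCs.
import time
import Adafruit_PCA9685
pwm = Adafruit_PCA9685.PCA9685()   # default I2C address 0x40
pwm.set_pwm_freq(50)               # standard RC/ESC update rate
def set_pulse_us(channel, pulse_us, freq=50):
    # convert a pulse width in microseconds to the PCA9685's 12-bit tick count
    period_us = 1000000.0 / freq             # 20,000 us at 50 Hz
    ticks = int(pulse_us / period_us * 4096)
    pwm.set_pwm(channel, 0, ticks)
set_pulse_us(0, 1000)   # arm / idle (1 ms)
time.sleep(2)
set_pulse_us(0, 1200)   # low throttle (1.2 ms)
time.sleep(1)
set_pulse_us(0, 1000)   # back to idle
With this, the two hardware I2C pins drive all of the channels, so the B+'s limited PWM pins stop being the bottleneck.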
Also, would it be particularly safe to run the RPi off an ESC's BEC? It's confusing because the ESC will be powered on before the RPi comes up. What do ESCs do when they have bad input?
|
I have the task of developing a simulation of an adaptive robot control system, but I don't seem to have anyone to discuss my uncertainties with. I want to keep the simulation as simple as possible, as I have a very tight deadline and it's a one-off project that I will most probably never use again.
The minimal behaviour that the agent is supposed to exhibit is wall and obstacle avoidance. It can be extended to avoiding small objects and exploring large ones.
I've decided to go with a simple feedback control system.
To begin with, I'm struggling to decide how to represent the map of the agent's environment. What I mean is: what if I want a wall to run from coordinate [0,0] to [0,0.5]? I could hard-code it, e.g. keep a matrix with the coordinates of all obstacles, but how fine do I make the units? What if I have two neighbouring coordinates [0,0.01] and [0,0.02], but the agent gets a 'clear to go' for coordinate [0,0.05]? In that case it doesn't know that it is actually about to walk into a wall. I've heard of something called an occupancy grid map, but I don't exactly understand how it works or how to implement it.
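In case it helps, an occupancy grid simply discretizes the world into fixed-size cells and marks each cell free or occupied, so "am I about to walk into a wall?" becomes a cell lookup instead of a coordinate comparison. A minimal sketch (the cell size and world extent are arbitrary example values):
import numpy as np
CELL = 0.01                      # cell size in world units (1 cm here)
W, H = 2.0, 2.0                  # world extent
grid = np.zeros((int(H / CELL), int(W / CELL)), dtype=np.uint8)   # 0 = free, 1 = occupied
def to_cell(x, y):
    # world coordinates -> grid indices
    return int(y / CELL), int(x / CELL)
def mark_wall(x0, y0, x1, y1, step=CELL / 2):
    # rasterize a straight wall segment into the grid
    n = int(max(abs(x1 - x0), abs(y1 - y0)) / step) + 1
    for x, y in zip(np.linspace(x0, x1, n), np.linspace(y0, y1, n)):
        grid[to_cell(x, y)] = 1
def is_free(x, y):
    return grid[to_cell(x, y)] == 0
mark_wall(0.0, 0.0, 0.0, 0.5)    # the wall from [0,0] to [0,0.5]
print(is_free(0.0, 0.05))        # False: the agent would hit the wall here
print(is_free(0.5, 0.5))         # True
The cell size is the resolution trade-off: anything smaller than a cell is invisible, and anything larger shows up no matter where inside the cell the agent queries.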
Another thing I am struggling with is how to distinguish between a wall and an obstacle, and then how to let the agent know how big that obstacle is so that it can either avoid it or explore it.
Eh, I'm really puzzled with this project.
I would really appreciate any thoughts or directions. Thank you. :-)
|
I have a 4-DOF robot arm system with 4 revolute joints arranged in an open-chain fashion like below:
Assume that each link’s mass is a point mass located at p_i and each link’s center of mass is at p_i.
What I am trying to do is calculate the center of mass Jacobian matrix of the arm.
I found some related material online (Center of Mass Jacobian), but I am still not very sure how to calculate it. Could anybody give me a hint? Thanks!
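For reference, with point masses the usual construction is to stack the linear-velocity Jacobians of the individual mass points: if $J_{v}^{(i)}(q)$ maps the joint velocities to the velocity of $p_i$, the center-of-mass Jacobian is their mass-weighted average,
$$
J_{com}(q) = \frac{1}{M}\sum_{i=1}^{4} m_i\, J_{v}^{(i)}(q), \qquad M = \sum_{i=1}^{4} m_i,
$$
so that $\dot{p}_{com} = J_{com}(q)\,\dot{q}$. Each $J_{v}^{(i)}$ has non-zero columns only for the joints between the base and link $i$.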
|
I've been working on a robot recently which uses ultrasonic sensors as an integral part of its navigation.
While testing the sensors I noticed some strange behaviour: they frequently stop functioning and bring the entire Arduino Mega I'm working with to a halt. The strange part is that these stops seem to be entirely random. On some occasions the sensor will read values consistently (at maybe 20 values per second) for 10+ seconds, then all of a sudden it will slow to only 2-3 values per second with stalls in between.
I have tested several sensors and different ranging code, yet the problem has persisted.
This leads me to believe the issue is with the arduino mega itself, but I am unsure how to verify this. Any advice?
Thanks in advance!
PS: other pins on the Mega seem to be working fine, i.e. analog pins for IR reflectance sensors and PWM pins for driving 2 DC motors.
|
I want to build a quadcopter. I want to know how to calculate the thrust or lift generated by a motor; I am not aware of the motor's capabilities, so can you explain how to calculate the thrust or lift generated for an assumed motor? Also, what is the maximum payload a quadcopter can lift for a given thrust?
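One rough way to bound this, knowing only the power a motor can deliver and the propeller diameter, is ideal momentum (actuator-disk) theory; real propellers deliver perhaps 60-80% of the ideal figure. A small sketch with made-up example numbers:
import math
rho = 1.225            # air density, kg/m^3 (sea level)
D   = 0.254            # propeller diameter, m (10-inch prop, example value)
P   = 120.0            # mechanical power per motor, W (example value)
eta = 0.7              # rough propeller efficiency factor (assumption)
A = math.pi * (D / 2) ** 2                          # actuator disk area
T_ideal = (2.0 * rho * A * P ** 2) ** (1.0 / 3.0)   # ideal static thrust, N
T = eta * T_ideal                                   # more realistic estimate
total_thrust = 4 * T                                # four motors
hover_mass = total_thrust / 9.81                    # mass at thrust/weight = 1
print("thrust per motor ~ %.1f N, max hover mass ~ %.2f kg" % (T, hover_mass))
As a rule of thumb the all-up weight (frame, battery, payload) should stay at or below roughly half of the maximum total thrust, so the payload is whatever is left of that budget after the airframe and battery.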
|
I'm looking for a good source of robotic components like wheeled/tracked robot chassis, motors, sensors, communication modules and mechanics. I'm thinking about using a Raspberry Pi and an Arduino as platforms for automation; is that a good idea? I'm asking because I don't yet know much about the motors/drives used for powering robots.
Thanks!
Uli
|
I have an Arduino Mega board and the Adafruit Ultimate GPS Logger + GPS module shield. I have these two connected together using headers and have the entire thing mounted on my drone. Currently, I have a code that I found online and modified slightly to get GPS coordinates in NMEA format and parse them for the information I actually want. I can store these in an SD card.
The thing is, I want to use the Arduino GSM shield to somehow send this data, either from the SD card or directly, to a folder in Dropbox. I have no idea how to do that, if it's possible at all. I just started working with Arduino about a month ago, so I apologize if my question sounds particularly noob-ish.
Could anyone on this forum at least guide me on how to approach this problem? Thanks!
|
I'm trying to do graph optimization with g2o, mainly in order to perform loop closure. However, finding minimal working examples online is an issue (I've found this project as well as this one; the second, though, has the form of a library, so one cannot really see how the author uses things).
In contrast to online loop closure, where people update and optimize a graph every time they detect a loop, I'm doing graph optimization only once, after pairwise incremental registration. So in my case, pairwise registration and global, graph-based optimization are two separate stages, where the result of the first is the input for the second.
I already have a working solution, but the way that works for me is quite different from the usual use of g2o:
As nodes I have identity matrices (i.e. I consider that my point clouds are already transformed with the poses of the pairwise registration step), and as edges I use the relative transformation based on the keypoints of the point clouds (the keypoints are also transformed). So in this case I penalize deviations of the relative pose from the identity matrix.
As information matrix (inverse of covariance) I simply use a 6x6 identity matrix multiplied by the number of found correspondences (as in this case).
The result of the graph is an update matrix, i.e. I have to multiply the camera poses by it.
Although this works in many/most cases, it is quite an unusual approach, and one cannot draw the graph for debugging (all nodes start as identities and the result after optimization is a 3D path), which means that if something goes wrong it is not always easy to get an intuition about it.
So I'm trying to follow the classic approach:
The vertices/nodes are the poses of the pairwise registration
The edges are the relative transformations based on the keypoints/features of the raw pointclouds (i.e. in the camera frame, not transformed by the poses of the pairwise registration)
The output are the new poses, i.e. one simply replaces the old poses with the new ones
Drawing the graph in this case makes sense. For example in case of scanning an object with a turntable, the camera poses form a circle in 3d space.
I'm trying to form all the edges and then optimize only at one stage (this doesn't mean only 1 LM iteration though).
However, I cannot get things running nicely with the second approach.
I've experimented a lot with the direction of the edges and the relative transformation used as the measurement in the edges; everything looks as expected, but still no luck. For simplicity I still use the information matrix mentioned above: a 6x6 identity matrix multiplied by the number of correspondences. In theory the information matrix is the inverse of the covariance, but I don't actually compute that, for simplicity (plus, following this way of computing the covariance is not very easy).
Are there any minimal working examples that I'm not aware of?
Is there something fundamentally wrong in what I describe above?
Are there any rules of thumb (e.g. the first node in both approaches above is fixed) that I should follow and might not be aware of?
Update: More specific questions
The nodes hold the poses of the robot/camera. It is unclear though at which reference frame they are defined. If it is the world coordinate frame, is it defined according to the camera or according to the object, i.e. first acquired pointcloud? This would affect the accumulation of the pose matrices during incremental registration (before the g2o stage - I try to form and optimize the graph only once at the end, for all the frames/pointclouds).
The edge (Src->Tgt) constraints hold the relative transformation from pointcloudSrc to pointcloudTgt. Is it just the transformation based on the features of the two clouds in the local coordinate frame of pointcloudSrc? Is there any tricky point regarding the direction, or is consistency with the relative transformation enough?
The first node is always fixed. Does the fixed node affect the direction of the edges that start or end at it?
Is there any other tricky point that could hinder the implementation?
I'm working in millimetre instead of metre units; I'm not sure whether this affects the g2o solvers in any way. (I wouldn't expect so, but a naive use of g2o that was giving some usable results was influenced by it.)
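One way I would double-check the conventions (edge direction, reference frame) before blaming g2o is to verify offline that every edge measurement reproduces the relative pose implied by its two vertex estimates: in the classic formulation an edge i->j with measurement $Z_{ij}$ should satisfy $Z_{ij} \approx T_i^{-1} T_j$ when everything is consistent. A small numpy sketch of that check, using 4x4 homogeneous matrices (the variable names are mine):
import numpy as np
def relative(T_i, T_j):
    # relative transform of vertex j expressed in the frame of vertex i
    return np.linalg.inv(T_i) @ T_j
def edge_residual(T_i, T_j, Z_ij):
    # how far the measurement Z_ij is from the relative pose implied by the vertices
    E = np.linalg.inv(Z_ij) @ relative(T_i, T_j)
    trans_err = np.linalg.norm(E[:3, 3])
    rot_err = np.arccos(np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return trans_err, rot_err
# poses[k]: 4x4 pose of frame k in the world frame (accumulated pairwise registration)
# edges: list of (i, j, Z_ij), Z_ij being the pairwise registration result from i to j
def check_graph(poses, edges):
    for i, j, Z_ij in edges:
        t_err, r_err = edge_residual(poses[i], poses[j], Z_ij)
        print("edge %d->%d: trans %.3f, rot %.2f deg" % (i, j, t_err, np.degrees(r_err)))
If the residuals are already large before optimization, the edge direction or the frame in which the poses are accumulated is inconsistent, and the optimizer can only make things worse.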
|
I need the specifications for the Create 2 for research purposes, and I think I'm going to need a computer with high computational power on board.
Please suggest a suitable configuration.
|
We bought a new Create 2 robot and started using it, but when we issue the dock command the robot moves for a bit and does not go back to the base.
The base is not hidden or obstructed, and the Create 2 is just a couple of feet away. We need help figuring out why it does not see the base.
To clarify: even pressing the DOCK button on the Create 2 does not make it go back to the base.
|
My name is Dylan. We are doing a project on the iRobot Create and would like to see a specification graph showing the battery discharge in volts over time. My robot is the iRobot Create 1.
The battery is the Roomba Advanced Power pack: a 14.4 V nickel-metal hydride pack that delivers 3000 mAh.
|
I'm trying to implement the tracking problem for this example using PID controller. The dynamic equation is
$$
I \ddot{\theta} + d \dot{\theta} + mgL \sin(\theta) = u
$$
where
$\theta$ : joint variable.
$u$ : joint torque
$m$ : mass.
$L$ : distance between centre mass and joint.
$d$ : viscous friction coefficient
$I$ : inertia seen at the rotation axis.
$\textbf{Regulation Problem:}$
In this problem, the desired angle $\theta_{d}$ is constant and $\theta(t)$ $\rightarrow \theta_{d}$ and $\dot{\theta}(t)$ $\rightarrow 0$ as $t$ $\rightarrow \infty$. For PID controller, the input $u$ is determined as follows
$$
u = K_{p} (\theta_{d} - \theta(t)) + K_{d}( \underbrace{0}_{\dot{\theta}_{d}} - \dot{\theta}(t) ) + K_{i}\int^{t}_{0} (\theta_{d} - \theta(\tau)) d\tau
$$
The result is shown in the attached plot, and this is my code, main.m:
clear all
clc
global error;
error = 0;
t = 0:0.1:5;
x0 = [0; 0];
[t, x] = ode45('ODESolver', t, x0);
e = x(:,1) - (pi/2); % Error theta
plot(t, e, 'r', 'LineWidth', 2);
title('Regulation Problem','Interpreter','LaTex');
xlabel('time (sec)');
ylabel('$\theta_{d} - \theta(t)$', 'Interpreter','LaTex');
grid on
and ODESolver.m is
function dx = ODESolver(t, x)
global error; % for PID controller
dx = zeros(2,1);
%Parameters:
m = 0.5; % mass (Kg)
d = 0.0023e-6; % viscous friction coefficient
L = 1; % arm length (m)
I = 1/3*m*L^2; % inertia seen at the rotation axis. (Kg.m^2)
g = 9.81; % acceleration due to gravity m/s^2
% PID tuning
Kp = 5;
Kd = 1.9;
Ki = 0.02;
% u: joint torque
u = Kp*(pi/2 - x(1)) + Kd*(-x(2)) + Ki*error;
error = error + (pi/2 - x(1));
dx(1) = x(2);
dx(2) = 1/I*(u - d*x(2) - m*g*L*sin(x(1)));
end
$\textbf{Tracking Problem:}$
Now I would like to implement the tracking problem in which the desired angle $\theta_{d}$ is not constant (i.e. $\theta_{d}(t)$); therefore, $\theta(t)$ $\rightarrow \theta_{d}(t)$ and $\dot{\theta}(t)$ $\rightarrow \dot{\theta}_{d}(t)$ as $t$ $\rightarrow \infty$. The input is
$$
u = K_{p} (\theta_{d}(t) - \theta(t)) + K_{d}( \dot{\theta}_{d}(t) - \dot{\theta}(t) ) + K_{i}\int^{t}_{0} (\theta_{d}(\tau) - \theta(\tau)) d\tau
$$
Now I have two problems: how to compute $\dot{\theta}_{d}(t)$ accurately, and how to read from the txt file, given that the step size of ode45 is not fixed. For the first problem, if I use the naive approach, which is
$$
\dot{f}(x) = \frac{f(x+h)-f(x)}{h}
$$
then the error grows if the step size is not small enough. The second problem is that the desired trajectory is stored in a txt file, which means I have to read the data at a fixed step size, but I've read that ode45 does not use a fixed step size. Any suggestions?
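A common way around both issues is not to index the trajectory with an integer counter at all, but to store time/angle pairs and interpolate inside the ODE function at whatever time the solver asks for; the reference derivative can be precomputed on the fixed grid. Here is a minimal sketch of that idea in Python/SciPy (the same structure carries over to MATLAB with interp1 and a function handle):
import numpy as np
from scipy.integrate import solve_ivp
# desired trajectory on a fixed grid (stand-in for the contents of trajectory.txt)
t_grid = np.arange(0, 4.8, 0.1)
theta_d_grid = t_grid.copy()                        # e.g. a ramp from 0 to ~3*pi/2
dtheta_d_grid = np.gradient(theta_d_grid, t_grid)   # reference derivative on the grid
m, d, L = 0.5, 0.0023e-6, 1.0
I, g = m * L**2 / 3.0, 9.81
Kp, Kd = 35.5, 12.9
def rhs(t, x):
    # interpolate the reference at the solver's (variable) time points
    th_d = np.interp(t, t_grid, theta_d_grid)
    dth_d = np.interp(t, t_grid, dtheta_d_grid)
    u = Kp * (th_d - x[0]) + Kd * (dth_d - x[1])    # PD only in this sketch
    return [x[1], (u - d * x[1] - m * g * L * np.sin(x[0])) / I]
sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), [0.0, 0.0], t_eval=t_grid)
If the integral term is needed, the cleanest fix is to add the accumulated error as a third state variable (so the solver integrates it itself) rather than keeping it in a global that is updated once per call.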
Edit:
For tracking problem, this is my code
main.m
clear all
clc
global error theta_d dt;
error = 0;
theta_d = load('trajectory.txt');
i = 1;
t(i) = 0;
dt = 0.1;
numel(theta_d)
while ( i < numel(theta_d) )
i = i + 1;
t(i) = t(i-1) + dt;
end
x0 = [0; 0];
options= odeset('Reltol',dt,'Stats','on');
[t, x] = ode45(@ODESolver, t, x0, options);
e = x(:,1) - theta_d; % Error theta
plot(t, x(:,2), 'r', 'LineWidth', 2);
title('Tracking Problem','Interpreter','LaTex');
xlabel('time (sec)');
ylabel('$\dot{\theta}(t)$', 'Interpreter','LaTex');
grid on
ODESolver.m
function dx = ODESolver(t, x)
persistent i theta_dPrev
if isempty(i)
i = 1;
theta_dPrev = 0;
end
global error theta_d dt ;
dx = zeros(2,1);
%Parameters:
m = 0.5; % mass (Kg)
d = 0.0023e-6; % viscous friction coefficient
L = 1; % arm length (m)
I = 1/3*m*L^2; % inertia seen at the rotation axis. (Kg.m^2)
g = 9.81; % acceleration due to gravity m/s^2
% PID tuning
Kp = 35.5;
Kd = 12.9;
Ki = 1.5;
if ( i == 49 )
i = 48;
end
% theta_d first derivative
theta_dDot = ( theta_d(i) - theta_dPrev ) / dt;
theta_dPrev = theta_d(i);
% u: joint torque
u = Kp*(theta_d(i) - x(1)) + Kd*( theta_dDot - x(2)) + Ki*error;
error = error + (theta_d(i) - x(1)); % accumulate the position error (not velocity minus position)
dx(1) = x(2);
dx(2) = 1/I*(u - d*x(2) - m*g*L*sin(x(1)));
i = i + 1;
end
trajectory's code is
clear all
clc
a = 0:0.1:(3*pi)/2;
file = fopen('trajectory.txt','w');
for i = 1:length(a)
fprintf(file,'%4f \n',a(i));
end
fclose(file);
The resulting velocity is shown in the attached plot.
Is this a correct approach to solving the tracking problem?
|
I want to measure the real-time RPM of the wheels. I think an incremental rotary encoder would be good, but I am confused about how to interface it with brushless geared DC motors. From the images I am not quite sure whether one rotary encoder per wheel would suffice, or whether I need another sensor along with it.
I am doing my project on an Arduino Uno.
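For reference, a single incremental encoder per wheel (or on the motor shaft, divided by the gear ratio) is enough for RPM: count pulses over a sampling window and use
$$
\text{RPM} = \frac{60\,\Delta N}{\text{CPR}\cdot\Delta t},
$$
where $\Delta N$ is the number of counts seen in the interval $\Delta t$ and CPR is the encoder's counts per revolution. The second (quadrature) channel is only needed if you also want the direction of rotation.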
|
I want to build a cheap robot programmable in Scratch graphical language, that could be employed during lessons in school. Scratch code is interpreted on a PC, so on the robot there should be only the code that receives specific commands (i.e. drive forward) and transmits sensors' measurements.
I'm looking for a wireless technology that will allow me to exchange information between robot and PC with at least 30Hz rate. It should also allow to work at least 16 robots simultaneously in the same room and have a range of at least 20m.
I did tests with Bluetooth, but sometimes there are connectivity issues, and pairing devices can be a hassle in a classroom. I have also tried WiFi modules, but pinging one showed an average time of 19 ms with a maximum of more than 500 ms, so I'm afraid it won't be able to control a line-follower robot, for example.
Can you point me to some other, preferably cheap (under $10 per module) wireless technologies? Or are my worries about WiFi exaggerated?
|
I am trying to work with the create2. In using the "get distance traveled" command (id 142) I am getting back incorrect data. My simple test case logic is
I am working with the Create2_TetheredDrive.py example
and adding this
elif k == 'PLUS' or k == 'MINUS':  # Move 200mm forward or backward
    # reset distance measurement by sending request
    sendCommandASCII('142 19')
    # ignore/discard the data returned
    recv_basic(connection)
    # set velocity in mm/s
    v = 200
    if k == 'MINUS':
        v = -v
    # start moving
    cmd = struct.pack(">Bhh", 145, v, v)
    sendCommandRaw(cmd)
    # pause 1 second
    time.sleep(1)
    # stop moving
    cmd = struct.pack(">Bhh", 145, 0, 0)
    sendCommandRaw(cmd)
    # get distance traveled
    sendCommandASCII('142 19')
    data = recv_basic(connection)
    dist = struct.unpack('>h', data)
    print(dist)
I consistently get numbers near -25 when moving forward and +25 when moving backward.
If I wait 2 seconds, I get -50 for moving forward and +50 for moving backward. The documentation says the command should return the distance traveled in mm, so these numbers seem to be off by a factor of about -8.
Anyone have any suggestions? Thanks.
p.s. I had to add this function to the example as well
def recv_basic(the_socket):
    the_socket.settimeout(0.1)
    total_data = []
    while True:
        try:
            data = the_socket.recv(8192)
            total_data.append(data)
        except:
            break
    return ''.join(total_data)
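In case it is useful, a workaround many people use instead of packet 19 is to read the raw wheel encoder counts (packets 43 and 44) and convert them to millimetres yourself. The sketch below shows the conversion; the wheel diameter (72 mm) and counts per revolution (508.8) are the values I believe the OI spec gives, so please double-check them there before relying on them.
import math
import struct
WHEEL_DIAMETER_MM = 72.0      # per the Create 2 OI spec (verify)
COUNTS_PER_REV = 508.8        # per the Create 2 OI spec (verify)
MM_PER_COUNT = math.pi * WHEEL_DIAMETER_MM / COUNTS_PER_REV
def decode_encoder(data):
    # packets 43/44 return a 16-bit big-endian count
    return struct.unpack('>h', data)[0]
def distance_mm(left_before, right_before, left_after, right_after):
    # average wheel travel in mm between two encoder readings (ignores rollover)
    d_left = (left_after - left_before) * MM_PER_COUNT
    d_right = (right_after - right_before) * MM_PER_COUNT
    return (d_left + d_right) / 2.0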
|
I teach a university sophomore-level MATLAB programming class for engineers, and I am planning on using the Create 2 for their final project. There is a nice simulator and MATLAB toolbox for the Create, but the toolbox uses some commands that no longer exist on the Create 2, so it doesn't work correctly, and of course it doesn't support any of the newer commands. In addition, I want to be able to "cut the cord", so I am using a Raspberry Pi on the Create to pipe data to the serial port, and TCP/IP sockets to send the data from a remote computer running MATLAB to the Pi/Create. If anyone is working on a similar configuration, I'd love to trade notes and share the pain.
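For the Pi side, the piece that pipes bytes between a TCP socket and the serial port can be as small as the sketch below (it assumes pyserial; the device name, baud rate and TCP port are placeholders). MATLAB then just opens a tcpip connection to the Pi and reads/writes raw OI bytes.
import socket
import serial
ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.05)   # serial link to the Create 2
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('', 9000))
srv.listen(1)
while True:
    conn, addr = srv.accept()
    conn.settimeout(0.05)
    try:
        while True:
            try:
                data = conn.recv(4096)          # bytes from MATLAB -> robot
                if not data:
                    break
                ser.write(data)
            except socket.timeout:
                pass
            pending = ser.read(4096)            # bytes from robot -> MATLAB
            if pending:
                conn.sendall(pending)
    finally:
        conn.close()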
|
Are there some general rules for the relative robustness of monocular versus stereo vision when considering object detection? I am especially interested in the automotive field, i.e. distance/obstacle/car detection (see the video links below).
Someone told me monocular vision is more robust than stereo. I guess this may be true if the monocular algorithm is well written (and especially verified over lots of input data)... but once you input (image) data that has not been verified it may probably provide unexpected results, right? With stereo vision one does not really care about the contents of the image as long as texture/lighting conditions allow stereo matching and the object detection is then done within the point cloud.
I consider following usage:
Monocular
Stereo
The monocular sample video sometimes seems to have problems detecting the cars in front (the bounding boxes disappear once in a while). The stereo sample seems to be more robust: the car in front is clearly detected in every frame of the sequence.
|
I'm writing some Quad Copter software and beginning to implement an altitude hold mode.
To enable me to do this I need to get an accurate reading for vertical velocity. I plan to use a Kalman filter for this but first I need to ensure that I'm getting the correct velocity from each individual sensor.
I have done this but I'm not 100% sure its correct so I was hoping to get some confirmation on here.
My first sensor is a Lidar distance sensor, I calculated acceleration and velocity using the following code:
float LidarLitePwm::getDisplacement()
{
int currentAltitude = read();
float displacement = currentAltitude - _oldAltitude;
_oldAltitude = currentAltitude;
return displacement; //cm
}
//Time since last update
float time = (1.0 / ((float)FLIGHT_CONTROLLER_FREQUENCY / 10.00)); // 50Hz, 0.02s
float lidarDisplacement = _lidar->getDisplacement();
_currentLidarVelocity = lidarDisplacement / time;
The second sensor is an accelerometer. I calculated acceleration and velocity using the following code:
Imu::Acceleration Imu::getAcceleration()
{
//Get quaternion
float q[4];
_freeImu.getQ(q);
//Get raw data
float values[9];
_freeImu.getValues(values);
//Extract accelerometer data
float acc[3];
acc[0]= values[0]; //x
acc[1]= values[1]; //y
acc[2]= values[2]; //z
//Gravity compensate
_freeImu.gravityCompensateAcc(acc, q);
//Convert acceleration from G to cm/s/s
_acceleration.x = acc[0] * 9.8 * 100;
_acceleration.y = acc[1] * 9.8 * 100;
_acceleration.z = acc[2] * 9.8 * 100; // use acc[2] for z (acc[1] was a copy-paste slip)
return _acceleration; //cm/s/s
}
//Time since last update
float time = (1.0 / ((float)FLIGHT_CONTROLLER_FREQUENCY / 10.00)); // 50Hz, 0.02s
//Get accel
Imu::Acceleration imuAcceleration = _imu->getAcceleration();
//Get velocity
currentZVelocity += imuAcceleration.z * time; //cm/s
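As an aside, once both per-sensor velocities look sane, a quick way to check the fusion step before committing to a full Kalman filter is a complementary filter; below is a minimal Python sketch of the idea (the blend factor alpha is just an assumed tuning value, and the variable names mirror the C++ above).
alpha = 0.98
dt = 0.02            # 50 Hz loop, as in the code above
def fuse(v_est, accel_z, lidar_velocity):
    # trust the accelerometer over short horizons and the LIDAR over long ones
    predicted = v_est + accel_z * dt                     # integrate acceleration (cm/s)
    return alpha * predicted + (1.0 - alpha) * lidar_velocity
# inside the 50 Hz loop:
# v_est = fuse(v_est, imuAcceleration.z, currentLidarVelocity)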
It would be great if someone could confirm whether the two velocity calculations above are correct (or not).
Thanks
Joe
|
I'm trying to select a brushed DC motor for a project. I tried following the advice on sizing electric motors, mentioned in this question, but a few details were missing, and I'm unsure if I properly followed the procedure.
For my application, I need:
Nm = number of motors = 2
Wd = wheel diameter = 12 cm
Wp = estimated weight of platform = 5 kg
Minc = maximum incline under load = 5 degrees
Vmax = maximum velocity under load = 5 km/hr
Fpush = maximum pushing force = 1.25 kg
Ur = coefficient of rolling friction = 0.015
These are my calculations:
Step 1: Determine total applied force at worst case.
Ftotal = Wp * (Ur*cos(Minc) + sin(Minc)) + Fpush = 1.7604933161 kilogram
Step 2: Calculate power requirement.
Vradps = maximum velocity under load in radians/second = 23.1481481481 radian / second
Pmotor = required power per motor = (Ftotal * Vradps * Wd/2)/Nm = 1.22256480284 kilogram * meter * radian / second
Step 3: Calculate torque and speed requirement.
Tmotor = required torque per motor = Pmotor/Vradps = 5281.47994829 centimeter * gram = 73.345953832 inch * ounce
RPMmin = required revolutions per minute per motor = Vradps / 0.104719755 = 221.048532325 rev / minute
Are my calculations correct? Intuitively, the final Tmotor and RPMmin values seem right, but my calculation for Pmotor doesn't exactly match the one used in the link, which doesn't explicitly do the conversion to radians / second and therefore doesn't result in the proper units.
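For what it's worth, I think the unit trouble comes from treating kilograms as a force: multiplying the mass-based terms by $g$ gives newtons, and the per-motor power then comes out in watts without any radian bookkeeping,
$$
F_{total} = g\left[W_p\left(U_r\cos M_{inc} + \sin M_{inc}\right) + F_{push}\right] \approx 17.3\ \text{N},
\qquad
P_{motor} = \frac{F_{total}\, V_{max}}{N_m} \approx 12\ \text{W},
$$
with $V_{max}$ in m/s (5 km/h ≈ 1.39 m/s). Torque and speed then follow as before from $\omega = V_{max}/(W_d/2)$ and $T = P_{motor}/\omega$, which reproduces the ≈5280 g·cm figure above.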
Here's my Python script for reproducing the above calculations:
from math import *
#http://pint.readthedocs.org/en/0.6/tutorial.html
from pint import UnitRegistry
ureg = UnitRegistry()
def velocity_to_rpm(v, r):
    kph = v.to(kilometer/hour)
    r = r.to(kilometer)
    d = r*2
    rpm = (kph / (2*pi*r)) * ((1*hour)/(60.*minute)) * rev
    return rpm

def velocity_to_radps(v, r):
    return velocity_to_rpm(v, r).to(radian/second)
# Units
km = kilometer = ureg.kilometer
meter = ureg.meter
newton = ureg.newton
cm = centimeter = ureg.centimeter
hr = hour = ureg.hour
mm = millimeter = ureg.millimeter
rev = revolution = ureg.revolution
minute = ureg.minute
sec = second = ureg.second
kg = kilogram = ureg.kilogram
gm = gram = ureg.gram
deg = degree = ureg.degree
rad = radian = ureg.radian
oz = ureg.oz
inch = ureg.inch
# Conversions.
km_per_mm = (1*km)/(1000000.*mm)
hour_per_minute = (1*hour)/(60.*minute)
minute_per_second = (1*minute)/(60*sec)
minute_per_hour = 1/hour_per_minute
gm_per_kg = (1000*gm)/(1*kg)
cm_per_km = (100000*cm)/(1*km)
# Constraints
target_km_per_hour = (5*km)/(1*hour) # average walking speed
estimated_platform_weight = 5*kg
maximum_incline_degrees = 5*deg
maximum_incline_radians = maximum_incline_degrees * ((pi*rad)/(180*deg))
maximum_pushing_force = estimated_platform_weight/4.
maximum_velocity_at_worst_case = (5*km)/(1*hour)
rolling_friction = 0.015 # rubber on pavement
velocity_under_max_load = target_km_per_hour
number_of_powered_motors = 2
# Variables
wheel_diameter_mm = 120*mm
wheel_radius_mm = wheel_diameter_mm/2
wheel_radius_km = wheel_radius_mm * km_per_mm
rev_per_minute_at_6v_unloaded = 33*rev/(1*minute)
rev_per_minute_at_6v_loaded = rev_per_minute_at_6v_unloaded/2.
mm_per_rev = (wheel_diameter_mm * pi)/(1*rev)
target_rpm = velocity_to_rpm(target_km_per_hour, wheel_radius_mm)
target_radps = velocity_to_radps(target_km_per_hour, wheel_radius_mm)
# Calculate total applied force at worst case.
total_applied_force_worst_case = estimated_platform_weight * (rolling_friction*cos(maximum_incline_radians) + sin(maximum_incline_radians)) + maximum_pushing_force
print 'Ftotal:',total_applied_force_worst_case
# Calculate power requirement.
vel_in_radps = velocity_to_radps(velocity_under_max_load, wheel_radius_mm)
print 'Vradps:',vel_in_radps
required_power = total_applied_force_worst_case * velocity_to_radps(velocity_under_max_load, wheel_radius_mm) * wheel_radius_mm.to(meter)
required_power_per_motor = required_power/number_of_powered_motors
print 'Pmotor:',required_power_per_motor
# Calculate torque and speed requirement.
required_angular_velocity = velocity_under_max_load/wheel_radius_km * hour_per_minute * minute_per_second * rad #rad/sec
required_rpm = required_angular_velocity / 0.104719755 * (rev/rad) * (sec/minute)
required_torque_per_motor = (required_power_per_motor/required_angular_velocity).to(gm*cm)
print 'Tmotor: %s, %s' % (required_torque_per_motor, required_torque_per_motor.to(oz*inch))
print 'PRMmin:',required_rpm
|
Currently I'm working on an RGB-D SLAM with a Kinect v1 camera. In the front end, the SLAM estimates the pose with RANSAC as an initial guess for ICP. With the pose estimate I transform the point cloud into a point-cloud scene, which represents my map.
To smooth the map I'm trying to implement a graph-optimization algorithm (g2o).
Until now there has been no graph representation in my front end, so I started to integrate one.
I'm trying to build a .g2o file with the following format:
VERTEX_SE3 i x y z qx qy qz qw
where x, y, z is the translation and qx, qy, qz, qw is the rotation with respect to the initial coordinate system, and
EDGE_SE3 observed_vertex_id observing_vertex_id x y z qx, qy, qz, qw inf_11 inf_12 .. inf_16 inf_22 .. inf_66
The translation and rotation for an edge are the pose estimate that I compute with RANSAC and ICP (visual odometry).
Now I'm getting stuck on the information matrix.
I read chapter 3.4, "The Information Filter", in Thrun's Probabilistic Robotics and several threads in this forum, such as:
The relationship between point cloud maps and graph maps
and
information filter instead of kalman filter approach
From the second link, i got this here.
The covariance update
$$P_{+} = (I-KH)P$$
can be expanded by the definition of K to be
$$ P_{+} = P - KHP$$
$$ P_{+} = P - PH^T (HPH^T+R)^{-1} HP$$
Now apply the matrix inversion lemma, and we have:
$$P_{+} = P - PH^T (HPH^T+R)^{-1} HP$$
$$ P_{+} = (P^{-1} + H^TR^{-1}H)^{-1}$$
Which implies:
$$ P_{+}^{-1} = P^{-1} + H^TR^{-1}H$$
The term $P^{-1}$ is called the prior information, $H^T R^{-1} H$ is the sensor information (inverse of sensor variance), and this gives us $P^{-1}_{+}$, which is the posterior information.
Could you please clarify this for me:
what data do I need to compute the information matrix?
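What has worked for me as a starting point (a heuristic, not the textbook derivation) is to estimate an isotropic covariance for each edge from the ICP inlier residuals and the number of correspondences, and to invert that for the 6x6 information matrix written into the EDGE_SE3 line:
import numpy as np
def edge_information(residuals, n_correspondences):
    # residuals: point-to-point distances of the ICP inlier correspondences
    # the variance of the mean scales with 1/N, so more correspondences -> more confidence
    sigma2 = np.var(residuals) / max(n_correspondences, 1)
    return np.eye(6) / max(sigma2, 1e-9)   # same weight for translation and rotation (crude)
def upper_triangular(info):
    # g2o stores the 21 upper-triangular entries row by row (inf_11 inf_12 ... inf_66)
    return [info[i, j] for i in range(6) for j in range(i, 6)]
The underlying answer is that the information matrix is the inverse of the covariance of the edge's pose measurement, so anything that approximates the uncertainty of your RANSAC+ICP estimate (e.g. from the residuals, as above) can be used; identity-times-N is just the crudest version of that.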
|
I'm confused about how to compute the error in orientation. All the documents I've read don't explain how to do it.
The error in position is simply the difference between the points.
Let's assume we have the orientation along the effector axis, and we represent the rotation with quaternions. I have two questions:
Is describing the orientation with quaternions a good approach?
How can we compute the orientation error from the quaternions, in order to use it with the Jacobian transpose?
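One common convention (a sketch of one way to do it, not the only one) is to form the error quaternion between the desired and current orientation and use twice its vector part as a 3-vector orientation error, which behaves like axis-times-angle for small errors and can be stacked under the position error for the Jacobian transpose:
import numpy as np
def quat_conj(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])
def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])
def orientation_error(q_desired, q_current):
    # 3-vector orientation error, suitable for e = [e_pos; e_ori]
    q_err = quat_mul(q_desired, quat_conj(q_current))
    if q_err[0] < 0:               # take the short way around
        q_err = -q_err
    return 2.0 * q_err[1:4]        # ~ axis * angle for small errors
Quaternions are a perfectly good choice for the first question; the main thing to watch is the sign ambiguity (q and -q are the same rotation), which the sign flip above handles.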
|
I'm currently working on my first robotics project using the Initio kit from 4tronix powered by Raspberry Pi. The setup was fairly simple, and I've been testing it out over the last couple of days. All of the sensors work as expected; however, my motor tests are failing. When I input commands to actually move the robot, I can hear the DC motors running but they're not getting enough power to do anything. In the instructions, it says if this issue is encountered, that the power selection jumper might not be set correctly and provides this diagram:
For comparison, here's how I have the wiring for the motors setup:
I'm not entirely sure what it means to have the power selection jumper being set incorrectly and would greatly appreciate it if someone could explain this to me or point out if they see anything wrong with my setup.
|
I've recently been learning about SLAM and have been attempting to implement EKF-SLAM in python. I've been using this great article as a guide. Some progress has been made, but I'm still confused by certain stages.
Firstly, does the inverse sensor model have to compute range and bearing, as opposed to cartesian coordinates? Why is this approach used?
Secondly, what format should my robot provide its heading in? Currently I just use a running offset from the origin angle (0), without wrapping it between 0 and 360. Turning right yields positive degrees, and left negative. I ask this as I assume the sensor model expects a certain format.
Thirdly, when computing the jacobians for adding new landmarks, (page 35) is Jz simply the absolute rotation of the robot (-540 degrees for example) plus the bearing the landmark was detected at?
And finally, what's the best approach for managing the huge covariance matrix? I'm currently thinking of a good way to 'expand' P when adding new landmarks.
Here's my current implementation: http://pastebin.com/r7wUMgY7
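Regarding the covariance question, the usual trick is to keep one big P and grow it by two rows/columns per new landmark: the new diagonal block comes from the robot-pose uncertainty plus the measurement noise, and the off-diagonal blocks are cross-covariances with the existing state. A minimal sketch of that 'expand' step (Gr and Gz play the role of the guide's Jxr and Jz, but treat the exact names as mine):
import numpy as np
def add_landmark(P, Gr, Gz, R):
    # P  : current (n x n) covariance
    # Gr : 2x3 jacobian of the landmark-init function w.r.t. the robot pose
    # Gz : 2x2 jacobian of the landmark-init function w.r.t. the measurement
    # R  : 2x2 measurement noise covariance
    n = P.shape[0]
    Pll = Gr @ P[:3, :3] @ Gr.T + Gz @ R @ Gz.T   # new landmark auto-covariance
    Pxl = P[:, :3] @ Gr.T                         # cross-covariance with the whole state
    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P
    P_new[:n, n:] = Pxl
    P_new[n:, :n] = Pxl.T
    P_new[n:, n:] = Pll
    return P_new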
Any help would be much appreciated! Thanks.
|
I would like to build a quad that uses bigger propellers, like 15". My question is: what kind of motor should I use, low or high KV? Do all motors support propellers of this size, or will they burn out because of it? I also see motors listed as CW and CCW; does that mean you can't set which way they spin? I'm totally new to this, so thank you for your answer.
OK, so given this one, it should be able to turn a 15" prop, since that's in the description.
Should I get a 12 A ESC, since with a 15" prop they drew a maximum of 8.8 A, or should I get a 25 A ESC because the maximum continuous current is 20 A?
|
I've recently been learning about SLAM and EKF-SLAM.
I've begun my implementation in Python, but have had trouble managing the updating of P, especially when it comes to adding new landmarks. Currently there is no single 'P', just a few separate matrices that I have to stitch together when needed.
My implementation can be seen here: http://pastebin.com/r7wUMgY7
How best should I manage the large covariance matrix, should I be using one matrix, like the algorithm suggests? Thanks in advance.
|
I've been working through this informative guide on EKF-SLAM but I'm having difficulty understanding the jacobians required for the 'landmark update', on page 35.
What exactly are Jxr and Jz taking as input? Is it the current rotation of the robot plus the odometry update, i.e. the rotation that is now stored in the 'X' state vector? Or are they taking the angle from the inverse sensor model, and if so, what is the 'delta' angle from?
Thanks.
|
Joint   θi       di    ai-1   αi-1
1       θ1-90    -d1   0      180
2       θ2       0     0      -90
3       θ3       0     a2     0
4       θ4-90    0     a3     0
5       θ5       0     0      90
I am confused about the right way to solve for θ1-θ5: should they come from the offset limits of the angles, from the rotation from x0 to x5, or from atan2(x, y)?
|
What is the reduced form of this block diagram? I can't see any way to solve it. :(
|
I drew a robotic arm in SolidWorks, but I'm not so sure how to work out its DOF or its forward and inverse kinematics.
Could anyone help me understand how to work out the kinematic solution of this robot arm?
|
I'm working on the control of a quadcopter and I'd like to understand how come controlling the yaw does not increase the overall thrust. My understanding is that the control is carried out through 2 PIDs per axis (roll, pitch and yaw). The output of the last PID is sent as a PWM signal to correct the rotor speeds of the propellers. The mixing looks something like that:
$T_{FrontLeft} = thrust + roll_{pid} + pitch_{pid} + yaw_{pid}$
$T_{FrontRight} = thrust - roll_{pid} + pitch_{pid} - yaw_{pid}$
$T_{RearLeft} = thrust + roll_{pid} - pitch_{pid} - yaw_{pid}$
$T_{RearRight} = thrust - roll_{pid} - pitch_{pid} + yaw_{pid}$
All quadcopter controllers seem to work that way, from what I could gather. So the basic idea to control yaw is to add $yaw_{pid}$ to the clockwise motors and subtract the same amount $yaw_{pid}$ from the counterclockwise motors to make the quadcopter turn clockwise. This translates into an increase in speed for the clockwise motors and a decrease in speed for the counterclockwise motors by the same amount.
But we know that each motor produces thrust and torque according to those equations:
$T = C_T\rho n^2 D^4$
$Q = C_Q\rho n^2 D^5$
where $T$ is thrust, $Q$ is torque, $C_T$ and $C_Q$ are system dependent constants, $ρ$ is the air density, $n$ is rotor speed, and $D$ is rotor diameter. Which means that the thrust produced by each motor is proportional to the propeller speed squared.
So if $n$ is the speed of all propellers before correction, the thrust of the clockwise propellers after correction will be proportional to $(n+\Delta)^2$ and the thrust produced by the counterclockwise propellers to $(n-\Delta)^2$. The total thrust for these 2 propellers will be proportional to:
$(n+\Delta)^2 + (n-\Delta)^2 = 2n^2 + 2\Delta^2$
As you can see, there is an increase of $2\Delta^2$ in the overall thrust produced by those 2 propellers (and $4\Delta^2$ when we take the 4 propellers into account). Of course, in real life, when we control the yaw the quadcopter does not go up.
So what am I missing?
(the same stands for roll and pitch control but since the quadcopter turns around the roll or pitch axis, the total thrust is no longer entirely on the vertical axis and I could imagine that the projection on the vertical axis is not increasing, but that does not work with yaw)
|
Are events like RoboCup advantageous to the advancement of robotics?
Or are they merely entertainment that advances robotics by allowing entry-level participation, which helps maintain interest?
Do the DARPA Grand Challenges provide a better vehicle for advancement? (pun intended)
|
I'm very new to the Create 2. I want to send commands using Bluetooth, and I have already bought the Bluetooth USB radio. What other devices do I need, and how can I set up sending commands over Bluetooth? Any help is appreciated.
Thanks.
|
If I have a robot path in 2D space,
i.e. a vector of (x,y) locations, and I need to generate artificial IMU data (simulate them), how would I go about it?
How do I model equations to generate the values given a time frame and positions?
I've come across IMUSim; I'd like to know how to model the sensors and generate the data using MATLAB or something similar.
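A minimal way to do it (a sketch that ignores noise, bias and the gravity-vector subtleties) is to differentiate the path twice for the world-frame acceleration, take the heading from the velocity direction, differentiate it once for the gyro's yaw rate, and rotate the acceleration into the body frame:
import numpy as np
def simulate_imu_2d(xy, dt):
    # xy: (N, 2) positions sampled every dt seconds
    # returns body-frame accelerations (N, 2) and yaw rate (N,), noise-free
    vel = np.gradient(xy, dt, axis=0)              # world-frame velocity
    acc_world = np.gradient(vel, dt, axis=0)       # world-frame acceleration
    heading = np.unwrap(np.arctan2(vel[:, 1], vel[:, 0]))
    gyro_z = np.gradient(heading, dt)              # yaw rate, rad/s
    c, s = np.cos(heading), np.sin(heading)
    # rotate the world acceleration into the body frame (x forward, y left)
    acc_body = np.stack([ c * acc_world[:, 0] + s * acc_world[:, 1],
                         -s * acc_world[:, 0] + c * acc_world[:, 1]], axis=1)
    # a realistic simulator would add gravity on z, bias and white noise here
    return acc_body, gyro_z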
|
Could someone tell me if there are wearable devices, such as glasses, with sensors that can detect eye movement?
In particular, I would need a device like Google Glass with a sensor or camera facing the eye that can capture its movement, possibly interfaced with a mobile device.
Alternatively, are there micro-cameras on the market, which can be connected via Bluetooth or USB to a mobile device?
|
I would like to find the joint positions using the joint angles, link lengths, etc.
How can I define the position of each joint using DH parameters?
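A sketch of the standard (classic) DH recipe: build one homogeneous transform per joint from its row of the DH table, chain them, and read each joint's position from the translation part of the accumulated transform. The parameter order below follows the classic convention, so it needs adjusting if your table uses modified DH.
import numpy as np
def dh_transform(theta, d, a, alpha):
    # classic DH link transform A_i(theta_i, d_i, a_i, alpha_i)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])
def joint_positions(dh_rows):
    # dh_rows: one (theta, d, a, alpha) tuple per joint, angles in radians
    # returns the (x, y, z) origin of every joint frame, base frame included
    T = np.eye(4)
    points = [T[:3, 3].copy()]
    for row in dh_rows:
        T = T @ dh_transform(*row)
        points.append(T[:3, 3].copy())
    return np.array(points)
# example: a planar 2-link arm with made-up link lengths of 0.3 m and 0.2 m
print(joint_positions([(np.deg2rad(30), 0, 0.3, 0), (np.deg2rad(45), 0, 0.2, 0)]))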
|
I am trying to set up a stereo vision system on a car. However, I have run into several problems and do not know how to solve them.
How do I select the baseline? I want the measurable distance to reach out to 30-50 m and down to around 5-10 m. Is it possible to choose a baseline that meets this requirement?
I have tried stereo calibration of two cameras and also learned how to compute depth value from disparity map. However I don't know how to compute depth value if the focal lengths of the two cameras are different. It seems all the theorems I can find on the Web only concern cameras of the same focal length.
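On the first point, the usual way to pick a baseline is from the depth and depth-resolution relations of a rectified pair,
$$
Z = \frac{f\,B}{d}, \qquad \Delta Z \approx \frac{Z^{2}}{f\,B}\,\Delta d,
$$
where $f$ is the focal length in pixels, $B$ the baseline, $d$ the disparity and $\Delta d$ the disparity error (roughly 0.5-1 px in practice): a larger baseline improves accuracy at 30-50 m but raises the minimum measurable distance and reduces overlap at 5-10 m. On the second point, stereo rectification resamples both images onto a common virtual camera with a single focal length, so $Z = fB/d$ is applied after rectification rather than to the two raw cameras.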
|
So the idea is that there would be one robot acting as overwatch, which would detect all of the obstacles in an area (which are not necessarily static), and then send the data about the obstacles' positions to another robot that would navigate around the obstacles to a goal.
My initial thought was to have the overwatch robot be in an elevated position in the centre of the area, then sweep around using an ultrasonic sensor. This way, it could keep track of the obstacles in a set of polar coordinates (distance, angle). But then I realised that this method doesn't account for collinear obstacles.
So the question is, what is the best way to detect a bunch of non-static obstacles within an area?
As a side note, I have seen a system similar to this, where there was a robot detecting obstacles (in that case, a crowd of people) and another robot pathfinding around the obstacles (the people), but I'm unsure exactly how that system was detecting the obstacles.
|
I have a generic problem to create a controller for the following system:
$$\ddot{x}(t) = a y(t)$$
where $a$ is a constant real value.
The system could be seen as an equivalent of a mass-spring-damper system, where damper and spring are removed. Also $x(t)$ is the $x$ dimension and $y$ is simply the force moving the mass. BUT in this case I need to drive the force using $x(t)$ and not the contrary.
Transforming according to Laplace, I get:
$$ y(t) = \frac{1}{a}\ddot{x}(t)$$
$$ Y(s) = \frac{1}{a}s^{2}X(s)$$
$$ G(s) = \frac{Y(s)}{X(s)} = \frac{s^{2}}{a}$$
Considering that $a = 1$ I implemented a possible example in Simulink.
Please note that I included the scope output to show the resulting response of the system.
So I have two questions:
Is it possible to realize such a system? As far as I know, the degree of the numerator should be $\le$ the degree of the denominator, so is the above system possible?
Is it possible to create a PID or PD controller to stabilize the output of the system?
Regards
|
I want to make a component that will be a square plate that will behave like it has a motorized hinge on all four sides. That is, it can "open" by pivoting around any one of its four sides. I want it to pivot by up to 45 degrees.
I thought about designing it so that 3 hinges could be detached while one pivots, but I wonder if there's a simpler way to do this.
|
I'm working on a robot that should be able to navigate through a maze, avoid obstacles and identify some of the objects in it. I have a monochromatic bitmap of the maze that is supposed to be used for the robot's navigation.
I am just a first-year electrical engineering student, so I need help with how I can use the BMP image. I will be building my robot with the Arduino Mega microcontroller.
How should I get started on it?
If you need me to elaborate on anything kindly say so.
Link: http://ceme.nust.edu.pk/nerc/files/theme_ind_2015.pdf
|
I am using a tri-axis accelerometer and tri-axis gyroscope to measure the linear acceleration of a body. I need to get the orientation of the body in Euler form in order to rotate the accelerometer readings from the body frame into the earth frame. Please help; I'm stuck.
|
How can I make a robot move to predefined locations using an Arduino, other than by timing, and without the use of sensors? I want to make my car move to different locations on a board and want to know the possible options without using sensors or encoders.
Also, how does a Cartesian robot move to predefined locations? Does it require sensors too?
|
SLAM noob here but trying to implement an algorithm that fuses odometry data and mapping based on wifi signal strengths for a 2D robot.
1)
After various readings of different resources,
I came across this - http://www.qucosa.de/fileadmin/data/qucosa/documents/8644/Dissertation_Niko_Suenderhauf.pdf
that explained what sensors are used in mapping and how they are categorized.
There are range-bearing sensors (stereo cameras,RGB-d cameras) that provide both distance and angle (range and bearing), from which is easy to locate (x,y) coordinates of landmarks ---> I can develop a map.
But if I'm using WiFi signal strengths (received signal strength), the measurement is range-only (meaning that from a robot pose (x, y, theta) I can only establish how far away the signal source is), so how am I developing a map at all?
My question is similar to this - What algorithm can I use for constructing a map of an explored area using a number of ultrasound sensors? but not quite same.
Even if I were using IMU/GPS, how am I using GPS to develop a map? What is my state space there? If I am getting GPS signals / wifi signals/ radio signals, am I estimating the transmitter/AP's location as the map? or the walls of a room I'm navigating in, as a map?
A lot of SLAM literature talks about motion model and measurement model, the former gives me the pose of the robot quite easily because of the odometry and imu.
The latter though is more for development of a map. Am I right in understanding this? If yes, say
a] I have walls in a room and I'm using Lidar scanner -
this still gives me the location of the wall using the number of beams that give me bearing, and the average distance from all the beams.
b] Or if I have just a single laser scanner, I can still use a camera (distance) and the heading of the robot to calculate the location of wall (the map). https://shaneormonde.wordpress.com/2014/01/25/webcam-laser-rangefinder/#more-403
But if I have wireless signal strengths, what I have is a distance to where they are coming from (the distance to the transmitter providing the RSS, not the distance to a wall). How am I estimating the location of walls here?
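To make the range-only case concrete: with RSS the 'map' is usually the set of transmitter/AP positions rather than walls, and the measurement model is a log-distance path-loss curve,
$$
RSS(d) = RSS_0 - 10\,n\,\log_{10}\!\frac{d}{d_0} + \varepsilon,
$$
so each reading constrains the robot to lie roughly on a circle around the AP; the AP positions become extra state variables estimated alongside the robot pose, and walls only enter indirectly (e.g. through the path-loss exponent $n$).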
2) What does the term "correspondences" mean in SLAM literature?
|
I designed a mini quadcopter which is about 4.5x4.5cm(Main Body). The main body is the PCB.
It weighs about ~20 grams with the battery. I'm using the MPU6050 with the DMP using the i2cDevLib. I am using raw radians for pitch, roll, and yaw these measures are read from the MPU6050's DMP. The motors are attached to the body using electrical tape(Black thing around motors). The motors are 8.5mm in diameter and are driven by a n-channel mosfet. The mode of control right now is bluetooth(HC-05 module). The code being used is my own.
I have a control loop on each axis; pitch and roll use the same gains since the quadcopter is symmetrical. The problem I have is that PID tuning is next to impossible: the best I got was a ~2 second flight (see the slow-motion video).
At first I was using my own code for the control loop, but it wasn't as effective as the Arduino PID library.
The output of the PID loops are mapped to -90 to 90 on all axes. This can be seen in the code
myPID.SetOutputLimits(-90, 90); //Y angle
myPID1.SetOutputLimits(-90, 90); // X angle
myPID2.SetOutputLimits(-90, 90); // Yaw angle
myPID.SetMode(AUTOMATIC);
myPID1.SetMode(AUTOMATIC);
myPID2.SetMode(AUTOMATIC);
My full code is below, but what do you think the problem is?
Code
http://pastebin.com/cnG6VXr8
|
I need to make a shallow (max 2 m) underwater wireless sensor network. The data payload is about 10 kB/s. I know the VLF band (~3-30 kHz) could be the best solution for this, but because of time-to-market constraints I cannot build the hardware and software from the ground up.
Maybe someone could share their own experience in this field. Would the 100-900 MHz band be enough to send 10 kB/s from one device to another, from 2 m underwater to a dozen or so centimetres above the water surface? Do any ICs for ultrasonic communication exist? Any other ideas?
|
I have a mobile robot and plan to use the dynamic window approach for collision avoidance. I have read the paper, but there is one inequality I can't derive.
Could you explain it? Thanks!
|
Since the day I bought it I have always used the Ethernet-over-USB connection. Now I need to use an RJ45 LAN cable to connect the BeagleBone to my laptop, but my laptop can't even detect a LAN connection from it. What could be wrong? Do I need a straight or a crossover cable? Do I need to configure something on my BeagleBone first?
UPDATE: I managed to connect it through a crossover cable and assign it an IP
address by running a DHCP server on my laptop.
As seen above, my laptop assigned the IP 169.254.223.76, but when I try to
connect to that IP using PuTTY it gives me 'connection refused'.
Please help.
|
I've been implementing an extended kalman filter, following Thrun's Probabilistic Robotics implementation. I believe my correct step may be wrong, as the state appears to be corrected far too much.
Here's a screen capture showing the issue https://youtu.be/gkSpFK27yvg
Note, the bottom status reading is the 'corrected' pose coordinates.
This is my correct step:
def correct(self, reobservedLandmarks):
    for landmark in reobservedLandmarks:
        storedLandmark = self.getLandmark(landmark.id)
        z = Point(landmark.dist, math.radians(landmark.angle))
        h, q = self.sensorModel(storedLandmark)
        inv = np.array([[z.x-h.x], [wrap_radians(z.y-h.y)]])
        JH = np.zeros([2, 3 + (self.landmarkCount*2)])
        JH[1][2] = -1.0/q
        JH[0][0] = -((self.X[0] - storedLandmark.x) / math.sqrt(q))
        JH[0][1] = -((self.X[1] - storedLandmark.y) / math.sqrt(q))
        JH[1][0] = (storedLandmark.y - self.X[1]) / q
        JH[1][1] = -((storedLandmark.x - self.X[0]) / q)
        JH[0][3+(landmark.id*2)] = -JH[0][0]
        JH[0][4+(landmark.id*2)] = -JH[0][1]
        JH[1][3+(landmark.id*2)] = -JH[1][0]
        JH[1][4+(landmark.id*2)] = -JH[1][1]
        R = np.array([[landmark.dist*self.sensorDistError, 0], [0, self.sensorAngleError]])
        Z = matmult(JH, self.P, JH.T) + R
        K = matmult(self.P, JH.T, np.linalg.inv(Z))
        self.X = self.X + matmult(K, inv)
        self.P = matmult((np.identity(self.X.shape[0]) - matmult(K, JH)), self.P)
Here h is the predicted range and bearing of the stored landmark, and q = (landmark.x - self.X[0])^2 + (landmark.y - self.X[1])^2.
My sensor covariance errors are 1 cm per metre for range and pi/180 for the bearing. My assumption was that the correction should be relative to the size of the robot's pose error, which is very small in this example, as it only moved forward by less than 30 cm.
Is the kalman gain applied correctly here, and if yes, what other factors would result in this 'over-correcting'?
Thanks.
|
I have a simulated robot moving in a discretized 2D grid world that (for various simplification and time-restriction reasons) has no noise. The problem is how the robot creates its initial map of the world. Algorithms like SLAM and occupancy grid mapping are based on uncertainty, but in this case there is no uncertainty.
So I'm wondering if there is a relatively simple algorithm for mapping the environment with noiseless position.
|
I recently bought a USB 2.0 Bluetooth Adapter. It claims to have support from Linux kernels of versions 3.4 and higher. I have a BeagleBone Black with Debian GNU/Linux 7 image and kernel 3.8. I am developing on BeagleBone Black by hosting it through USB with ssh.
I have tried both hot plugging and plugging in before boot and failed.
Then, I tried this tutorial. However, I cannot find the connman directory on my BeagleBone Black device. I looked up and assumed I needed to install the connman package, but my BeagleBone Black has no internet access.
I have also tried lsusb -v, as suggested by an answer of a similar question to this, with no luck. The weird thing is, while lsusb itself prints
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
lsusb -v only prints
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
then hangs. Information regarding bus 002, which I believe the device is connected to, is not printed. I have to restart the ssh connection to get back to work.
How should I approach to get the dongle to work on my BeagleBone Black? If the connman package is sufficient, how do I install it on my BeagleBone Black without internet access. Why does lsusb -v hang?
Any help is appreciated!
|
I wanted to calculate the amount of thrust generated by the motor. I am using the Blade 180 CFX model.
http://www.bladehelis.com/Products/Default.aspx?ProdID=BLH3450#quickSpecs
After some research I found a way to calculate the thrust using
$$T = \frac{\pi D^2 \rho P^2}{2},$$
where $P$ is the power multiplier, which can be calculated using $P = \text{prop constant} \cdot (\text{rpm}/100)^{\text{power factor}}$.
I am unable to find the values for the Prop constant and the power factor. Is there a way I can get this information? Or an alternative way to calculate the thrust generated?
|
First of all, I am in high school (to tell you that I am a newbie and lack knowledge).
What I want to achieve for now is something that can differentiate between poly bags (polyethylene) and other materials, or something that can detect polyethylene.
I have to build this into a robot, and therefore only a few methods are accessible to us.
Any knowledge, suggestions or external links about this topic would be welcome.
|
I was looking for an implementation of a PID controller in Java and I found this one:
https://code.google.com/p/frcteam443/source/browse/trunk/2010_Post_Season/Geisebot/src/freelancelibj/PIDController.java?r=17
From what I could understand about it, I am using it this way:
package lol.feedback;

public class dsfdsf {
    public static void main(String[] args) throws InterruptedException {
        final PIDController pidController = new PIDController(1, 1, 1);
        pidController.setInputRange(0, 200); // The input limits
        pidController.setOutputRange(50, 100); // The output limits
        pidController.setSetpoint(120); // My target value (PID should minimize the error between the input and this value)
        pidController.enable();
        double input = 0;
        double output = 0;
        while (true) {
            input = output + 30;
            pidController.getInput(input);
            output = pidController.performPID();
            System.out.println("Input: " + input + " | Output: " + output + " | Error: " + pidController.getError());
            Thread.sleep(1000);
        }
    }
}
But it never stabilizes; it doesn't behave like a PID at all. This is the output I get:
Input: 30.0 | Output: 100.0 | Error: 90.0
Input: 130.0 | Output: 50.0 | Error: -10.0
Input: 80.0 | Output: 100.0 | Error: 40.0
Input: 130.0 | Output: 50.0 | Error: -10.0
Input: 80.0 | Output: 100.0 | Error: 40.0
Input: 130.0 | Output: 50.0 | Error: -10.0
... (the same two lines keep alternating indefinitely)
Can someone tell me what I am missing?
Thank you!
|
Quadcopter frames seem to consistently follow the same X design. For example:
I'm curious to know why that is. It certainly seems like the most efficient way to use space but is it the only frame design that would work for quadcopters?
For instance, would a design like this work?
Why or why not?
|
I want to submit my gains for the PID regulator via MAVLink.
Unfortunately, I am not very used to MAVLink, and there are several functions that might be used for that purpose (I think). My string is currently JSON-formatted, and I was previously sending it directly to the serial port.
Is there a straightforward way to submit the data as it is (see below) with MAVLink, or is it better not to transfer a JSON string with MAVLink and instead submit each value individually? If so, which function should I use?
So far I have noticed that MAVLink messages are already defined for most of the sensors; for the PID gains I have not found much.
AP_HAL::UARTDriver *pOut = uartX == UART_C ? hal.uartC : hal.uartA;
pOut->printf( "{\"t\":\"pid_cnf\","
"\"p_rkp\":%.2f,\"p_rki\":%.2f,\"p_rkd\":%.4f,\"p_rimax\":%.2f,"
"\"r_rkp\":%.2f,\"r_rki\":%.2f,\"r_rkd\":%.4f,\"r_rimax\":%.2f,"
"\"y_rkp\":%.2f,\"y_rki\":%.2f,\"y_rkd\":%.4f,\"y_rimax\":%.2f,"
"\"p_skp\":%.2f,\"r_skp\":%.2f,\"y_skp\":%.4f}\n",
static_cast<double>(pit_rkp), static_cast<double>(pit_rki), static_cast<double>(pit_rkd), static_cast<double>(pit_rimax),
static_cast<double>(rol_rkp), static_cast<double>(rol_rki), static_cast<double>(rol_rkd), static_cast<double>(rol_rimax),
static_cast<double>(yaw_rkp), static_cast<double>(yaw_rki), static_cast<double>(yaw_rkd), static_cast<double>(yaw_rimax),
static_cast<double>(pit_skp), static_cast<double>(rol_skp), static_cast<double>(yaw_skp) );
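
For comparison, here is a ground-side sketch of the usual MAVLink route: ArduPilot already exposes its rate PID gains as parameters, so a GCS-side script can read and write them with PARAM_REQUEST_READ / PARAM_SET instead of a custom JSON string. The connection string and the parameter name RATE_PIT_P below are assumptions (parameter names vary between firmware versions), so treat this pymavlink snippet as a sketch rather than drop-in code.

from pymavlink import mavutil

# Assumed serial link to the autopilot; adjust device and baud rate.
master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master.wait_heartbeat()

# Read one gain back (PARAM_REQUEST_READ, index -1 means "look up by name").
master.mav.param_request_read_send(master.target_system, master.target_component,
                                   b'RATE_PIT_P', -1)
reply = master.recv_match(type='PARAM_VALUE', blocking=True, timeout=5)
print(reply.param_id, reply.param_value)

# Write a new value for the same gain (PARAM_SET).
master.mav.param_set_send(master.target_system, master.target_component,
                          b'RATE_PIT_P', 0.15,
                          mavutil.mavlink.MAV_PARAM_TYPE_REAL32)

If the values really have to originate on the flight-controller side, as in the code above, the closest MAVLink fit is to emit one PARAM_VALUE (or NAMED_VALUE_FLOAT) message per gain rather than a single JSON blob, since MAVLink payloads are small fixed-size structs.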
|
The dominant approach for solving ODEs in control systems books is ode45, since the majority of these books use Matlab. I'm not acquainted with how ode45 works internally, but lately I started reading about Euler's method in the book Numerical Methods for Engineers. If the step size is very small, the results are satisfactory; for simulation, one can actually set the step size to a very small value. I've used ode45 here for regulation and tracking problems, and I faced some difficulties using ode45 for the tracking problem since its step size is not fixed. Now, for the same experiment, I've used Euler's method with a step size of 0.001 s. The results are great and much easier to work with than ode45. This is a snapshot of the result:
And this is the code
clear all;
clc;
dt = 0.001;
t = 0;
% initial values of the system
a = 0; % angular displacement
da = 0; % angular velocity
% PID tuning
Kp = 50;
Kd = 18.0;
Ki = 0.08;
error = 0;
%System Parameters:
m = 0.5; % mass (Kg)
d = 0.0023e-6; % viscous friction coefficient
L = 1; % arm length (m)
I = 1/3*m*L^2; % inertia seen at the rotation axis. (Kg.m^2)
g = 9.81; % acceleration due to gravity m/s^2
% Generate Desired Trajectory
y = 0:dt:(3*pi)/2;
AngDes = y; % Ang: angle , Des: desired
AngDesPrev = 0;
for i = 1:numel(y)
% get the first derivative of the desired angle using the Euler method.
dAngDes = ( AngDes(i) - AngDesPrev )/ dt;
AngDesPrev = AngDes(i);
% torque input
u = Kp*( AngDes(i) - a ) + Kd*( dAngDes - da ) + Ki*error;
% accumulated error
error = error + ( AngDes(i) - a );
%store the error
E(i) = ( a - AngDes(i) );
T(i) = t;
dda = 1/I*(u - d*da - m*g*L*sin(a));
% get the function and its first dervative
da = da + dda*dt;
a = a + da*dt;
%store data for further investigation
A(i) = a;
dA(i) = da;
t = t + dt;
end
plot(T, AngDes, 'b', T, A, 'g', 'LineWidth', 1.0)
h = legend('$\theta_{d}(t)$', '$\theta(t)$');
set(h, 'Interpreter','LaTex')
My question is: why is ode45 preferred in many control books, when Euler's method with a very small step size seems to work so well?
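
For what it's worth, here is one way the same tracking problem can be posed for a variable-step solver such as ode45: fold the integral-of-error term into the state so the closed loop becomes a single ODE. This is only a sketch, using the plant and gains from the code above, with $\theta_d(t) = t$ (the 1 rad/s ramp) and $\dot{\theta}_d(t) = 1$. The state is $x = [\theta, \ \dot{\theta}, \ e_I]^T$ with

$u = K_p(\theta_d(t) - \theta) + K_d(\dot{\theta}_d(t) - \dot{\theta}) + K_i' e_I$

$\ddot{\theta} = \frac{1}{I}\left(u - d\,\dot{\theta} - m g L \sin\theta\right)$

$\dot{e}_I = \theta_d(t) - \theta$

One caveat: the loop above accumulates the raw error each step (no multiplication by dt), so its Ki = 0.08 corresponds to a continuous-time integral gain of roughly $K_i' = K_i/\Delta t = 80$; that rescaling is needed for the two formulations to match. ode45 then integrates this with its own adaptive internal steps, and it still returns samples at fixed times if tspan is given as a vector (e.g. 0:0.001:T), which is the usual workaround for "the step size is not fixed".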
|
When choosing a battery for a robot, should you use LiPo or LiFePO4?
For LiFePO4, the pros:
can deliver higher sustained current
many are built to be drop-in replacements for lead-acid batteries and can use the same charger
The cons:
enormously expensive (about $1 per watt-hour)
lower energy density than LiPo (around 110 Wh/kg)
For LiPo batteries, the pros:
cheaper (about $0.20 per watt-hour)
more than twice the energy density of LiFePO4 (around 250 Wh/kg)
The cons:
more complicated and less safe to charge (see videos of LiPos catching fire)
most can't safely deliver high current
Is there anything I'm missing? I see LiFePO4 batteries used on a lot of larger platforms, probably due to the higher continuous current rating. I see eBay flooded with tons of cheap high-capacity Chinese LiPos, but almost none of them have documentation, which probably means they're junk.
When should I use LiFePO4 vs LiPo?
|
I'm working on a project implementing a vision system. I'm a student and this is the first time I'm doing something like this; it has been a challenge.
I'm using a controller (Netduino+2, .NET Micro Framework) and a camera (CMUcam5 Pixy), and so far it's working well. I'm communicating with the robot (Fanuc M430iA) using Modbus, and acquiring the data from the camera over I2C.
But the next challenge is using 2 cameras to implement stereo vision, and I'm not sure how to achieve that. I've been reading a lot about it and I understand the process and generally how it works, but I think my case is very specific.
My cameras detect the center of an object and give me its coordinates, so I have that, and that's good.
What do you think is the more reasonable approach?
(sorry for my english, let me know if I'm not being explicit, I'll edit the question if I see there's not enough information)
|
I am a student who is currently working on a computer science project that will require soon computer vision and more specifically stereoscopy (for close depth detection). I am now looking for a great camera to do the job and I found several interesting options:
1- A custom built set of two cheap cameras (i.e. webcam);
2- The old, classic, economic and proven Kinect;
3- Specialized stereo sensors.
I found a couple months ago this sensor: https://duo3d.com/product/duo-mini-lv1
I thought it was interesting because it is small, stereoscopic and brand new (encouraging a fresh USA company). However, setting aside the additional APIs that come with it, I just don't understand why you would buy this when a Kinect (or "cheap" cameras) is at least 4-5 times less expensive and still has great, if not better, specifications.
Same applies for this one: http://www.ptgrey.com/bumblebee2-firewire-stereo-vision-camera-systems
Can someone please explain to me why I would need such a device, and if I don't, why others do?
|
I wonder whether there is any simple option (one that can be computed at the microcontroller level) suitable for 3D object perception (depth, position, pose or coordinate estimation) on flying robots, other than LIDAR, stereo vision, omnidirectional cameras, laser scanners, or other machine-vision-based techniques.
|
My department recently purchased an iRobot Create 2. We want to recreate the code from the C# Create 2 driving Tether program to use as a base for our intro to computer science course. Currently the code we are using to talk to the Create is http://www.robotappstore.com/Knowledge-Base/How-to-program-Roomba-for-NET-developers/23.html. We are not sure whether the Create is receiving the commands, or whether the serial port is even making a connection. We are using Visual Studio 2012 as the programming environment. Any recommendations and/or input would be appreciated.
Thank you
|
I am two weeks into Arduino projects. I had been using timing to control my rover all this while; now I want to shift to using encoders, but I'm facing quite a few problems. I am using an Arduino Uno, a two-amp motor shield, and an 8 V LiPo battery. This is the code I am trying to use:
http://www.myduino.com/index.php?route=product/product&product_id=170&search=rover+5 (link to rover)
http://www.myduino.com/index.php?route=product/product&product_id=131&search=motor+shield (link to motoshield)
My question: there are four pins coming out of the encoder on each side. What I did was connect the red and black wires to 5V and GND respectively, the white and yellow of the first encoder to pin 2, and the white and yellow of the second encoder to pin 3. Is that correct?
Also, sometimes when I use this code, both the green and red lights on the motor shield come on, thereby stalling the motor. Why does that happen?
Can anyone suggest a link to simple encoder code that makes the motors move forward in a straight line using feedback?
Thanks
// interrupt 0 -> pin 2
// interrupt 1 -> pin 3
volatile unsigned long positionL = 0; //vehicle position count from left encoder
volatile unsigned long positionR = 0; //vehicle position count from right encoder
int motorLa = 5;
int dirLa = 4;
int motorRa = 7;
int dirRa = 6;
void setup(){
pinMode (motorLa, OUTPUT);
pinMode (dirLa, OUTPUT);
pinMode (motorRa, OUTPUT);
pinMode (dirRa, OUTPUT);
Serial.begin(9600);
}
void loop(){
moveFWD(5300);
delay(2000);
moveREV(3000);
delay(2000);
while(1);
}
void encoder1(){
positionR++;
}
void encoder2(){
positionL++;
}
void moveFWD(unsigned int x){
positionL=0;
positionR=0;
attachInterrupt(0, encoder1, CHANGE);
attachInterrupt(1, encoder2, CHANGE);
digitalWrite(dirLa, LOW); // Left a Forward
digitalWrite(dirRa, HIGH); //Right a Forward
while((positionL <= x) || (positionR <= x)){
    if (positionL > positionR){
      analogWrite(motorLa, 220);
      analogWrite(motorRa, 255);
    }
    else if (positionR > positionL){
      analogWrite(motorRa, 220);
      analogWrite(motorLa, 255);  // slow the side that is ahead, full speed on the other
    }
    else {
      analogWrite(motorRa, 255);
      analogWrite(motorLa, 255);  // both sides at full speed
    }
Serial.print(positionL); // This prints the current value of positionL in the serial monitor on the computer.
Serial.print("\t"); // This creates a tab on the monitor
Serial.print(positionR);
Serial.println(); // This creates a new line to print on
}
// Stop all motors
analogWrite(motorLa, 0);
analogWrite(motorRa, 0);
// Disables the encoders interrupt
detachInterrupt(0);
detachInterrupt(1);
}
void moveREV(unsigned int x){
positionL=0;
positionR=0;
attachInterrupt(0, encoder1, CHANGE);
attachInterrupt(1, encoder2, CHANGE);
digitalWrite(dirLa, HIGH); // Left motor reverse
digitalWrite(dirRa, LOW);  // Right motor reverse
while((positionL <= x) || (positionR <= x))
{
    if (positionL > positionR){
      analogWrite(motorLa, 20);
      analogWrite(motorRa, 200);
    }
    else if (positionR > positionL){
      analogWrite(motorRa, 20);
      analogWrite(motorLa, 200);  // slow the side that is ahead
    }
    else{
      analogWrite(motorLa, 200);  // both sides at the same speed
      analogWrite(motorRa, 200);
    }
Serial.print(positionL); // This prints the current value of positionL in the serial monitor on the computer.
Serial.print("\t"); // This creates a tab on the monitor
Serial.print(positionR);
Serial.println(); // This creates a new line to print on
}
// Stop all motors
analogWrite(motorLa, 0);
analogWrite(motorRa, 0);
// Disables the encoders interrupt
detachInterrupt(0);
detachInterrupt(1);
}
|
I would like to compare my visual odometry results with the ground truth provided by the KITTI dataset.
For each frame in the ground truth, I have a projection (pose) matrix.
For example:
1.000000e+00 9.043683e-12 2.326809e-11 1.110223e-16 9.043683e-12 1.000000e+00 2.392370e-10 2.220446e-16 2.326810e-11 2.392370e-10 9.999999e-01 -2.220446e-16
Here are the instructions provided by the readme:
Row i represents the i'th pose of the left camera coordinate system (i.e., z pointing forwards) via a 3x4 transformation matrix. The matrices are stored in row aligned order (the first entries correspond to the first row), and take a point in the i'th coordinate system and project it into the first (=0th) coordinate system. Hence, the translational part (3x1 vector of column 4) corresponds to the pose of the left camera coordinate system in the i'th frame with respect to the first (=0th) frame.
But I don't know how to produce the same kind of data for my own results.
What I have for each frame in my case:
The tf transformation from init_camera (the fixed frame at (0,0,0)) to the left camera, which is moving. So I have the translation vector and the rotation quaternion.
The odometry data: the pose and the twist
Camera calibration parameters
With those data, how do I compare against the ground truth? I need to build the same kind of projection (pose) matrix from the data above, but I don't know how to do it.
Can someone help me?
Thanks
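
A minimal sketch of how the tf pose could be turned into a KITTI-style 3x4 row, assuming the tf quaternion is in (x, y, z, w) order and that your camera frame already matches KITTI's convention (z forward); if it does not, a fixed extra rotation has to be applied first.

import numpy as np

def quat_to_R(x, y, z, w):
    """Rotation matrix from a unit quaternion given in (x, y, z, w) order."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)]])

def pose_to_T(t, q):
    """4x4 homogeneous transform from translation (tx, ty, tz) and quaternion (x, y, z, w)."""
    T = np.eye(4)
    T[:3, :3] = quat_to_R(*q)
    T[:3, 3] = t
    return T

def to_kitti_row(T_first, T_i):
    """KITTI stores the pose of frame i relative to frame 0: top 3x4 block, row-major."""
    T_rel = np.linalg.inv(T_first) @ T_i
    return ' '.join('%.6e' % v for v in T_rel[:3, :].reshape(-1))

With both trajectories written in that common format, the KITTI development kit (or a simple script computing translational and rotational error over fixed path lengths) can do the actual comparison.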
|
I'm trying to implement two PIDs for stabilizing quadrotor for position tracking. The inputs are $x_{d}(t), y_{d}(t), z_{d}(t)$ and $\psi_{d}(t)$. For position tracking, usually the small angle assumption is assumed. This assumption allows for acquiring $\theta_{d}$ and $\phi_{d}$. These are the results
The x-axis position is driving me crazy. After a lot of attempts at tuning the PIDs, I felt something was going wrong. Is this normal behavior for a PID controller? Also, what I've noticed is that once $\psi$ reaches zero, the platform starts oscillating (after 1.5 seconds in the figure).
For solving the ODEs and computing the derivatives for the velocities, I use Euler's method.
It is simulation in Matlab.
|
I am trying to determine the depth of view for a hypercatadioptric camera (a camera-lens system and a hyperbolic mirror), based on [1].
The following illustration seems pretty clear. For an image point $p$, we are looking for a virtual point $p_u$, given the parameters of the optical system.
I have troubles finding the right equations in the paper, though. There is the distribution of a virtual point which seems to be connected to $p_u$, but is not defined anywhere.
My goal is to replicate the diagrams they have later in the paper, like e.g. this one:
Which for a mirror (blue) gives the virtual image points of the scene (red).
I would like to calculate the depth of view, so the area at which the image blur is below a threshold.
[1] Zhang, S., Zenou, E.: Optical approach of a hypercatadioptric system depth of field. In: 10th International Conference on Information Sciences, Signal Processing and their Applications.
|
A few days ago, I just shared my concerns about the price of computer vision hardware on this same exact forum (see What main factors/features explain the high price of most industrial computer vision hardware?) and I think a new but related post is needed. So here we go.
Here are some details to consider regarding the overall scanner I want to build:
Restricted space: my overall scanner can't be larger than 3 feet cube.
Small objects: the objects I will be scanning shouldn't be larger than 1 foot cube.
Close range: the camera would be positioned approximately 1 foot from the object.
Indoor: I could have a dedicated light source attached to the camera (which might be fixed in a dark box)
Here are the stereo cameras/sensors I was looking at (ordered by price):
Two Logitech webcams (no model in particular)
Cheap
Harder to setup and calibrate
Need to create your own API
Built for: what you want to achieve
Intel RealSense: http://click.intel.com/intel-realsense-developer-kit.html
$100
High resolution: 1080p (maybe not for depth sensing)
Workable minimum range: 0.2 m
Unspecified FOV
Built for: hands and fingers tracking
Kinect 2.0: https://www.microsoft.com/en-us/kinectforwindows/
$150
Low resolution (for depth sensing): 512 x 424
Unworkable minimum range: 0.5 m
Excellent FOV: 70° horizontal, 60° vertical
Built for: body tracking
Structure Sensor http://structure.io/developers
$380
Normal resolution with high FPS capability: 640 x 480 @ 60 FPS
Unspecified minimum range
Good FOV: 58° horizontal, 45° vertical
Built for: 3D scanning (tablets and mobile devices)
ZED Camera: https://www.stereolabs.com/zed/specs/
$450
Extreme resolution with high FPS capability: 2.2K @ 15 FPS (even for depth sensing) and 720p @ 60 fps
Unviable minimum range: 1.5 m
Outstanding FOV: 110°
Built for: human vision simulation
DUO Mini LX: https://duo3d.com/product/duo-minilx-lv1
$595
Normal resolution with high FPS capability: 640 x 480 @ 60 FPS
Workable minimum range: 0.25 m (see https://stackoverflow.com/questions/27581142/duo-3d-mini-sensor-by-code-laboratories)
Phenomenal FOV: 170° (with low distortion)
Built for: general engineering
Bumblebee2: http://www.ptgrey.com/bumblebee2-firewire-stereo-vision-camera-systems
Far too expensive (not even worth mentioning)
Note: All prices are in date of April 18th 2015 and might change overtime.
As you can see, some have really good pros, but none seems to be perfect for the task. In fact, the ZED seems to have the best specifications overall, but falls short on minimum range (since it is a large-baseline camera designed for long-range applications). The DUO Mini LX seems to be the best for my situation, but unlike the ZED, which generates really accurate depth maps, it seems to lack precision (lower resolution). It might be good for proximity detection, but not for 3D scanning (in my opinion). I could also try to build my own experimental stereo camera with two simple webcams, but I don't know where to start and I don't think I will have enough time to deal with all the problems I would face doing so. I am now stuck in a great dilemma.
Here are my questions:
What good resources on the internet give you a good introduction on 3D scanning concepts (theoretically and programmatically)? I will be using C++ and OpenCV (I already worked with both a lot) and/or the API provided with the chosen camera (if applies).
Should you have a static camera capturing a moving object or a moving camera capturing a static object?
Should I use something in conjunction with stereo camera (like lasers)?
Is it profitable to use more than two cameras/sensors?
Are resolution, FPS and global shutter really important in 3D scanning?
What camera should I get (it can also be something I didn't mention, in the range of $500 maximum if possible)? My main criteria is a camera that would be able to generate an accurate depth map from close range points.
Thanks for your help!
|
I am working on sliding mode control (SMC) of a 4-DOF manipulator, and I don't know how to select the discontinuity gain matrix $K$ or the surface constant (the components of the diagonal gain matrix $\Lambda$).
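
For reference, the usual rule of thumb from standard SMC texts (e.g. Slotine and Li), stated here only as a sketch and not as tuned values for your arm: with tracking error $\tilde{q} = q - q_d$ and sliding surface $s = \dot{\tilde{q}} + \Lambda \tilde{q}$, a diagonal $\Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_4)$ with $\lambda_i > 0$ fixes the on-surface error dynamics to $\dot{\tilde{q}} = -\Lambda \tilde{q}$, so each $\lambda_i$ is simply the desired error-decay bandwidth (rad/s) of joint $i$, normally chosen well below the unmodelled dynamics (sampling rate, actuator lag). The switching gain only has to dominate the model uncertainty: if $\rho_i$ bounds the disturbance and modelling error appearing in the $s_i$ dynamics, then picking $k_i \ge \rho_i + \eta_i$ with a small margin $\eta_i > 0$ enforces the reaching condition $\frac{1}{2}\frac{d}{dt} s_i^2 \le -\eta_i |s_i|$. Larger $K$ buys robustness at the price of chattering, which is why $K$ is usually paired with a boundary-layer (saturation) version of the sign function.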
|
I have an omni-directional robot, such as an X-drive or mecanum drive, whose position I need to track. I can put encoders on the wheels, but that is all I can do in terms of sensors; I have no external beacons to reference. The issue is that I need to keep track of the X-Y position, including strafing, and my heading. Does anyone have any resources that could help me with this?
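
A minimal dead-reckoning sketch for a mecanum base, to give an idea of the math. The wheel numbering, signs and dimensions below are assumptions that have to be matched to the actual chassis, and encoder-only heading will drift whenever the rollers slip.

import math

# Assumed convention: 1 = front-left, 2 = front-right, 3 = rear-left, 4 = rear-right.
R_WHEEL = 0.05   # wheel radius [m] (placeholder)
L = 0.20         # half of the front-to-back wheel separation [m] (placeholder)
W = 0.15         # half of the left-to-right wheel separation [m] (placeholder)

def odom_step(pose, w1, w2, w3, w4, dt):
    """Advance (x, y, theta) by one time step from wheel angular rates [rad/s]."""
    x, y, theta = pose
    # Body-frame twist from wheel rates (standard mecanum forward kinematics).
    vx = R_WHEEL / 4.0 * ( w1 + w2 + w3 + w4)
    vy = R_WHEEL / 4.0 * (-w1 + w2 + w3 - w4)
    wz = R_WHEEL / (4.0 * (L + W)) * (-w1 + w2 - w3 + w4)
    # Rotate the body-frame velocity into the world frame and integrate.
    x += (vx * math.cos(theta) - vy * math.sin(theta)) * dt
    y += (vx * math.sin(theta) + vy * math.cos(theta)) * dt
    theta += wz * dt
    return (x, y, theta)

For an X-drive the same structure applies with a different wheel-to-twist matrix, since the omni wheels sit at 45 degrees and each wheel contributes to both x and y motion.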
|
I can't really find a straightforward tutorial for this. There are a lot for the Arduino, but I only have an original BeagleBone, an ESC, and a brushless motor with me. Please help.
|
My question is more on a basic/conceptual level.
I'm looking for a way to approach an object in the map that I detected earlier. My robot is localized in the map using SLAM, and the object position is a 2D point that I receive from my algorithm (the object is actually a face picture on a wall). Is there a smart way to approach the point and "look" at it?
|
How do you convert the value you get for the angle (packet ID 20) into degrees?
I am using the Create 2 robot and I do not understand the data I am getting back. The documentation says it is in degrees, but what I get back is a huge number like 4864 when I turned the robot just 45 degrees.
|
I saw a high-end robot arm once that I could move and bend into any pose I wanted with no resistance, as if the arm didn't have any weight. I'd like to know more about such arms. What is this class of robot arm design called? Where can I get more information about its design and applications?
|
I want to use a Raspberry Pi to pan a camera mounted on a servo wirelessly from ~100 feet away. What are some good servos and transceivers for this?
To clarify, there should be no physical connection between the RasPi and servo.
Do I need an additional RasPi on the servo end?
|
How is a new team strategy sent to each player of a robot team during a RoboCup competition? Robots in the Standard Platform League (SPL), for example, are fully autonomous and there is no connection with non-team members (except pulling from the GameController).
|
The GMIS (General Machine Intelligence System) from a new article posted at codeproject.com looks interesting. Do you think that it could be a breakthrough in the field of robotics?
|
Currently I am programming a robotic simulation. I have an end effector that approaches a target, and on the way to the target there is an obstacle. I redirect my end effector so that it does not hit the obstacle.
When I do the same for the whole arm, I want to push the arm away from the obstacle as well. I have it working so far that I can redirect the arm, but my calculation of the Jacobian seems to be faulty.
My setup, and what I need for it:
I have a robotic arm with 7 DOF. Let $x_0$ be the closest point on the arm to the obstacle, and $J_0$ the corresponding Jacobian.
I also have the following relation:
$\dot{x}_0 = J_0 \dot{\theta}$
$\theta$ are my joint angles. I can calculate the Jacobian for the end effector, but I do not know how to calculate it for a point on the arm.
Does anybody have an idea how to calculate the corresponding Jacobian?
Cheers
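
For completeness, a sketch of the standard geometric-Jacobian construction for a point on the arm (the same recipe as for the end effector, just truncated at the link the point belongs to): if $x_0$ lies on link $i$, and $z_{j-1}$, $p_{j-1}$ are the axis direction and origin of joint $j$ expressed in the base frame (available from the forward kinematics up to joint $j$), then column $j$ of the positional Jacobian $J_0$ is
$J_0^{(j)} = z_{j-1} \times (x_0 - p_{j-1})$ for a revolute joint with $j \le i$,
$J_0^{(j)} = z_{j-1}$ for a prismatic joint with $j \le i$,
$J_0^{(j)} = 0$ for $j > i$,
since joints beyond link $i$ cannot move the point. In other words, $J_0$ is the end-effector Jacobian with the end-effector position replaced by $x_0$ and the trailing columns zeroed.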
|
I have 3 ultrasonic sensors (HC-SR04). I want to use one of them as a transmitter and another as a receiver: the first one sends ultrasonic pulses and the other receives the pulses from that transmitter.
How can I do that?
I tried sending a trigger to each ultrasonic sensor and connecting them to different pins on the PIC, but it does not work.
It is something like this project, but using the HC-SR04.
|
How do you calculate the PID values and stabilise a quadcopter using the on-board sensors (gyro, accelerometer and magnetometer)?
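
There is no closed-form way to "calculate" the gains; they are tuned (manually, or with Ziegler-Nichols-style heuristics) around a structure like the cascaded controller sketched below, after the attitude estimate has been obtained by fusing the gyro with the accelerometer/magnetometer (complementary or Madgwick/Mahony filter). The gain values here are placeholders, not known-good numbers for any particular airframe.

class PID:
    """Minimal single-axis PID with integral clamping (anti-windup)."""
    def __init__(self, kp, ki, kd, i_limit):
        self.kp, self.ki, self.kd, self.i_limit = kp, ki, kd, i_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral = max(-self.i_limit,
                            min(self.i_limit, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Typical cascade: an angle PID produces a desired rate, and a rate PID produces
# a motor correction; placeholder gains to be tuned on the real vehicle.
roll_angle_pid = PID(kp=4.0, ki=0.0, kd=0.0, i_limit=50.0)
roll_rate_pid = PID(kp=0.7, ki=0.1, kd=0.003, i_limit=100.0)

The rate-loop output is then mixed onto the motors: added on one pair and subtracted on the opposite pair for roll and pitch, and applied to the two diagonals for yaw.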
|
I was just wondering whether it is possible to buy or build a programmable drone with a robotic arm, hand, or knife.
I want to program a drone to harvest crops:
- object recognition from a live video stream to a server
- identify and grab objects with the arm, making a cut if necessary
- transport the produce to a collection site
I know this would take knowledge from many fields, but do any of you have any foresight into the limitations of doing this, other than energy for power?
Any estimates on the cost of hardware?
|
I am working on my master's thesis about the design and construction of a universal robotic arm.
The goal of my work is to design a 5-DOF robotic arm, something like in the picture:
I need it to be able to lift a weight of about 5 kg. It has to move within an action radius of 1 m, and the rotation speed should be about 1 m/s. The conclusion of my work should be along the lines of: "You can buy an ABB robotic arm, or you can buy this: it can lift this much, turn at this speed and weighs this much." The basic construction should be done too, maybe with some simulation.
First of all, I picked a really bad master's thesis for me; I know that now.
Second, I have about a month to finish it.
I would like to ask someone how to proceed.
I know that the first step is to pick servos/actuators/gearboxes, but which ones?
What is a realistic weight for the whole arm, which should lift another 5 kg? How strong should the motors be, and with what gearboxes?
Is anyone able to help me, maybe via email?
|
I know there are lots of consumer depth image sensors: Kinect, PrimeSense, Structure.io, Leap Motion, ... But I'm looking for something more suitable for integration into a robot: something without a case or with a proper mount, compact, and available for at least the next five years if the robot goes into production. Something similar to the sensors used in this drone: https://www.youtube.com/watch?v=Gj-5RNdUz3I
|
I have a FIRST Robotics spec National Instruments cRIO. I would like to connect a USB wireless Xbox controller to it in order to control it from a distance with minimal extra hardware (which is why I am not using the more traditional WiFi radio method). To this point I have not been able to find either:
A. a sidecar for the cRIO which allows it to act as a USB host, or
B. a method that does not use NI-specific hardware to connect the two together.
If someone who is knowledgeable about industrial systems and robot control could provide some assistance, that would be greatly appreciated. Thanks!
|
I am making a very simple home automation system with infrared remote control using a TV remote. The problem is that I want to buy some relays to switch 230 V AC from my Arduino board, but I can't figure out which one to buy. I don't want to buy a relay module; I want to buy a bare relay.
|
I have a quadcopter using a MultiWii (Arduino Mega) controller. I am curious whether there is any way to connect it to a ROS-capable Raspberry Pi (which I could add to the quad itself).
|
I would like to build a 70 cm articulated robotic arm (not a SCARA one) that can lift between 10 kg and 15 kg (10 kg would already be awesome; this payload includes the weight of the arm + gripper) and moves at 1 m/s (dreaming again :)). The goal is to make it similar to a human arm, since I want to control it "remotely", so the joints should not be able to rotate more than my arm ^^
So I know that I cannot use the available servos (like the overpriced Dynamixel ones) with that payload. I have also already excluded common linear actuators and less common ones like pneumatic actuators (because of latency).
From what I have read, arms like Baxter's use series elastic actuators, so I guess that I should go that way, but there aren't a lot of details on how that works (I keep getting 100x100 photos where you can't see anything) and I have a lot of questions.
The only thing I could understand is that it uses 2 motors and a spring.
Do they use brushed or brushless motors? DC motors or steppers? I read that steppers aren't that good at handling collisions and have difficulties when used at their limits.
Also, how is the spring mounted on the motor?
To sum up, I'm collecting any experience, diagrams, intel, or documents that you have on that topic :)
PS : my budget can't exceed +/- 1500$ for that arm.
|
I want to run two HC-SR04 sensors on one PIC16F877A and send the values measured by the two ultrasonic sensors to the serial port.
This is my code, using the CCS PIC C compiler:
#use rs232(baud=9600,parity=N,xmit=PIN_C6,rcv=PIN_C7,bits=8)
#define e1 PIN_B6
#define t1 pin_B7
#define e2 pin_B4
#define t2 pin_B5
int a;
int distanse(int,int);
void main()
{
  while(1){
    int u1, u2;
    u1 = distanse(e1, t1);
    u2 = distanse(e2, t2);
    printf("%3u", u1);
    printf("%3u", u2);
    delay_ms(1000);
  }
}

int distanse(int e, int t){
  long long counter = 0;
  output_bit(t, 1); delay_us(10); output_bit(t, 0);       // 10 us trigger pulse
  a = input(e);
  while(a == 0){ a = input(e); }                          // wait for the echo to go high
  while(a == 1){ counter = counter + 1; a = input(e); }   // count loop iterations while echo is high
  return counter / 3.333333;                              // crude counts-to-distance scaling
}
But the computer receives random values!
What is the problem?
|
I am implementing a Denavit-Hartenberg forward transform for a 3-axis CNC mill. I know that the kinematics of such a machine are trivial and don't need DH, but I need to make it applicable to other robots too. My implementation does the math correctly (I've verified that with another tool), but the transformation doesn't give me the results I would expect.
I assume that for a 3-axis Cartesian robot with orthogonal prismatic joints (i.e., a CNC mill), the resulting transformation matrix should give me the input parameters (d1-d3) back in its translation vector, but it somehow doesn't. Also, the resulting orientation matrix should contain only "nice" values (corresponding to 90, 180, 270 degrees, etc.) and no odd ones (0.0528, 0.5987, etc.).
Is my assumption wrong?
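
A small symbolic check can make the issue visible: compose standard DH transforms and look at where d1-d3 actually end up. The PPP table below is a hypothetical example, not necessarily your machine's assignment; the point is that with a valid DH frame assignment the rotation entries come out as 0/±1 and the translation is in general a signed permutation of the joint variables, not literally (d1, d2, d3).

import sympy as sp

def dh(theta, d, a, alpha):
    """Standard (distal) DH transform: Rz(theta) * Tz(d) * Tx(a) * Rx(alpha)."""
    ct, st = sp.cos(theta), sp.sin(theta)
    ca, sa = sp.cos(alpha), sp.sin(alpha)
    return sp.Matrix([[ct, -st*ca,  st*sa, a*ct],
                      [st,  ct*ca, -ct*sa, a*st],
                      [ 0,     sa,     ca,    d],
                      [ 0,      0,      0,    1]])

d1, d2, d3 = sp.symbols('d1 d2 d3')
# Hypothetical PPP (Cartesian) table -- substitute your own machine's parameters.
T = dh(0, d1, 0, -sp.pi/2) * dh(-sp.pi/2, d2, 0, -sp.pi/2) * dh(0, d3, 0, 0)
sp.pprint(sp.simplify(T))

For this particular table the translation comes out as (d3, d2, d1) with a 0/±1 rotation block, i.e. the joint values are all present but permuted by the frame assignment. Fractional entries like 0.0528 usually point instead to a wrong alpha/theta entry or a degrees-versus-radians mix-up.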
|
I'm trying to create a function that lets me start a motor more easily, but I'm running into a problem: I don't know which type to use for the motorName argument. I'm using a VEX 269 motor. Here's the function:
void runMotor(MotorTypeHere motorName, int speed, int time)
{
startMotor(motorName, speed);
wait(time);
}
I just don't know what type to put for the motorName argument. What type would it be?
|
I've read the AccelStepper documentation on airspayce.com and it seems it is not possible to accelerate a stepper starting from a speed greater than 0. Acceleration always starts from speed = 0; I tried it with several variations of the code below...
#include <AccelStepper.h>
int onOffPin = 9;
AccelStepper stepper(AccelStepper::DRIVER, 2, 10);
void setup()
{
stepper.setMaxSpeed(1000);
stepper.setSpeed(200);
stepper.setAcceleration(100);
}
void loop()
{
//turn motor on
digitalWrite( onOffPin, HIGH );
// go forwards
digitalWrite( onOffPin, HIGH );
stepper.move(1300);
stepper.runToPosition(); // stepper shall start from speed 200, but it starts from speed 0;
// Now go backwards
stepper.move(-1300);
stepper.runToPosition();
//turn motor off
digitalWrite( onOffPin, LOW );
delay(2000);
}
I also tried to set the speed directly in the library's method void AccelStepper::computeNewSpeed(), but I'm not that good at C++ and can't get it to work.
Does anybody have any ideas?
UPDATE
I tried to write some custom code in AccelStepper.cpp's method void AccelStepper::computeNewSpeed().
My idea was to set the speed manually during acceleration/deceleration whenever the speed is below my intended value. At first I thought it wouldn't be a big deal, but now I see that either my C++ skills are not good enough or I don't understand the library well.
I tried
void AccelStepper::computeNewSpeed()
{
long distanceTo = distanceToGo(); // +ve is clockwise from curent location
long stepsToStop = (long)((_speed * _speed) / (2.0 * _acceleration)); // Equation 16
//now here goes my modification
if (_speed < 200.0 && _speed >= 0 ){
setSpeed(200.0);
}
//I did no modification below this comment
This results in a very slow stepper movement..
|
I've looked around but can't find the answer to what I hope is a simple question. I'm working with a TI SensorTag, and I want to measure the rotation around the unit's Z-axis. Basically I want to attach the tag to a clock pendulum, lie the clock on a table so the tag and clock face point up, and measure the angular displacement of the pendulum as it swings back and forth. I'm hoping the mental image translated well!
My understanding is that I can solve for displacement by multiplying my gyroscope readings by the sampling period and summing them, but I'm not sure how to compensate for drift. So my questions are: is my approach sound, and is the answer to drift to use the changing x and y accelerations? Or would I need to somehow incorporate the magnetometer readings?
Thanks!
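
A minimal sketch of the integration itself, assuming a fixed sample rate and a short stationary capture used to estimate the constant gyro bias (the largest single source of drift over a few minutes). Slowly varying bias would still creep in, which is where a magnetometer- or model-based correction such as a complementary filter comes in.

import numpy as np

def integrate_gyro_z(gyro_z, gyro_z_still, fs):
    """Integrate z-axis rate samples (deg/s) into angular displacement (deg).

    gyro_z_still: samples recorded while the pendulum is at rest, used to
    estimate and remove the constant bias before integrating.
    """
    dt = 1.0 / fs
    bias = np.mean(gyro_z_still)
    return np.cumsum((np.asarray(gyro_z) - bias) * dt)

# Quick self-test with synthetic data: a 1 Hz swing of +/-30 deg and a 0.5 deg/s bias.
t = np.arange(0.0, 5.0, 0.01)
true_angle = 30.0 * np.sin(2 * np.pi * 1.0 * t)
rate = np.gradient(true_angle, t) + 0.5
estimate = integrate_gyro_z(rate, np.full(200, 0.5), fs=100.0)

Note that with the clock lying flat, the rotation axis is parallel to gravity, so accelerometer tilt cannot observe this angle directly; the magnetometer (or the pendulum's own centripetal/tangential acceleration pattern) is what can anchor the drift.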
|
I have been working on trying to get the angle of the Create 2. I am trying to use this angle as a heading, which I will eventually use to control the robot. I will explain my procedure to highlight my problem.
I have the Create tethered to my computer.
I reset the Create by sending Op code [7] using RealTerm.
The output is:
bl-start
STR730
bootloader id: #x47175347 4C636FFF
bootloader info rev: #xF000
bootloader rev: #x0001
2007-05-14-1715-L
Roomba by iRobot!
str730
2012-03-22-1549-L
battery-current-zero 252
(The firmware version is somewhere in here, but I have no clue what to look for--let me know if you see it!)
I mark the robot so that I will know what the true angle change has been.
I then send the following codes [128 131 145 0x00 0x0B 0xFF 0xF5 142 6]. This code starts the robot spinning slowly in a circle and requests the sensor data from the sensors in the group with Packet ID 2. The output from the Create seen in RealTerm is 0x000000000000, which makes sense.
I wait until the robot has rotated a known 360 degrees, then I send [142 2] to request the angle difference. The output is now 0x00000000005B.
The OI specs say that the angle measurement is in degrees turned since the last time the angle was sent; converting 0x5B to decimal is 91, which is certainly not 360 as expected.
What am I doing wrong here? Is the iRobot Create 2 angle measurement that atrocious, or is there some scaling factor that I am unaware of? Are there any better ways to get an angle measurement?
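
One workaround that sidesteps the angle packet entirely is to request the raw left/right encoder counts (packets 43 and 44) and derive the heading change from the differential wheel travel. The geometry constants below are the ones given in the Open Interface spec as I read it (508.8 counts per wheel revolution, 72.0 mm wheel diameter, 235.0 mm wheel base); double-check them against your copy, and note the counts are 16-bit values that wrap around.

import math

COUNTS_PER_REV = 508.8            # encoder counts per wheel revolution (per OI spec)
WHEEL_DIAMETER_MM = 72.0
WHEEL_BASE_MM = 235.0
MM_PER_COUNT = math.pi * WHEEL_DIAMETER_MM / COUNTS_PER_REV

def delta_counts(new, old):
    """Signed difference of two 16-bit encoder readings, handling wrap-around."""
    d = (new - old) & 0xFFFF
    return d - 0x10000 if d >= 0x8000 else d

def heading_change_deg(left_new, left_old, right_new, right_old):
    """Heading change (deg, CCW positive) from raw encoder counts (packets 43/44)."""
    d_left = delta_counts(left_new, left_old) * MM_PER_COUNT
    d_right = delta_counts(right_new, right_old) * MM_PER_COUNT
    return math.degrees((d_right - d_left) / WHEEL_BASE_MM)

Your 91-for-360 observation is also consistent with reports that the packet 20 value does not actually come out in degrees on some firmware revisions, which is another reason to prefer computing the heading from the encoders (or at least calibrating your own scale factor).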
|
I'm working on a project to make a SmartBall that can detect the velocity (km/h), spin (degrees per second) and flight path (trajectory) of the ball, using an Intel Edison with the 9DOF block (LSM9DS0: 3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer) and the battery block. I read values from the 9DOF block with RTIMULib (a library for IMU chips).
I've been working on integrating the acceleration data from the accelerometer to get the velocity and then the position. I know this method is not very accurate, since the integration error accumulates very fast, but I rely on the fact that my calculations will be done over a very short time (about 3 seconds) and then restarted from the beginning after every kick, so the error doesn't accumulate badly. I also only need acceptable accuracy, not very high accuracy.
I then realised that I'm dealing with projectile motion (ball kicking), so after looking into projectile motion equations I found that I must know the initial velocity and the angle of projection (theta) to get my requirements. My problem is that I don't know how to get either of these. I tried different approaches, such as getting the horizontal distance and the height, taking their resultant (using Pythagoras) and then the angle (assuming a right triangle) over a very small time at the beginning of the projection, but I still couldn't get the height. The gyroscope outputs roll, pitch and yaw angles related to the sensor orientation, but I'm not using this yet, since the sensor will be fixed inside the ball and its orientation will not be the same as the projection angle.
What I really want is any approach/idea on how to get the velocity and flight path of a projectile using accelerometer and gyroscope data. I hope I made it clear; any help on how to get my requirements is really appreciated. Thanks so much.
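
For reference, a sketch of the relations involved, under the usual drag-free, flat-ground assumptions (a real kicked ball also sees drag and Magnus lift from spin, so treat these as first approximations). If the world-frame launch velocity $\mathbf{v}_0 = (v_x, v_y, v_z)$ (with $z$ up) is known at the instant the ball leaves the foot, then
$v_0 = \|\mathbf{v}_0\|$, $\theta = \mathrm{atan2}\left(v_z, \sqrt{v_x^2 + v_y^2}\right)$,
and the trajectory follows as
$t_f = \frac{2 v_0 \sin\theta}{g}$ (time of flight), $R = \frac{v_0^2 \sin 2\theta}{g}$ (range), $h = \frac{v_0^2 \sin^2\theta}{2g}$ (apex height).
The launch velocity itself is best obtained by integrating the accelerometer only over the brief foot-contact interval, after rotating each specific-force sample into the world frame with the gyro-derived orientation and subtracting gravity; once the ball is airborne the accelerometer reads roughly zero (free fall), so integrating further mostly accumulates noise.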
|
I was wondering how I could determine a robot's distance from a fixed point when the robot itself is constantly changing positions. I can keep encoders on the wheels and can also get data from a gyroscope and an accelerometer.
|
I'm wondering if there's a feature to "flip" the rotation direction with Dynamixel (I'm using MX-106). For example, if I give +1.57 to the motor, then it interprets it as -1.57. And the other way around.
I'm using its ROS driver package that doesn't seem to explicitly claim that there's a feature to do this, although there is a question where a user reported he was able to do this from source code. But I failed to replicate. So I first wanted to ask about the capability of the device itself since I don't know if the limitation comes from the Dynamixel device or from ROS driver.
Thank you.
(UPDATE) Usecase of mine is that I have multiple robots where the direction of how Dynamixel is attached is different per robot, and ideally like to flip the motor's direction at driver's level so that I can keep using the same controller software.
|
I am building a quadcopter using the Arduino Uno with a 6dof accelerometer and gyro. I will be adding a separate 3 axis magnetometer soon for heading. I have successfully implemented the code that reads the data coming in from these sensors and prints them out.
I have also checked for bias by averaging 100 measurements. My code calculates the pitch from the accelerometer and the pitch from the gyro respectively:
pitchAccel = atan2((accelResult[1] - biasAccelY) / 256, (accelResult[2] - biasAccelZ) / 256)*180.0 / (PI);
pitchGyro +=((gyroResult[0] - biasGyroX) / 14.375)*dt;
I am then using a complementary filter to fuse the two readings together like this:
pitchComp = (0.98*pitchGyro) + (pitchAccel*0.02);
I am stuck on how to proceed from here. I am using the same procedure for roll, so I now have readings for pitch and roll from their respective complementary filter outputs.
I have read a lot of articles on the DCM algorithm which relates the angles from the body reference frame to the earth reference frame. Should that be my next step here? Taking the pitch and roll readings in the body reference frame and transforming them to the earth reference frame? Repeat the entire procedure for yaw using the magnetometer? If yes, how should I go about doing the transformations?
I understand the math behind it, but I am having a hard time understanding the actual implementation of the DCM algorithm code-wise.
Any help is appreciated!
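
A minimal sketch of the DCM propagation step, assuming body-frame gyro rates in rad/s and a rotation matrix R that maps body to earth; the accelerometer (and later the magnetometer) are then blended in as small corrective rotations, exactly like the scalar complementary filter above but applied to the matrix.

import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[    0, -w[2],  w[1]],
                     [ w[2],     0, -w[0]],
                     [-w[1],  w[0],     0]])

def dcm_update(R, gyro_rad_s, dt):
    """One first-order propagation step R <- R (I + [w]x dt) of the body-to-earth DCM,
    followed by re-orthonormalisation (via SVD here) to undo numerical drift."""
    R = R @ (np.eye(3) + skew(np.asarray(gyro_rad_s)) * dt)
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt

# Euler angles can be read back from R when needed, e.g. pitch = -asin(R[2, 0])
# for a ZYX convention (sign conventions vary, so verify against your frames).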
|
I'm working on my own ROV project, but I found that OpenROV has a ready-to-use image for my BeagleBone, so I want to use that instead of writing my own program. I have already deployed the image, but I can't find which three pins send the PWM signals for the ESCs. Please help.
|
I have a PTU system whose transfer function I need to determine. The unit receives a velocity and a position, and moves towards that position with the given velocity. What kind of test would one perform to determine the transfer function?
I know Matlab provides a method. The problem, though, is that I am a bit confused about what kind of test I should perform, and how I should use Matlab to determine the transfer function.
The unit which is being used is a Flir PTU D48E
---> More about the system
The input to the system is the pixel displacement of an object from the center of the frame. The controller I am using now converts pixel distances to angular distances multiplied by a gain $K_p$. This works fine. However, I can't seem to prove why it works so well; I mean, I know servo motors cannot be modeled like that.
The controller is fed with the angular displacement and the current position, which added together give me the angular position I have to go to.
The angular displacement is used as the speed it has to move with, since a large displacement gives a high velocity.
By updating both elements at different frequencies I'm able to step down the velocity such that the overshoot gets minimized.
The problem here is: if I have to prove that the transfer function I found fits the system, I have to do tests somehow using the ident function in Matlab, and I'm quite unsure how to do that. I'm also a bit unsure whether the PTU already has a controller within it, since it moves so well; I mean, it's just simple math, so it makes no sense that it would behave this well otherwise.
|
What is the difference between Smoothing and Mapping (SAM) and Simultaneous Localization and Mapping (SLAM)? These general approaches seem closely related. Can someone describe the differences?
|