I bought some batteries for a project with the recommended voltage but with too low an amp-hour (Ah) rating.
The batteries are to be connected in parallel. I can't currently afford to upgrade both batteries, but I was curious whether I could upgrade just one of them and connect it in parallel with one of the ones I have now. My robot is going to weigh less than the one it is based on, so perhaps upgrading just one is sufficient.
Or is that unsafe?
|
When stereo algorithms perform matching, they usually compare pixels within a region around the feature. I would like to know what effect the size of that search window has on matching performance.
And what sort of features stop getting detected as you keep increasing the window size?
Thank you.
|
I came across RL (the Robotics Library) for programming robotic manipulators. But according to the website, the currently supported platforms are only Ubuntu and Windows. Since Ubuntu and macOS are both UNIX-like, is it possible to deploy RL on macOS with Xcode?
Link : http://www.roboticslibrary.org
Thank you.
|
As the title says, I want to calculate how much speed (km/h) is required to lift 1 kg of weight.
Please explain this to me as simply as you can, since I'm really a noob at this.
Thanks.
|
I want to write my own kinematics library for my project in C++. I understand that there are a handful of libraries like RL (Robotics Library) and ROS with inverse kinematics solvers. But to my dismay, these libraries do NOT support the macOS platform. I have already written the forward kinematics part, which was quite straightforward. But about the inverse kinematics part I am quite skeptical, since the solution to an IK problem involves solving sets of non-linear simultaneous equations. I found out that the Eigen/Unsupported 3.3 module has APIs for non-linear equations. But before I begin on this uncertain path, I want to get some insight from you on the plausibility and practicality of writing my own IK library. My manipulator design is rather simple, with 4 DoF, and the library will not be used for other manipulator designs. So what I am trying to achieve is a tailor-made IK library for my particular manipulator design rather than a 'universal' library.
So,
Am I simply trying to reinvent the wheel here by not exploring the already available libraries? If yes, please suggest examples of IK libraries for the macOS platform.
Has anyone written their own IK library? Is it a practical solution, or is it a rather complex problem that is not worth solving for a particular manipulator design?
Or should I just migrate all my project code (OpenCV) to a Linux environment and develop the code for IK in Linux using existing libraries?
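For what it's worth, the kind of iterative solver I have in mind looks roughly like the sketch below (Python, just to test the math before porting to C++; fk(q) is a placeholder for my forward kinematics returning the end-effector position, and the damping value is a guess).

import numpy as np

def numerical_jacobian(fk, q, eps=1e-6):
    # Finite-difference Jacobian of the FK position with respect to the joint angles.
    p0 = fk(q)
    J = np.zeros((len(p0), len(q)))
    for i in range(len(q)):
        dq = np.array(q, dtype=float)
        dq[i] += eps
        J[:, i] = (fk(dq) - p0) / eps
    return J

def solve_ik(fk, q0, target, iters=200, damping=0.01, tol=1e-4):
    # Damped least-squares (Levenberg-Marquardt style) IK iteration.
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target, dtype=float) - fk(q)
        if np.linalg.norm(err) < tol:
            break
        J = numerical_jacobian(fk, q)
        # dq = J^T (J J^T + lambda I)^-1 * err
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(J.shape[0]), err)
    return q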
|
Suppose you have a control card. We do not have access to the control card's code and we do not know which control algorithm is used (PID, PD, PI, fuzzy, ...). We can only measure the control inputs we apply and the output signals the card generates. For example, on a quadrotor card the inputs are yaw, pitch, roll and throttle, and the outputs are the propeller PWM signals. Can we predict the algorithm used and its coefficients/properties? Is there such a method or study? There are many ready-to-use control cards that we could imitate or improve for robot control, so I think it is important to develop a method for extracting/predicting the algorithm running on a control board. Thank you for your help and your views.
|
I have an Arduino and MATLAB with the hardware support package for Arduino.
I want to generate an SPWM (sinusoidal pulse width modulation) signal as the output of the Arduino board.
I could generate the required signal in MATLAB using this code:
function spwm = SinWave(frequency)
    nsamples = 1250 * frequency;
    t = linspace(0, 1, nsamples);
    sn = sin(2*pi*frequency*t);            % reference sine wave
    st = sawtooth(2*pi*frequency*10*t);    % carrier at 10x the reference frequency
    spwm = abs(sn) > abs(st);              % compare to produce the SPWM pattern
    plot(t, sn);
    hold on;
    plot(t, st);
    plot(t, spwm);
    axis([0, 1, -1.2, 1.2]);
end
Now spwm holds the samples of the signal. I tried sending it on pin 13 using the following function:
function writeSPWM(arduino, spwm)
    for k = 1:length(spwm)
        writeDigitalPin(arduino, 'D13', spwm(k))
    end
end
Then I used the following two lines in the command window:
a = arduino()
writeSPWM(a, SinWave(5))
I get the right signal shape, but at a very low frequency (the period is much longer than it should be).
Is there a better way to achieve my goal? Using MATLAB is necessary, but I have no problem combining MATLAB code with Arduino C.
|
I am trying to perform uncertainty-aware planning, where my planner tries to connect start and goal in such a way that the resulting path yields the smallest covariance at the end. This is inspired by techniques such as LQG.
The way I 'estimate' the covariance that would result from a certain path is by using the EKF equations while assuming maximum-likelihood observations. I am testing on the 'light-dark' scenario used in many papers: a 2D robot traverses the environment, and there is a specific region where it receives measurements that reduce the covariance greatly. Hence, the uncertainty-aware planner tries to take the robot to this 'light' area, receive good measurements, and then proceed to the goal. As seen in this picture from [1], the final covariance drops significantly with this modified path compared to the shortest path from start to goal (ignore the red line).
https://i.stack.imgur.com/dPYDC.jpg
I am trying to replicate similar behavior with my planner. It does result in covariance reduction compared to some other path that doesn't visit the good area, but the reduction isn't significant at all. In my sample environment, which is a 20x20 grid, the X coordinate of 17 represents the 'light' area, so I express the environmental noise as the matrix
$\begin{bmatrix} x-17+0.01 & 0 \\ 0 & x-17+0.01 \end{bmatrix}$
and hence get a (0.01, 0.01) diagonal matrix whenever I'm precisely at the x = 17 column of the grid. The problem is, my result looks something like this, with the covariance ellipses plotted in red (the spans of which I obtain from the eigenvalues of the matrix).
Although the robot does visit the good area thanks to the planner, my covariance still increases rapidly once it leaves, so I'm guessing I am making a mistake in my EKF equations. This is how I am 'simulating' the covariance at coordinates x2 when stepping from x1 to x2, with P1 being the covariance at x1 (adapted from equations in some open-source code).
function P2 = predictCovariance(P1, x1, x2)
    H = eye(2);                                        % measurement Jacobian
    u = x2 - x1;                                       % step between the two poses
    G = [u(1) 0 ; 0 u(2)];                             % process-noise Jacobian
    Q = eye(2);                                        % process noise
    R = eye(2);                                        % measurement noise
    M = [(x2(1)-17 + 0.01) 0 ; 0 (x2(1)-17 + 0.01)];   % state-dependent 'light-dark' noise scaling
    P = P1 + G*Q*G';                                   % prediction
    S = H*P*H' + M*R*M';                               % innovation covariance
    K = (P*H')/S;                                      % Kalman gain
    P2 = (eye(2)-K*H)*P;                               % update assuming max-likelihood observation
end
[1] Van Den Berg, Jur, Sachin Patil, and Ron Alterovitz. "Motion planning under uncertainty using iterative local optimization in belief space." The International Journal of Robotics Research 31.11 (2012): 1263-1278.
|
I am doing a project to draw images using a robotic arm. First, edge detection is performed on the image to be drawn, and coordinate values are obtained as (x, y).
How do I calculate the three joint angles (inverse kinematics) from these values?
|
Does the Create 2 have firmware support (navigation, brush & fan control, ...) to support the vacuum functionality?
Can I use parts from my 655 or 805 Roomba to convert the Create 2 back to a vacuum?
|
I want to make a 4-legged robot like a spider or a dog, but I don't know how to use kinematics to make it walk and run. I didn't find any resource explaining how these types of robots walk and run while keeping their center of gravity balanced at each step, or how they move their legs to travel in any direction. Right now I am building the robot in a Gazebo simulation to test it; only then will I move to real hardware.
|
I am currently considering joining the MicroTransat challenge and developing an autonomous boat able to survive the harsh ocean environment.
However, for safety and to ensure the boat doesn't sink due to collisions, I am looking for possible ways to implement a basic obstacle avoidance system. The prototype will harvest power from solar panels (and maybe a Stirling engine) and will run a Raspberry Pi as the route planner.
Up to now I have come up with the following:
AIS receiver: useful to track ships above 300 t, and simple receivers don't use too much power;
Hydrophone array: may sense boat engines and the direction of the sound, but very unreliable as a system;
Lidar/ultrasonic rangefinder: not suitable for the marine environment;
Computer vision: difficult to implement but highly reliable; I guess sea waves will be an issue.
Ideally, the boat should keep away from other boats, floating objects and rocks, although just ship avoidance would already be very helpful.
Which one(s) do you believe could be included to offer basic obstacle avoidance?
|
I have a radar mounted on a car. For each detection, the radar returns these variables:
relative distance between the object/host vehicle in forward direction
standard deviation of the relative forward distance
relative distance between the object/host vehicle in left/right direction
standard deviation of the relative left/right distance
I'm trying to do a coordinate transformation of the above data, so that I get the relative distance / standard deviation in global coordinates (North, South, East, West).
The distance is easy, since it only requires rotating the axes by the angle between the vehicle's body-fixed axes and the global axes.
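For the distance part, what I do is essentially this (a rough sketch; yaw here stands for the assumed heading of the body-fixed axes relative to the global axes, in radians):

import numpy as np

def body_to_global(dx_forward, dy_left, yaw):
    # Rotate a body-frame offset (forward, left) into the global frame.
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return R @ np.array([dx_forward, dy_left])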
How about standard deviation? How do I transform the standard deviation from the vehicle body-fixed axis to global axis?
|
I am trying to write my own flight controller for a quadcopter that is controlled by a remote over radio signals. For the flight controller I have to buy an inertial measurement unit (IMU). The problem is that I visited two different sites, both selling an MPU6050 triple-axis accelerometer and gyro. Both sites use the same name, but the IMU boards look different (one of them is from hallroad.org). Are they the same IMU or different ones? If they are different, what is the difference between them, and which is best for a quadcopter flight controller?
|
I have a Chinese drone (it cost less than $100) that works with a 2.4 GHz transmitter. I am trying to build an application in C++ to control the drone from a PC. I am thinking of using an Arduino and an NRF24L01 module, but I have not been able to establish communication with the drone. I don't understand the form of communication very well. Is there a post, information, book, blog or idea that can help me?
|
I am developing a line-follower robot and I am not sure how many motors I should use: two or four. I'm thinking of using four, but I do not know if it's worth it (it will make the car heavier, consume more power, ...). Does anyone have an idea? I'm planning to use something like this design by Aniki Hirai: http://anikinonikki.cocolog-nifty.com/.shared/image.html?/photos/uncategorized/2014/11/19/cartsix04.jpg.
The motor I'll use is a micro metal gearmotor from Pololu, like the one in this link:
https://www.pololu.com/product/3048.
I know the question is a little bit vague, but I don't know another way to ask this.
|
I am about to build my own quadcopter from scratch. However, I am having trouble understanding how it is possible to control the quadcopter without knowing the current RPM of the BLDC motors. From my understanding, the RPM is needed to calculate the thrust force etc. in the mathematical model that will be used for control.
The ESCs I have seen have two wires connecting them to the flight controller: the first is GND and the second is the signal wire used for sending the PWM signal, so no information about the motor speed. There is also the IMU, which provides information about the acceleration of the whole aircraft, but again no information about the motor speed.
I would be grateful if someone could explain in detail how this works.
|
In my program, I need to detect whether the NXT touch sensor is pressed.
var nxt = new Brick<Sensor, Sensor, Sensor, Sensor>("usb");
nxt.Connection.Open();
nxt.Sensor1 = new TouchSensor();
nxt.Sensor1.Reset(false);
nxt.Sensor1.Initialize();
Console.WriteLine(nxt.Sensor1);
When I start the program, the sensor value always reads 0. But I discovered that if I go into the "View" menu on the NXT and look at the touch sensor value there, the program then reads 1. I can't do that in my setup. Also, I can't use Bluetooth; my computer doesn't have it. Can someone help me?
EDIT: my full code
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using MonoBrick.NXT;
using MonoBrick;
using System.Windows.Forms;
using System.Reflection;
using System.Threading;
namespace MonoBrick
{
    class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            try
            {
                var nxt = new Brick<Sensor, Sensor, Sensor, Sensor>("usb");
                nxt.Connection.Open();
                nxt.Sensor1 = new TouchSensor();
                nxt.Sensor1.Reset(false);
                nxt.Sensor1.Initialize();
                Console.WriteLine(nxt.Sensor1.ReadAsString());
                nxt.Beep(500);
                System.Windows.Application application = new System.Windows.Application();
                application.Run(new Window1());
                nxt.Connection.Close();
            }
            catch (Exception e)
            {
                Console.WriteLine("Error: " + e.Message);
                Console.WriteLine("Press any key to end...");
                Console.ReadKey();
            }
        }
    }
}
|
In the attached figure, I show a graphical representation of the problem I am facing. I have developed a humanoid robot whose thigh makes a fixed angle with respect to the leg. This means there is always a constant distance (R), whatever the rotation of the thigh, between the pelvis and the leg. Besides, the foot is articulated with a forefoot and a midfoot.
I want to compute the orientation of the leg and thigh for a given position and orientation of the pelvis (represented by the point $C$) and of the foot (represented by $A$, the position of the ankle, and $\vec{U}$, the orientation of the foot).
I come up with the geometrical problem of computing a plane $P$, passing through $C$ with one orientation given by $\vec{U}$, which is tangent to a sphere whose center is the extremity of the pelvis (point $C$).
Knowing the point of tangency $T$, I can compute the position of the knee $K$ and then the angular values for the leg and the thigh.
But I cannot find the equations to solve this so far... or maybe there is another geometrical solution I did not think of?
I am trying to find a geometrical answer before going through a DH description and finding the values via decoupling and so on (the 'classical' IK resolution).
|
I am new to the MPC idea and I am trying to understand the key concepts, but there are two things that I find confusing and I didn't find answers for them.
The first one is about the optimized control-signal sequence, which is computed from the cost function. If we want to predict, say, five steps ahead, then we will have 5 control signals. After the calculation is done, we apply only the first signal of the sequence to our system, and the remaining four are "wasted" (I read that they can be used as an initial guess for the next optimization, but that's not my point here). My first question is: why don't we just predict one step ahead and, instead of optimizing five signals, restrict the optimization to just one, which makes the computation faster?
The second question is about constraints. Let's say that I have some restriction on my input signal, say $0 < u < 5$. With some math, we can include these constraints in the optimization task, but it takes more time to solve. Why don't we just do an unconstrained optimization and, after our input signals are ready, apply the constraints to them? Obviously this cannot be done for state constraints, but I am interested in input constraints.
Thanks for your answers in advance.
|
I would appreciate your help in choosing an appropriate real-time SLAM algorithm.
I am interested in building an autonomous robot, so a SLAM component is needed for autonomous navigation.
The platform is a Kobuki robot (a TurtleBot 2 without the Kinect), which has wheel odometry.
To get better localization, I want to use a camera (fisheye), optionally looking upwards.
There are a lot of algorithms available online, most of which output 6DoF (do I need 6DoF for a wheeled robot? (x, y, theta) is enough, right?). I tried several algorithms, but still haven't found the optimal one.
I believe that loop closure is needed as well, because the final application is a robot that covers the area (visits each point once).
ROS is used to implement this robot, with a small dev board (Intel Atom x5 processor).
Thanks in advance.
|
I am trying to write an EKF that can estimate the covariance of a pose estimate, where the estimate comes from a PnP algorithm and 3D-2D correspondences in images. Although EKF-based camera SLAM is pretty common, I've noticed that those techniques usually integrate IMU data and also refine the map points along with the pose of the robot itself, thus treating both the map and the localization as somewhat unknown. But I want to consider the map points as predefined and stable, and just compute the pose of a camera in 6 DOF, a relationship that is expressed simply between 3D and 2D points as
$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$
where $R$ and $t$ are the unknowns. The result can be further refined by minimizing the reprojection error, but I am wondering how this can be cast as an EKF measurement equation, and thereby used to estimate the state covariance, with my state containing the 3D coordinates and the Euler angles of the camera pose as $[x, y, z, r, p, y]$.
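To make what I mean concrete, this is roughly the measurement function I am picturing (a Python sketch; the Z-Y-X Euler convention and treating $[x, y, z]$ directly as the translation $t$ in the equation above are my own assumptions, not part of any particular library):

import numpy as np

def h(state, X_world, K):
    # Project a known 3D map point into the image, i.e. K [R|t] [X Y Z 1]^T.
    x, y, z, roll, pitch, yaw = state
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R = Rz @ Ry @ Rx                         # assumed Z-Y-X Euler convention
    t = np.array([x, y, z])
    uvw = K @ (R @ np.asarray(X_world) + t)  # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]                  # predicted measurement (u, v)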
|
Any ideas on ways to install a kill switch on the Create 2? I saw how the battery attaches to the main board with springs, so that blocks me from using a plain old battery switch. Are there any test points that can be grounded to shut down when in full mode?
|
I am aware that the algorithm for monocular SLAM is complex compared to stereo SLAM. But my question is: if it is possible, by any means, to do SLAM using one camera, why should one use two cameras to do the same thing?
|
I've implemented a simple robot simulation based on the equations for EKF localization with known correspondences found in Probabilistic Robotics by Thrun et al. Everything seems to be working, but I noticed that the covariance matrix has odd behavior when doing prediction only.
When I move the robot with a forward velocity and some angular velocity, and no correction, the covariance grows (in the size of its eigenvalues), as expected. But when I move the robot backwards with some angular velocity, the covariance shrinks a bit before growing again. I expect the covariance to always increase if there is no correction.
I checked my implementation many times for errors, but I now suspect this issue may be due to the fact that the Jacobians V and M use signed values of velocity and angular velocity instead of absolute values.
Here is a video showing the covariance shrinking and growing. In the video, the true pose of the robot is in green, with an imaginary depth sensor shown as a green cone. There are no landmarks, so there is no correction step. The gray is the estimated pose plus the covariance ellipse at a 95% confidence interval.
https://www.youtube.com/watch?v=RcHkCijyG7c&feature=youtu.be
UPDATE:
I've attached a graph that illustrates the issue better. Two curves are shown below. The red one is running the EKF prediction on noise-free input and plotting the area of the covariance ellipsoid; the area (uncertainty) monotonically increases, as expected. I repeat the same thing, but at t=10 I invert the velocity, resulting in the blue curve. There is an oscillation in the area for a reason yet to be determined.
The Octave script I wrote can be found here http://pastebin.com/rQyczVbm
|
I have a project that needs data about which direction something has moved and how quickly, from a given starting point (accelerometer and magnetometer?). I have working Python code for the BNO055; I'm just having trouble interpreting its output. These are my specific questions:
Is the data relative to a starting point (doesn't seem likely) or to a time period that keeps running? If so, what is that time interval? Do I define it by how often I ask for the data?
Are magnetometer and accelerometer data what I should be using for my task?
What is a rough pseudocode outline for how I could convert this data into something easier to understand (i.e. an updating distance or x, y, [z] coordinate system)? A sketch of the kind of loop I have in mind is shown after this list.
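This is the kind of naive integration loop I am imagining (a sketch only; read_accel is a placeholder for however the library exposes linear acceleration in m/s^2 with gravity removed, and I know drift would grow quickly without some correction):

import time

def track_position(read_accel, duration=10.0):
    # Dead reckoning: integrate acceleration twice to get position relative to the start.
    vx = vy = px = py = 0.0
    last = time.time()
    end = last + duration
    while time.time() < end:
        ax, ay = read_accel()
        now = time.time()
        dt = now - last
        last = now
        vx += ax * dt          # velocity update
        vy += ay * dt
        px += vx * dt          # position update
        py += vy * dt
    return px, py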
|
I am taking a course on AI robotics in a computer science department, but my background is in mechanical engineering. I am having some difficulty with ambiguous terminology around virtual potential fields. All of the sources I have seen define the virtual potential field with a physics basis:
$$F(q) = \nabla U(q) $$
or that the force imposed by the potential field is the gradient of the potential function.
Then, the CS sources I have seen will later set the velocity set point of the robot controller equal to:
$$\mathbf{q}_s = \nabla U(q)$$
essentially using the imposed force as a velocity setpoint. But none of the sources mention this swap. So am I missing something, or is the term virtual potential field sort of a misnomer? Maybe it should be virtual velocity potential field?
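To make the question concrete, here is a small sketch of the pattern I see in the CS sources (the potential U and the gain are placeholders of mine; the sign convention follows the slides as quoted, although many texts put a minus sign in front of the gradient):

import numpy as np

def grad_U(U, q, eps=1e-4):
    # Central-difference numerical gradient of a scalar potential U at configuration q.
    q = np.asarray(q, dtype=float)
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (U(q + dq) - U(q - dq)) / (2 * eps)
    return g

def velocity_setpoint(U, q, gain=1.0):
    # The "force" of the field is used directly as the commanded velocity.
    return gain * grad_U(U, q)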
Here are some CS course slides I am looking at:
http://cs.gmu.edu/~kosecka/cs685/cs685-potential-fields.pdf
http://www.cs.cmu.edu/~motionplanning/lecture/Chap4-Potential-Field_howie.pdf
Thank you!
|
Can anyone give me a typical frequency at which a driverless vehicle could generate steering and motor thrust commands?
I am trying to model a driverless vehicle in MatLab. Right now the vehicle is generating commands at 50Hz and the integrator is solving the dynamics at a 0.01s timestep (100Hz). I just want to know if this is realistic.
I would be even more grateful if you could point to a published source for this number.
|
I would like to know how to go about evaluating 3D occupancy grid maps.
I have a SLAM system that produces a 3D OGM (in .bt format using octomap/octovis)
I also have a ground truth OGM in same .bt format.
How do I compare the accuracy of my map to the ground truth map in a qualitative and quantitative way?
Important notes:
The two maps may not be the same scale.
One map may be less dense than the other.
One method I have thought about using is MRPT's occupancy-grid matching application.
This would require me to send both 3D maps as messages to the octomap_server node in ROS, get the resulting map in Rviz, save a 2D image of each separately, then somehow convert the images to MRPT's .simplemap file format, and finally run MRPT's grid-matching program on the two files.
Surely there is a better/more accurate way?
EDIT:
So I did more research, and another route I could take is the Matthews correlation coefficient (MCC). I could compare the two maps by iterating over each cell, comparing my result to the ground truth and counting the true and false positives and negatives; a sketch of what I mean is below.
The only problem with this is that I have to assume that the two maps have the same scale and are in the same orientation.
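For reference, the per-cell comparison I have in mind looks like this (a sketch; it assumes the two maps are already converted to boolean occupancy arrays of the same shape, which is exactly the alignment problem I mention):

import numpy as np

def mcc(pred_occ, truth_occ):
    # Matthews correlation coefficient between two aligned boolean occupancy grids.
    tp = np.sum( pred_occ &  truth_occ)
    tn = np.sum(~pred_occ & ~truth_occ)
    fp = np.sum( pred_occ & ~truth_occ)
    fn = np.sum(~pred_occ &  truth_occ)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return ((tp * tn) - (fp * fn)) / denom if denom > 0 else 0.0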
If you have any ideas on solving these scale and orientation issues don't be shy.
|
I am currently working on a self-balancing robot project. I am going to use an MPU6050 to get data from both the accelerometer and the gyroscope. Since I need accurate data within a very short amount of time, I need to filter the raw data I get. Many people have suggested the Kalman filter, but I could not comprehend it (the maths behind it). Are there any other types of filters I can use in my project?
Thanks in advance.
|
I am trying to understand optimal control theory, which forms the basis for reinforcement learning techniques in AI. Whenever I open a lecture, a book or any online notes, everything starts with an ODE, and then the derivation moves on to the payoff function, which is straightforward.
I am trying hard to comprehend why an ODE models any system. Many say it is easy to begin with, but why this model?
$dx/dt = f(x(t))$
I could not find the reason and decided to ask for help.
|
For a humanoid robot, in which way should we determine the DH parameters:
DH parameters for each leg separately,
or DH parameters for the two legs together?
And whichever way I calculate them, what about the trunk? How can I insert it into the DH calculation?
|
The following code applies a force to a joint in an update-method call. The problem is that the force seems to dissipate / be applied to other parts of the model as well, specifically the chassis, which holds the rotating laser. How can I circumvent that?
The chassis should be a moving vehicle, so I can't just fix it to the ground plane using a fixed joint.
This is my OnUpdate() from within the Gazebo plugin. Essentially it makes the joint rotate back and forth on a specified axis.
public: void OnUpdate(const common::UpdateInfo & /*_info*/) {
    rotation = this->joint->GetAngle(0);
    this->joint->SetStopDissipation(0, 0);
    double degree = rotation.Degree();
    if (degree <= -90) {
        this->joint->SetForce(rotationAxis, effort*2);
    }
    else if (degree >= 90) {
        this->joint->SetForce(rotationAxis, -effort*2);
    }
    std::cerr << degree << "\n";
}
The definition from the model.sdf is this:
<joint name="back_and_forth_joint" type="revolute">
  <child>laser</child>
  <parent>chassis</parent>
  <axis>
    <xyz>1 0 0</xyz>
    <limit>
      <lower>-1.57079633</lower>
      <upper>1.57079633</upper>
    </limit>
  </axis>
</joint>
Thanks.
Update:
One possibility is to simply add mass to the chassis, like so:
<link name='chassis'>
  <pose>0 0 .1 0 0 0</pose>
  <inertial>
    <mass>10</mass>
  </inertial>
|
I'm working on a robotics project and I've got this idea for a self-learning algorithm. I'm looking for some feedback on it, specifically on whether this is a common way of doing things.
I simply want to store a lot of numerical log data about previous actions the robot took and the results it got. I then want to let it search through the DB multiple times per second, so that the robot can make decisions based on that data and thus learn from its actions (like humans do).
So I constantly log a lot of data. A simple log record could be, for example:
speed: 5.43
altitude: 35.23
wind_speed: 6.19
direction: 27
current position: [12, -20]
desired position: [18, -25]
steering decision: 23
success: 12
The desired position is a coordinate on a 2-dimensional matrix, and the success is how close the robot came to the desired position within 10 seconds (so the lower the better).
I then have a certain situation for which I want to find a comparable experience in the database. So let's say my current situation is this:
speed: 5.13
altitude: 35.98
wind_speed: 7.54
direction: 24
current position: [14, -22]
desired_position: [17, -22]
As you can see, it doesn't have a steering decision or a success value yet, because it still has to make a steering decision, and only after taking that action and seeing the result can the success value be calculated.
So I want to search for the record "closest" to my current situation within certain boundaries, and which has the best (lowest) success value. For example, the boundaries could be that the direction cannot differ by more than 10% and the heading by more than 15%. So I first make a selection based on that. I'm left with A LOT of records. I then calculate, for every field in every record, the percentage difference, and accumulate those per record so that I get some sort of "closeness factor". Once that's done I combine the closeness with the success, order by that number, and take the top record to base my decision on for the action to be taken.
I take the steering decision of that record and randomly change it to something within a certain change-percentage range. I do this because people also experiment: you try something new every time. And as the system becomes better at steering the robot (success values get better), I also reduce the randomization percentage. This is because as people get better, they know that they are closer to the objective and they don't need to experiment as much any more.
To steer my robot I will run this process as often as possible (I will aim for 20 times per second) and use this system to steer my robot; a rough sketch of the selection step is shown below.
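In rough Python terms, I am picturing something like this (untested; within_bounds stands for the 10%/15% boundary filters described above, and the field names come from the log record format):

import random

def closeness(record, situation, fields):
    # Accumulated per-field percentage difference; lower means more similar.
    total = 0.0
    for f in fields:
        ref = abs(situation[f]) if situation[f] != 0 else 1e-9   # avoid division by zero
        total += abs(record[f] - situation[f]) / ref
    return total

def pick_steering(log, situation, fields, explore=0.1):
    # Rank comparable records by closeness plus past success (lower is better),
    # then perturb the chosen steering decision by a shrinking random amount.
    candidates = [r for r in log if within_bounds(r, situation)]   # hypothetical boundary filter
    if not candidates:
        return None                                               # no comparable experience yet
    best = min(candidates, key=lambda r: closeness(r, situation, fields) + r["success"])
    return best["steering decision"] * (1 + random.uniform(-explore, explore))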
So my questions are;
Do you think that this can work?
Is this kind of thing used more often?
Does anybody know a database which would make it possible to query based on a dynamically calculated factor (the closeness factor)?
|
I have a robot with 4-wheel drive using mecanum wheels, to allow for more mobility. For the engineering documentation I need to find both the translational velocity and the rotational velocity of the robot. I started with the (incorrect) assumption that the tangential velocity of the wheels was the overall linear velocity, but that yielded unreasonably high values. What is the correct way to mathematically evaluate the translational and rotational velocity of the robot?
|
I am trying to find a command to identify when a call is received and when the user presses any number button on their phone. I am using an Arduino + SIM900A + Ethernet. I have tried a lot, but I still cannot find a command to do that. Is there any way? If you know, please help me.
|
I am currently coding forward and inverse kinematics solvers for a PUMA 560 robot. For the inverse kinematics part I am using the closed-form solution given in this paper. My issue is that my IK solution for a given (x, y, z) does not return the same values produced by my FK. The reason I am doing this is to verify that my code accurately computes the FK and IK.
These are the DH parameters for my robot (in Python, since I was testing my algorithm in the Spyder IDE before implementing it in C++).
DH Parameters
Link lengths
a = [0, 650, 0, 0, 0, 0]
Link offsets
d = [0, 190, 0, 600, 0, 125]
Link twist angle
alpha = [-pi/2, 0, pi/2, -pi/2, pi/2, 0]
So basically I am finding the T transformation matrix for each link, from the base frame {B} to the wrist frame {W}. This is my code:
Function to compute forward kinematics
def forwardK(q):
    # T06 is the location of the wrist frame {W} relative to the base frame {B}
    T01 = genT(q[0], 0,    d[0], 0)
    T12 = genT(q[1], a[0], d[1], alpha[0])
    T23 = genT(q[2], a[1], d[2], alpha[1])
    T34 = genT(q[3], a[2], d[3], alpha[2])
    T45 = genT(q[4], a[3], d[4], alpha[3])
    T56 = genT(q[5], a[4], d[5], alpha[4])
    # Tool frame {T}
    # T67 = genT(0, 0, d[5], 0)
    T03 = matmul(T01, T12, T23)
    T36 = matmul(T34, T45, T56)
    T06 = matmul(T03, T36)
    # T07 = matmul(T06, T67)
    x = T06[0][3]
    y = T06[1][3]
    z = T06[2][3]
    print("X: ", x)
    print("Y: ", y)
    print("Z: ", z, "\n")
    print("T06: ", T06, "\n")
    return T06
The function to compute the T matrix
def genT(theta, a, d, alpha):
    T = array([[cos(theta),             -sin(theta),            0,           a],
               [sin(theta)*cos(alpha),  cos(theta)*cos(alpha),  -sin(alpha), -d*sin(alpha)],
               [sin(theta)*sin(alpha),  cos(theta)*sin(alpha),  cos(alpha),  cos(alpha)*d],
               [0,                      0,                      0,           1]])
    return T
From the T matrix relating the {B} frame to the {W} frame, the position vector of {W}, [x y z], is extracted. The R matrix (orientation) of {W} relative to {B} is obtained with the following piece of code:
T = forwardK([30,-110,-30,0,0,0])
x = T[0][3]
y = T[1][3]
z = T[2][3]
R = T[0:3,0:3]
where T is the transformation matrix relating {W} to {B}. This information is then fed into the invK(x, y, z, R, ARM, ELBOW, WRIST) function to check whether the algorithm returns the same set of angles fed to the forwardK(q1, q2, q3, q4, q5, q6) function.
In invK(x, y, z, R, ARM, ELBOW, WRIST),
ARM, ELBOW and WRIST are orientation specifiers describing the various possible configurations of the manipulator; each of these parameters is either +1 or -1. These values are then used in the closed-form geometrical solution presented by the aforementioned paper.
I did not post the code for invK(x, y, z, R, ARM, ELBOW, WRIST), since it is a direct implementation of the closed-form solution presented in the paper and is also significantly long, hence highly unreadable.
What do you think I am doing wrong? I am quite sure the way I am computing the FK is correct, but I could be wrong. The matrix multiplications in my Python code are correct, since I double-checked them with MATLAB. Any advice is appreciated.
|
I've been thinking about making myself a fancy data glove. Now that I'm looking into it, I notice a lot of DIY projects use these so-called flex sensors based on conductive carbon ink.
I'm not familiar with these sensors, but from what I have learned so far they are more expensive and less accurate than a simple strain gauge. Or are strain gauges just hard to use here because of the length and bend radius of a finger?
So I'm just wondering what the pros and cons of these sensors are when it comes to data gloves.
|
This article discusses how the robots sent to explore the Fukushima reactors have been damaged by radiation that exceeds 650 sieverts per hour:
http://www.theverge.com/2017/2/17/14652274/fukushima-nuclear-robot-power-plant-radiation-decomission-tepco
This is the seventh such robot that has died probing the reactor. I assume the robots are well-shielded....
What exactly is damaging the robots (alpha, beta, gamma particles) and how? Is the damage permanent?
|
I am trying to design a bot that has to measure temperature. I am using an 8051, so the ADC is taking up a lot of I/O pins. I am therefore thinking of switching it out for a thermostat, or putting in another controller and interfacing them (which also has the advantage of letting me add more features). But I wanted to make sure: in general, is it good design to put multiple microcontrollers on the same PCB, and will it cause hardware problems?
|
I am using an LSA-08 self-calibrating sensor for line following. The problem is that my sensor can't distinguish between a faint blue color and white (RGB values 185, 217, 235). Is there any technique for distinguishing them?
|
Inspecting the PWM set-point mode, it seems HSIN2 is the PWM input and HSIN1 is the PWM direction input. Is it possible to reverse this, so that it's more consistent with the step-and-direction mode wiring? I would like HSIN1 as the PWM input and HSIN2 as the PWM direction.
Could the functions of HSIN1 and HSIN2 be selectable in Granity once the setpoint type is chosen?
Most grateful for your help,
Mark
|
I've been working on a robot lamp.
Looking something like this:
So I had a few questions. I've decided to simplify the movements by giving the head a fixed orientation, like you see in the picture. The lamp can move around its own center, but those movements are directly relatable to the X, Y, Z position of the end effector.
If I have understood everything I've read correctly, I'll be able to calculate the other angles, which are now all in the same 2D plane, just by using some geometry, right? I've worked out these formulas already (haven't tested anything yet; a rough sketch of the geometry is shown below).
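For reference, this is roughly the two-link geometry I have worked out (untested; the link lengths L1 and L2 and the target (x, y) in the arm's plane are placeholders for my lamp's segments):

from math import atan2, acos, sqrt

def two_link_ik(x, y, L1, L2, elbow_up=True):
    # Law-of-cosines solution for a planar two-link arm reaching (x, y).
    c2 = (x*x + y*y - L1*L1 - L2*L2) / (2 * L1 * L2)
    c2 = max(-1.0, min(1.0, c2))                       # clamp for numerical safety
    sign = 1 if elbow_up else -1
    theta2 = sign * acos(c2)                           # elbow angle
    k1 = L1 + L2 * c2
    k2 = sign * L2 * sqrt(max(0.0, 1 - c2 * c2))
    theta1 = atan2(y, x) - atan2(k2, k1)               # shoulder angle
    return theta1, theta2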
If I do want the head to also move around in the same plane, the calculations would get A LOT harder and more complicated, and there would be a lot of possible solutions, is that right? Or is there an easy step to get to this point once I have already done it with the fixed orientation? And even if it were manageable, would it be a good addition? Because I want the lamp to look at you, and the generated angles could make it look in a weird direction.
Is writing an algorithm the way to go, or would I be better off just hard-coding movements? What would be your tips if you were to make something like this, regarding movement?
Thanks for reading, and I hope someone can tell me if I'm on the right track here.
|
This is for a high school project that I'm doing. The sensor should be able to detect the movement of a moving inanimate object and convert it into electrical energy, and be fairly cheap to buy.
I have looked into a few sensors, but most of them seem to be geared towards detecting HUMAN movement, e.g. PIR sensors. I am looking for the type of sensor that can detect non-human movement at close range (about 1/2 metres). Any suggestions?
|
I am making a bot with 4 motors that have a stall-current rating of 11.7 A. I was using 4 separate motor controllers, each with a peak current rating of 20 A, to control each motor. But when I powered the motors on my bot, the motor drivers blew within a second.
Was it due to the starting (inrush) current of the motors? If yes, how can I reduce the starting current? If not, what could the other possible problems be, and their corresponding solutions?
|
For the stereo cameras on the market, the two cameras are always mounted side by side with a displacement perpendicular to the cameras' optical axes. I have taken this setup for granted, but one idea came to my mind: is this necessary? If the two cameras are not parallel and have different focal lengths, camera calibration can correct for the difference. So why are the two cameras mounted in parallel? My guess is that this way the two cameras have a large overlapping region. Am I correct?
|
When using Hidden Markov Models in global localization problems, the prediction step requires calculating the probability of the robot's pose given the action (control u, odometry):
$p(x_t \mid x_{t-1}, u_{t-1})$
where $x_{t-1}$ and $u_{t-1}$ are the robot's previous pose and control.
There are different tutorials (this one, for instance) and articles on the web with examples, but most of these examples are for the 1D localization problem, where the probability simply equals 1 if the odometry is perfect.
What if I am considering 2D space?
For example, in the 1D case, for time step T=1 I would compute:
$p(x_t = 2 \mid x_{t-1} = 1, u_t = 1) = 1$
How should I do the computations for 2D case?
|
I am looking for a high-speed USB webcam.
The plan is to rotate the camera at about 20-30 revolutions per second. The problem is this requires a frame rate of about 120 fps (depending on the lens...), and secondly the exposure time needs to be really short for the image not to be blurred (while still having decent quality).
At the moment I am using a 120 fps USB webcam by ELP, but the results are not satisfying (the image is used by a computer-vision algorithm, which needs a more or less sharp image).
Is there any camera available that can achieve the desired results (and is quite small and lightweight)? Money is not our primary concern.
|
I'm trying to detect all the post-its in an image and get them into an ArrayList. I tried many alternatives (removing the background then detecting contours, a Haar cascade classifier, detecting rectangular objects...), but none of them gave me good results; a rough sketch of the contour attempt is shown below.
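For reference, the contour-based attempt looked roughly like this (a sketch; the HSV threshold values and area cutoff are placeholders I tune per image):

import cv2
import numpy as np

img = cv2.imread("postits.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Placeholder threshold for one post-it colour.
mask = cv2.inRange(hsv, np.array([20, 80, 80]), np.array([40, 255, 255]))
# [-2] picks the contour list regardless of the OpenCV version.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

post_its = []
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 500:   # roughly rectangular, not tiny
        post_its.append(cv2.boundingRect(approx))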
Any Idea how to proceed? Any help will be appreciated.
|
I build 3D models using Agisoft and need to record the positions of the ground control points as accurately as possible: latitude, longitude and altitude. What is the best way of doing this?
|
I am planning to build a robot like the iRobot Roomba, so cleaning in a spiral pattern starting from the center is required, like in the image shown:
This code is part of my full code, and it doesn't give me a spiral pattern:
void spiralling() {
  for (int i = 0; i <= 2; i++) {
    digitalWrite(motor1, HIGH);
    digitalWrite(motor2, LOW);
    digitalWrite(motor3, HIGH);
    digitalWrite(motor4, LOW);
    analogWrite(pwm1, 180);
    analogWrite(pwm2, 80);
    delay(300);
    p = 1;
  }
  analogWrite(pwm2, 250);
  delay(150);
}
So my question is: how can I make my bot trace a spiral pattern (what algorithm or logic should I use), given that the only way to change direction is with the two wheels on the sides?
Is there any code that constantly increases the radius of the bot's movement from the center radially outwards?
My robot has an Arduino Uno, an L293D motor driver, two geared motors on either side as shown in the image, and a castor wheel in front:
|
What exactly does a motor driver do? Why do we need an additional power supply for the motors? I'm a hobbyist making a line follower from scratch.
|
I am trying to help my son program his robot using ROBOTC Graphical. He has it doing most of what he wants it to do, but the code gets stuck and won't continue past the arm raise/hold. What it is supposed to do at the point where it gets stuck is:
Raise the arms (to lift an object)
Hold the arms up while reversing
Turn 90 degrees
Reverse (this is as far as he's created, as he realized it isn't working)
Turn 90 degrees
Reverse to the basket and raise the arms again
His code is trying to say: hold the arms up until he creates a new set point.
He has tried both of these code versions.
I know it says (left drive) instead of (arm) in one bit; that was corrected and it still doesn't work correctly.
I was asked in the comments on my other question to post which line it fails at. I believe it fails at 18. The arms stay up, but the bot does not continue the program (reverse). I could not respond in the comments, as I do not have a high enough rating.
|
I am having a hard time understanding how to relate forward kinematics to things one can control in a robot, like PWM and reading encoder values.
For example, how do I relate the encoder values and RPM of the motors to the PWM values of the motors, to make a robot follow a curved path?
|
I'm using Simulink to get joint torques by giving a motion input. I created a very simple CAD model in SolidWorks to learn Simulink. The Simulink model is shown in the figure below.
During the update it shows no error, but when I run it, it shows an error:
The 3D CAD model is shown in the figure below.
Can anybody help me solve this problem?
Thanks.
|
I am struggling to find any clear documentation on how to create MAVLink commands using Python.
I am looking to create an autonomous glider and require some of the basic functions:
Retrieve GPS Data and store into a file
Import GPS data in Python function
Send waypoint lat, long, alt to the autopilot (APM)
I am currently using MAVProxy as my GCS and the ErleBrain 3 as my autopilot hardware, but the aim is to not require a GCS and just have a Python script that automatically adds and removes waypoints based on the received GPS data; a sketch of what I have pieced together so far is below.
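This is roughly what I have pieced together from pymavlink so far (untested; the connection string is an assumption for my setup, and the "current=2" guided go-to trick is something I have only seen in examples, so I am not sure it applies here):

from pymavlink import mavutil

# Connect to the autopilot (connection string is a placeholder for my setup).
master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
master.wait_heartbeat()

# 1. Retrieve GPS data and append it to a file.
msg = master.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
with open('gps_log.csv', 'a') as f:
    f.write('{},{},{}\n'.format(msg.lat / 1e7, msg.lon / 1e7, msg.alt / 1000.0))

# 3. Send a single waypoint (lat, lon, alt) to the autopilot.
lat, lon, alt = 51.5, -0.12, 100.0
master.mav.mission_item_send(
    master.target_system, master.target_component,
    0,                                              # sequence number
    mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
    mavutil.mavlink.MAV_CMD_NAV_WAYPOINT,
    2, 0,                                           # current=2 (guided "go to"?), autocontinue
    0, 0, 0, 0,                                     # params 1-4
    lat, lon, alt)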
|
I bought a little toy helicopter, namely the Revell Control 23982. After flying a few batteries' worth I was wondering whether I could hack it so that my Arduino Uno can control/manipulate the signals from the transmitter to the receiver on board. However, it seems I am having big trouble getting started on the right path.
Can anyone spot out my mistakes?
Hardware hack
My first attempt was to bridge the potentiometers in the transmitter with the Arduino; this did not work at all. I think this is because the Arduino outputs PWM and not true DC. Also, I do not understand how the potentiometer (shown in the picture) works, where two of the three terminals are connected to each other.
NOTE: The soldered cable is a result of some unsuccessful hardware soldering.
EDIT
Here is further information according to @combos comment.
NOTE: The diagram is missing a connection between the line that connects all the potentiometers and a 3 V pin on the IC. Sorry for that.
This is a simple excerpt of the actual PCB; however, it should be the part that matters for my question. It is true that two terminals of each potentiometer are connected, and all the potentiometers are additionally soldered to ground via their housings. I was not able to find any information about the IC labelled "???". It contains the transmitter, that much is clear.
A test with my multimeter showed a maximum of three volts on the single (wiper) line of each potentiometer, and the line connecting all of them sits at a constant 3 V.
Software hack
My second attempt was to reverse-engineer the 2.4 GHz transmission via an NRF24L01 module, as some other people on the internet have been successful doing this with other toys. I tried to scan the frequency bands, but with no success. I have no clue which transmitter module is being used on the board.
TL;DR
My questions:
Is it even possible to achieve what I want?
If yes, what do I need to do?
If not, what did others do to achieve this kind of behaviour?
Which circuit do I need to convert the digital PWM into an analog signal?
If you need any additional information please let me know!
|
I've been looking into path planning for a non-holonomic robot with 3 DOF in a 2D plane and recently learned about Voronoi diagrams, but I cannot find any open-source planning libraries that use this technique. Are there any open-source implementations using Voronoi diagrams (preferably in C++)? If yes, where? If not, why not?
|
Where can I buy an actuator that consists of a motor that turns a screw that moves a rod?
|
I'm studying optimal control for the inverted pendulum in the following figure.
The state and the output of the system are defined as
$$x=\begin{bmatrix}r & \theta & \dot{r} & \dot{\theta} \end{bmatrix}^T, \quad y=\begin{bmatrix}r & \theta-\alpha & \dot{r} \end{bmatrix}^T$$
so the continuous state space model at the upright equilibrium is
\begin{cases}
\dot{x}(t)&=Ax(t)+B_u u(t)+B_\alpha \alpha(t)+B_\tau \tau(t)+B_{F_s}F_s\text{sign}(\dot{r}(t)) \\
y(t)&=Cx(t)+D_\alpha\alpha(t)
\end{cases}
where the disturbance inclination $\alpha$ is assumed constant, $\tau$ is a torque disturbance, $B_{F_s}F_s\text{sign}(\dot{r}(t))$ is a Coulomb friction term and $u$ is the input force applied to the cart. The numerical values of the system matrices are
$$A=\begin{bmatrix}0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 2.8040 & -5.2658 & 0 \\ 0 & 18.5885 & -19.6959 & 0 \end{bmatrix}, \quad B_u=\begin{bmatrix}0 \\ 0 \\ 3.7629 \\ 5.5257 \end{bmatrix}, \quad B_\alpha=\begin{bmatrix}0 \\ 0 \\ -12.6447 \\ 18.5885 \end{bmatrix}, \quad B_\tau=\begin{bmatrix}0 \\ 0 \\ 0.5650 \\ 3.7629 \end{bmatrix}, \quad B_{F_s}=\begin{bmatrix}0 \\ 0 \\ -1.1187 \\ -0.5650\end{bmatrix} $$
$$C=\begin{bmatrix} I_3 & 0_{3\times1}\end{bmatrix} ,\qquad D_\alpha=\begin{bmatrix}0 & -1 & 0\end{bmatrix}^T$$
where $I_n$ is the identity matrix $n\times n$ and $0_{m\times n}$ is the null matrix $m\times n$.
For a digital implementation a suitable sampling of the previous system is required, so the corresponding discrete state-space model has the form
\begin{cases}
x_{k+1}&=\Phi x_k+\Gamma_u u_k+\Gamma_\alpha \alpha_k+\Gamma_\tau \tau_k+N_k(\dot{r}_k) \\
y_k&=Cx_k+D_\alpha\alpha_k
\end{cases}
Now here is my problem. To counteract the effects of constant rail inclinations, a discrete-time integrator is appended to the model $(\Phi,\Gamma_u)$. It is taken in the simple form
\begin{equation}\tag{1} w_{k+1}=w_k+r_k\end{equation}
so the extended state of the system become
$$x^{\text{e}}=\begin{bmatrix}x & w\end{bmatrix}^T$$
and the state-feedback control is designed by minimizing the cost function
$$J(u)=\sum_{k=0}^\infty (x_k^\text{e})^TQx_k^\text{e}+Ru_k^2$$
where the cost matrix $Q$ must be positive semi-definite and the scalar cost $R$ must be strictly positive.
I can't understand the function of the integrator $(1)$. The rail inclination affects both the input and the output of the system, so the integral action counteracts only the effect on the output. Moreover, the state matrix $A$ is singular, so the system already has at least one integral action by itself, and no further integrator should need to be appended.
Maybe the integrator $(1)$ is considered in order to drive the position of the cart to the start of the rail, i.e. $r=0$.
Thanks in advance for any suggestions.
|
I have a robot that moves around autonomously. Very often I want to push the robot several feet to start a test over again, and sometimes I want to wheel the robot outside to my car or to a nearby field.
Pushing my robot is pretty tough. It's 60 lbs (27 kg), and when I push it with the motors engaged it's very difficult. I want a way to decouple the two back wheels so I can haul it around like a suitcase. I've seen similar posts where people suggested just leaving them coupled and recharging the battery, but I don't really care to exert that much of my own energy just to recharge the battery; I just want to make transporting easier.
I also want to be able to engage and disengage without the robot being on, which means electromagnetic clutches are out. I'd like the solution to be under $300 and fairly easy to machine. I have access to a lathe, mill, welder, etc. and can machine some complex stuff, but I don't want it to be like making a custom gearbox.
Does anyone have any suggestions? I'm using standard dolly wheels from Northern Tool (http://www.northerntool.com/images/product/2000x2000/425/42570_2000x2000.jpg). I've considered a quick-release pin, but given the geometry of the dolly wheel that would be tough; the pin would hit the rim as you pull it out.
Thanks in advance for any suggestions.
|
I am trying to calculate the angular velocity of the end effector of a two-link robot arm. Can anyone help me find it?
If $q_1$, $q_2$ are the joint angular positions and $\dot{q_1}$, $\dot{q_2}$ are the joint angular velocities, and $\omega$ is the angular velocity of the end effector, then I use $\omega=\dot{q_1}+\dot{q_2}$.
Is that correct?
|
I've recently been trying to reach a remote ROS node on a mobile robot through Wi-Fi, as shown in the picture below.
I've run the roscore command on both the laptop and the mobile robot, and when I run roscore I get the following warning message on my laptop:
WARNING: ROS_MASTER_URI [http://192.168.7.2:11311] host is not set to this machine
auto starting new master
process[master]: started with pid[2719]
ROS_MASTER_URI=http://192.168.7.2:11311/
setting /run_id to 4ef6c0f8-bfdf-11d3-a450-4e699f75a6e7
process[rosout-1]:started with pid[2732]
started core service[/rosout]
But, when I run rosparam list command on my laptop, I've got the following results:
root@duminda-laptop:~#rosparam list
/rosdistro
/roslaunch/uris/host_192_168_7_2_35078
/rosversion
/run_id
root@duminda-laptop:~#rosparam get /rosdistro
hydro
root@duminda-laptop:~#rosparam get /roslaunch/uris/host_192_168_7_2_35078
http://192.168.7.2:35078
root@duminda-laptop:~#rosparam get /run_id
4ef6c0f8-bfdf-11d3-a450-4e699f75a6e7
These results suggest that the laptop and the mobile robot are connected correctly.
Is my setup for receiving the mobile robot's data on my laptop OK?
Why is there a warning message like the one above when starting roscore?
|
I am doing a robotic arm simulation in MATLAB Simulink but I get an error. How can I resolve this?
My model is given below.
Other details:
1. Image of the coordinates with world coordinates.
2. 6-DOF joint settings.
|
I have implemented a stable monocular SLAM tracking system based on ORB-SLAM2. I am trying to find a way to add real-world distance/scale to it.
At the same time, I am running a (less stable) stereo slam system.
What i would like to do is:
Using the correct scale of the stereo data, figure out the scale factor needed to correct the mono output to real-world scale, and multiply the output by this factor.
I do not want to adjust the SLAM algorithm. I need to take the two output streams (currently an Eigen::Vector3f for each stream) and find a procedure that I can run for 30 seconds or so to find the scale difference, and therefore the factor that I need to multiply the mono output by.
What I currently do is:
//Start the system.
//Store both initial positions as Eigen::Vector3f.
Eigen::Vector3f initialWrong;   // monocular (unscaled) position
Eigen::Vector3f initialTruth;   // stereo (metric) position
//Sleep the thread for a few seconds, and move the sensors.
//Get the distance travelled in each stream (norm of the displacement).
float stopWrong = (CurrentWrong - initialWrong).norm();
float stopTruth = (CurrentTruth - initialTruth).norm();
scaleFactor = stopTruth / stopWrong;
This works, within a margin of error, but it is very manual. I am looking for a more automatic / iterative way to do this that minimizes the error as much as possible.
How can I use ground truth to scale monocular SLAM in real time?
Any thought or tips here would be greatly appreciated.
Thanks.
|
I am currently working with a combination of Tamiya and T-connectors. I have not previously had issues with either, but the T-connectors I'm using currently do not seem to be maintaining a connection. If I torque the T-connectors in a particular way they will start working, but if I leave them "floating" they disengage. My first thought was that the problem was I had 2 different brands of T connectors, but it turns out, after switching all to one brand, that some of them do not work properly.
My first question is whether there are known problems with T connectors, or whether I probably just got a bad batch.
My second question is whether there are better interconnects out there than either Tamiya or T connectors (specifically for wiring a motor to its driver and the battery to the driver).
Let me be clear, I am NOT looking for opinions. I obviously realize this could be based on personal preference, but I am specifically asking if there are engineering principles at play in the choice being made. If it really just comes down to preference, then I am only interested in the first question. I also realize the possibility of avoiding interconnects, but let's assume that's not an option.
|
I would like to get a rough estimate of the depth accuracy / uncertainty of a stereo camera system. For this I would like to use the basic formula in the attached image, $\Delta Z \approx \frac{Z^2}{b \cdot f}\Delta D$. What is still unclear to me is what a reasonable choice for the disparity error $\Delta D$ is. Unfortunately, for example, this answer (under "Resolution") only very briefly describes the assumption made about the size of the disparity error.
Is a reasonable assumption just the width of one pixel? Why or why not?
|
I'm trying to understand the dynamic equations for the cart pole system according to this control tutorial from the University of Michigan, where the angular acceleration equation can be written as
$(I+ml^2)\ddot{\theta}+mglsin(\theta)+ml\ddot{x}cos(\theta)=0$
I'm having trouble understanding what is represented separately by the terms $ml^2\ddot{\theta}$ and $I\ddot{\theta}$. Both seem to represent a moment exerted due to the mass of the pole, so they almost seem to represent exactly the same thing. My intuition tells me that I really only need the term $ml^2\ddot{\theta}$, but the fact that there is an additional $I\ddot{\theta}$ term suggests that my intuition is not accounting for some other effect present in the system.
Any help to clarify and distinguish the meaning behind these two terms would be greatly appreciated!
|
I want to know which motors or smart servos are used in the Boston Dynamics Spot robot. Where can I read their specifications, and is it possible to buy the motors?
|
I am making a hexapod project with my friend, and there is an issue with its walking style. Generally a tripod gait is performed in two steps: first move legs 1, 3, 5, then legs 2, 4, 6, which corresponds to moving one leg from one side and two legs from the other side. But our legs are arranged in a circle instead of side by side, so we are wondering whether the robot would rotate instead of moving forward when the tripod gait is implemented.
Our robot's legs are like this one: https://www.youtube.com/watch?v=iPdRUbJcNzM
What kind of walking algorithm is suitable for that kind of robot?
Thanks in advance.
|
I am planning to use this motor: http://www.adrirobot.it/robot_kit/feetech_FT-DC-002/TGP01D-A130_TT-Motor.pdf Model: TGP01DA130 12215-48.
But I have some questions: how much slower will a robot be if it has one motor and one 180-degree servo instead of just two motors? And which is the better option?
|
The equations of motion for a cart-pole (inverted pendulum) system are given as
$$(I+ml^2)\ddot{\theta}+mglsin(\theta)+ml\ddot{x}cos(\theta)=0$$
$$(M+m)\ddot{x}+ml\ddot{\theta}cos(\theta)-ml\dot{\theta}^2sin(\theta)=F$$
However, I want to model a two-wheeled, Segway-like robot with motion constrained to only forward and backward movement (effectively restricting motion to 1D). I initially thought that I could model such a constrained Segway robot as a cart-pole system with a massless cart (M = 0). Is this the right approach to model the dynamics of a 1D Segway robot, or is there a better model for the dynamics of such a robot in 1D?
|
I want to implement an FPGA-based real-time stereo vision system for long-range (up to 100 m) depth estimation. I also want to use two IP cameras in the system. I have calculated the depth error using the equation below with these parameters:
baseline = 1 m (at the expense of an increased frontal blind zone), z = 100 m, f = 4 mm, pixel_size = 4 um, disparity_accuracy = 0.25 px
depth_error = dz = (z^2 / (b * f)) * dp = 2.5 m
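For reference, the same calculation as a tiny script (just the formula above, with the disparity accuracy converted to metres via the pixel size):

def depth_error(z, baseline, focal_length, pixel_size, disparity_accuracy_px):
    # dz = z^2 / (b * f) * dp, with dp the disparity error in metres.
    dp = disparity_accuracy_px * pixel_size
    return (z ** 2) / (baseline * focal_length) * dp

# 100 m range, 1 m baseline, 4 mm focal length, 4 um pixels, 0.25 px accuracy -> 2.5 m
print(depth_error(100.0, 1.0, 0.004, 4e-6, 0.25))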
My questions are: are these calculations and the equation above valid for long-range stereo vision?
Are there any other considerations for designing this long-range stereo vision system that may not be important in typical (short-range) stereo vision systems?
I will be grateful for any information you can provide.
|
The dynamics equation is $\tau = M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q)$.
Can somebody provide me with the $M(q)$, $C(q,\dot{q})$ and $G(q)$ matrices of a 3R manipulator with link masses, lengths and rotational inertias $m_i$, $l_i$ and $I_i$ respectively?
|
How can I calculate the rigid-body transformation [R|t] between two 3d triangles, but restricted to a given N degrees of freedom (for N = 1..6) ?
I know for N=6 I can get a least-squares solution via SVD of a certain matrix, but how can I integrate further constraints (fewer DOF) into the system?
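For reference, the unconstrained N = 6 case I mentioned, as a sketch (the standard SVD/Kabsch-style least-squares fit; P and Q hold the three source and target vertices as columns):

import numpy as np

def fit_rigid_6dof(P, Q):
    # least-squares R, t such that Q ~= R @ P + t; P, Q are 3x3 (one vertex per column)
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t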
|
I am coding a little robot controlled by a raspberry pi zero.
Disclaimer: This is a more general development question, and only indirectly related to raspberry pi.
Background: I don't want to test everything on the robot directly; this is too time consuming. Therefore I am trying to implement mock interfaces. For example, I am using a set of fake sensor readings and feeding them to the class responsible for sensing and driving. Running this code on the command line should, for example, print out "driving left" instead of starting the right motor, and so on.
Question: How do I mock a device so that I can use the main code without modification on my robot?
Code examples
I have currently the following file structure:
main.py: running the loop so the robot is doing something and currently holding mock sensor data.
acc.py: AutonomousCruiseControl class: taking care of the process of measuring and steering
hcsr04.py: a class mocking the sonar distance reader device
servo.py: currently doing nothing. Should contain a mock device writing to the console which motor is turned in which direction
In the following code examples, you can see e.g. in the drive() method, that I have code purely for testing mixed with the code destined for production.
main.py
#!/usr/bin/env python3
# coding=UTF-8
from navigation import AutonomousCruiseControl
def main():
#self.objectColisionRange = [10,15,20,15,10]
readings = [
[11, 16, 21, 16, 11], # path free
[100, 100, 19, 100, 100], # front blocked
[9, 100,20, 100, 100], # center & left blocked
[100, 100, 10, 100, 10], # center & right blocked
[10, 100, 10, 100, 10], # center, left & right blocked
[100, 10, 100, 100, 100], # center free, left front object to avoid
[100, 100, 100, 10, 100], # center free, right front object to avoid
[100, 10, 100, 10, 100] # center free, but left and right (45°) objects too narrow
]
ACC = AutonomousCruiseControl()
# range() excludes its second argument, so this iterates i = 0..7
for i in range(0,8):
ACC.front_sonar.readings = readings[i]
print("{}. reading: {}".format(i, ACC.read_front_sonar()))
print("{}. FOS: {}".format(i, ACC.get_front_object_status()))
ACC.drive()
if __name__ == '__main__':
main()
acc.py
#!/usr/bin/env python3
# coding=UTF-8
from navigation.hcsr04 import Hcsr04
from navigation.servo import Servo
from random import randint
class AutonomousCruiseControl:
"""helping to steer a robot trough obstacles"""
def __init__(self, object_colision_range=None, front_sonar_angles=None):
# initiate sensors
self.front_sonar = Hcsr04()
self.front_sonar_servo = Servo()
self.collision_distance = 20
if object_colision_range is None:
self.object_colision_range = [10, 15, 20, 15, 10]
else:
self.object_colision_range = object_colision_range
if front_sonar_angles is None:
self.front_sonar_angles = [0, 45, 90, 135, 180]
else:
self.front_sonar_angles = front_sonar_angles
self.front_sonar_distances = [200, 200, 200, 200, 200]
self.read_front_sonar()
print(self.front_sonar_distances)
self.front_object_status = [0, 0, 0, 0, 0]
def read_front_sonar(self):
""" Setting the angle and calling the read_distance method. """
# TODO implement reading from right to left and left to right, so both servo swings can be used
# go through all angles and read distance put it into front_sonar_distances
# initiate only once for test cases, to not mix values
for i in range(0, len(self.front_sonar_angles)):
# set servo to angle
angle = self.front_sonar_angles[i]
# read distance
distance = self.read_distance(self.front_sonar, angle)
# print("distance is {}\n".format(distance))
self.front_sonar_distances[i] = distance
return self.front_sonar_distances
def get_front_object_status(self):
""" Creating a simplified form of the distance measures. Currently if a reading is equal or below the
reading angle specific collision range, the status is set to 1. Zero means, no obstacle. """
for i in range(0, len(self.front_sonar_distances)):
if self.front_sonar_distances[i] <= self.object_colision_range[i]:
self.front_object_status[i] = 1
else:
self.front_object_status[i] = 0
return self.front_object_status
def read_distance(self, sensor, angle):
""" Remove angle later from this method and from hcsr04. """
if sensor is self.front_sonar:
# TODO: implement servo, set angle
# print("Set servo to angle {} °".format(angle))
# angle/index won't be necessary after the real servo routine is working
# this just helps the dry run tests
index = self.front_sonar_angles.index(angle)
return self.front_sonar.get_distance(index)
def drive(self):
""" Contolling the motors with obstacle avoidance switch.
Light search is not yet implemented. The robot should aimlessly
cruise around without hitting detectable obstacles. """
stat = self.front_object_status
dist = self.front_sonar_distances
if stat[1] + stat[2] + stat[3] == 0:
# road is free
print("Road free: L(eft) & R(ight) forward")
elif stat[1] == 1:
# left blocked
print("Obstacle left: L(eft) faster & R(ight) slower forward")
elif stat[3] == 1:
# right blocked
print("Obstacle right: L(eft) slower & R(ight) faster forward")
elif stat[2] == 1 or (stat[1] + stat[3] == 2):
# front blocked
print("obstacle front: stop all")
dist_sum_left = dist[0] + dist[1]
dist_sum_right = dist[3] + dist[4]
if dist_sum_left == dist_sum_right:
# both sides are free and distance the same
# choose randomly left and right 0 is left and 1 is right
rand = randint(0, 1)
if rand == 0: # left
print("Random turn left, L backw, R forw")
else: # right
print("Random turn right, R backw, L forw")
elif dist_sum_left > dist_sum_right:
# left has more leeway
print("Left leeway, turn left")
elif dist_sum_left < dist_sum_right:
# right has more leeway
print("Right leeway, turn right")
hcsr04.py
#!/usr/bin/env python3
# coding=UTF-8
# Test method
class Hcsr04:
"""This just fakes the reading from the sonar; it needs the index of the servo angle the reading should come from."""
def __init__(self):
self.readings = [88, 88, 88, 88, 88]
def get_distance(self, index):
return self.readings[index]
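For completeness, a minimal sketch (not part of the code above) of one direction I am considering: pass the devices in from outside, so the fake Hcsr04 above and a real driver are interchangeable as long as both expose get_distance(). RealHcsr04 and RealServo below are hypothetical hardware drivers with that same interface.

class AutonomousCruiseControl:
    def __init__(self, sonar, servo):
        # the caller decides whether these are real drivers or fakes
        self.front_sonar = sonar
        self.front_sonar_servo = servo

# on the desktop, for dry runs:
# acc = AutonomousCruiseControl(Hcsr04(), Servo())
# on the robot:
# acc = AutonomousCruiseControl(RealHcsr04(), RealServo())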
|
I am currently in the process of implementing a Fanuc robotic arm at our company and have just received the pneumatic gripper end effector to go on the end of the arm (which we won't have for about a week).
The end effector has come with both NPN and PNP sensors which attach into machined slots in the side of the gripper.
I have very little experience or understanding of electronics, so my question is really: what is the purpose of these sensors when used in conjunction with an end effector? Do they enable and disable the gripper, or is that done by controlling the air supply?
Many thanks for any help!
|
If I can control a small hobby servo motor, can I control a high-torque servo motor in the same way? I want to make a robot which needs high-torque, high-speed motors, but before buying industrial servo motors I first want to test the kinematics of my robot with small, cheap servos. I will be using the ROS-OROCOS toolchain to control the robot and to develop an efficient motion planning algorithm. The reason I can't go for expensive servo motors right now is that I first want to verify that the robot arm moves as expected. Although I think it is possible, I want to be sure.
|
I am currently thinking about making a robot that will autonomously drive around a place that contains quite a few glass walls. When mapping the area I would need to be able to see the glass. For this reason I need a sensor that can see the glass rather than see through it. What kind of sensor would be best for me? I need it to have a maximum range of about 2-10 meters and a minimum range of about 0.25 meters (preferably as small as possible). I was thinking about maybe using ultrasonic, but I was told that a laser-based sensor would probably be best. I could, however, only find industrial-grade laser sensors that can see glass/transparent objects.
|
I'm building a kite flying robot, in which I've already got some sensors built in. I now also want to measure the pulling force on the rope which is attached to the kite. So I thought of attaching a hanging digital scale to the rope which can then measure the pulling force in kilograms. I of course need to read out the data from the scale using GPIO pins or USB.
Furthermore, I also have a raspberry pi installed within the kite and I want to measure some pull on ropes in the air. So if possible, I also need very small/light scales from which I can read out the data. It would be great if the scales can measure up to about 50kg, but with some pulleys about 20kg would be fine as well.
Unfortunately I can't find any digital hanging scale with a possibility to read out the data and which is reasonably light. There are some simple USB powered hanging scales, but I think they just use USB for charging, not for reading out data. And I also found this one, but that's a bit overkill (too heavy and too expensive).
So question #1: Does anybody know where I can get a simple existing hanging scale from which I can read out data?
If needed I can also build something, but I just wouldn't know where to start. I did find this page on Alibaba, where they show the contents of the scale they offer:
So as far as I understand I need the component which I highlighted. But I have no idea what that component is called (what do I search for?) and whether it is actually doable to read it out from a Raspberry Pi.
Question #2: does anybody know what the highlighted component is called, where I could possibly get it and if I can read it out from a Raspberry Pi?
In conclusion; can anybody point me in the right direction?
|
I'm adapting a KF orientation filter that represents the orientation as a quaternion and uses a 3x3 covariance matrix. Would someone know what the 3x3 covariance represents in the case of a quaternion, and how that representation relates to Euler angles?
|
I want to make a matlab (simulink) control model for the system in the image below.
The original PDF is only accessible if logged in to Carleton's Learning Management System.
How do I get the dynamics of the system with given details in the image?
|
I have a Motoman robot for use in a pick and place application. It has a DX100 controller with an Ethernet interface which could be used to control a slave device using the Modbus TCP protocol.
The DX100 controller also supports Ethernet/IP and DeviceNet.
I know Modbus can be quite complex for first timers and I have little experience when it comes to programming these devices.
I would like to know, if someone here has ever worked with this controller, which communication protocol they used and why.
|
I was reading an encoder value on an Arduino Uno, but the output is not coming out properly. It looks like this:
I am using this Arduino code to read the encoder:
/* Rotary encoder read example */
#define ENC_A 14
#define ENC_B 15
#define ENC_PORT PINC
void setup()
{
/* Setup encoder pins as inputs */
pinMode(ENC_A, INPUT);
digitalWrite(ENC_A, HIGH);
pinMode(ENC_B, INPUT);
digitalWrite(ENC_B, HIGH);
Serial.begin (115200);
Serial.println("Start");
}
void loop()
{
static uint8_t counter = 0; //this variable will be changed by encoder input
int8_t tmpdata;
/**/
tmpdata = read_encoder();
if( tmpdata ) {
Serial.print("Counter value: ");
Serial.print(counter, DEC);
counter += tmpdata;
}
}
/* returns change in encoder state (-1,0,1) */
int8_t read_encoder()
{
static int8_t enc_states[] = {0,-1,1,0,1,0,0,-1,-1,0,0,1,0,1,-1,0};
static uint8_t old_AB = 0;
/**/
old_AB <<= 2; //remember previous state
old_AB |= ( ENC_PORT & 0x03 ); //add current state
return ( enc_states[( old_AB & 0x0f )]);
}
|
I'm working through the Inverse Kinematic example for the Unimation PUMA 560 from Introduction to Robotics by Craig. In it he specifies the IK equations like so:
In my software program I have three sliders on the screen that will give me the rotation of the end point in x, y, z like so (this is in Unity):
Each one of these sliders controls a float variable in the code (C#) and I can read these into my script (using Unity 5). I am trying to replicate the inverse kinematics solution for this PUMA robot inside Unity, so that for a given position and rotation of the end effector the link rotations will update accordingly. I have already written out the IK equations that Craig specified in the example to calculate theta(i), but how do I "read" the slider values and "input" them into these equations? If I am not making any sense I apologize; I have been chipping away at this for some time and have hit a mental blank wall. Any advice appreciated.
Edit: So in my near-delirious state I have not posited my question properly. So far, these are the equations I have written so far in code:
public class PUMA_IK : MonoBehaviour {
GameObject J1, J2, J3, J4, J5, J6;
public Vector3 J2J3_diff, J3J4_diff;
public Slider px_Slider;
public Slider py_Slider;
public Slider pz_Slider;
public Slider rx_Slider;
public Slider ry_Slider;
public Slider rz_Slider;
public float Posx, Posy, Posz, Rotx, Roty, Rotz;
float a1, a2, a3, a4, a5, a6; //Joint twist
float r1, r2, r3, r4, r5, r6; //Mutual perpendicular length
float d1, d2, d3, d4, d5, d6; //Link offset
public float t1, t2, t23, t3, t4, t5, t6; //Joint angle of rotation
public float J1Rot, J2Rot, J3Rot, J4Rot, J5Rot, J6Rot;
float r11, r21, r31, r12, r22, r32, r13, r23, r33, c23, s23, Px, Py, Pz, phi, rho, K;
int pose; //1 - left hand, 2 = right hand
// Use this for initialization
void Start ()
{
pose = 1;
J1 = GameObject.FindGameObjectWithTag("J1");
J2 = GameObject.FindGameObjectWithTag("J2");
J3 = GameObject.FindGameObjectWithTag("J3");
J4 = GameObject.FindGameObjectWithTag("J4");
J5 = GameObject.FindGameObjectWithTag("J5");
J6 = GameObject.FindGameObjectWithTag("J6");
J2J3_diff = J3.transform.position - J2.transform.position;
J3J4_diff = J4.transform.position - J3.transform.position;
//Init modified DH parameters
//Joint twist
a1 = 0;
a2 = -90;
a3 = 0;
a4 = -90;
a5 = 90;
a6 = -90;
//Link length
r1 = 0;
r2 = Mathf.Abs(J2J3_diff.x);
r3 = Mathf.Abs(J3J4_diff.x);
r4 = 0;
r5 = 0;
r6 = 0;
//Link offset
d1 = 0;
d2 = 0;
d3 = Mathf.Abs(J2J3_diff.z);
d4 = Vector3.Distance(J4.transform.position, J3.transform.position);
d5 = 0;
d6 = 0;
}
void Update ()
{
Posx = px_Slider.value;
Posy = py_Slider.value;
Posz = pz_Slider.value;
Rotx = rx_Slider.value;
Roty = ry_Slider.value;
Rotz = rz_Slider.value;
Px = Posx;
Py = Posy;
Pz = Posz;
c23 = ((cos(t2)*cos(t3)) - (sin(t2)*sin(t3)));
s23 = ((cos(t2)*sin(t3)) + (sin(t2)*cos(t3)));
rho = Mathf.Sqrt(Mathf.Pow(Px, 2) + Mathf.Pow(Py, 2));
phi = Mathf.Atan2(Py, Px);
if (pose == 1)
{
t1 = Mathf.Atan2(Py, Px) - Mathf.Atan2(d3, Mathf.Sqrt(Mathf.Pow(Px, 2) + Mathf.Pow(Py, 2) - Mathf.Pow(d3, 2)));
}
if (pose == 2)
{
t1 = Mathf.Atan2(Py, Px) - Mathf.Atan2(d3, -Mathf.Sqrt(Mathf.Pow(Px, 2) + Mathf.Pow(Py, 2) - Mathf.Pow(d3, 2)));
}
K = (Mathf.Pow(Px, 2)+ Mathf.Pow(Py, 2) + Mathf.Pow(Px, 2) - Mathf.Pow(a2, 2) - Mathf.Pow(a3, 2) - Mathf.Pow(d3, 2) - Mathf.Pow(d4, 2)) / (2 * a2);
if (pose == 1)
{
t3 = Mathf.Atan2(a3, d4) - Mathf.Atan2(K, Mathf.Sqrt(Mathf.Pow(a2, 2) + Mathf.Pow(d4, 2) - Mathf.Pow(K, 2)));
}
if (pose == 2)
{
t3 = Mathf.Atan2(a3, d4) - Mathf.Atan2(K, -Mathf.Sqrt(Mathf.Pow(a2, 2) + Mathf.Pow(d4, 2) - Mathf.Pow(K, 2)));
}
t23 = Mathf.Atan2(((-a3 - (a2 * cos(t3))) * Pz) - ((cos(t1) * Px) + (sin(t1) * Py)) * (d4 - (a2 * sin(t3))), ((((a2 * sin(t3)) - a4) * Pz) - ((a3 + (a2 * cos(t3))) * ((cos(t1) * Px) + (sin(t1) * Py)))));
t2 = t23 - t3;
if (sin(t5) != 0) //Joint 5 is at zero i.e. pointing straight out
{
t4 = Mathf.Atan2((-r13 * sin(t1)) + (r23 * cos(t1)), (-r13 * cos(t1) * c23) + (r33 * s23));
}
float t4_detection_window = 0.00001f;
if ((((-a3 - (a2 * cos(t3))) * Pz) - ((cos(t1) * Px) + (sin(t1) * Py)) < t4_detection_window) && (((-r13 * cos(t1) * c23) + (r33 * s23)) < t4_detection_window))
{
t4 = J4Rot;
}
float t5_s5, t5_c5; //Eqn 4.79
t5_s5 = -((r13 * ((cos(t1) * c23 * cos(t4)) + (sin(t1) * sin(t4)))) + (r23 * ((sin(t1) * c23 * cos(t4)) - (cos(t1) * sin(t4)))) - (r33 * (s23 * cos(t4))));
t5_c5 = (r13 * (-cos(t1) * s23)) + (r23 * (-sin(t1) * s23)) + (r33 * -c23);
t5 = Mathf.Atan2(t5_s5, t5_c5);
float t5_s6, t5_c6; //Eqn 4.82
t5_s6 = ((-r11 * ((cos(t1) * c23 * sin(t4)) - (sin(t1) * cos(t4)))) - (r21 * ((sin(t1) * c23 * sin(t4)) + (cos(t1) * cos(t4)))) + (r31 * (s23 * sin(t4))));
t5_c6 = (r11 * ((((cos(t1) * c23 * cos(t4)) + (sin(t1) * sin(t4))) * cos(t5)) - (cos(t1) * s23 * sin(t5)))) + (r21 * ((((sin(t1) * c23 * cos(t4)) + (cos(t1) * sin(t4))) * cos(t5)) - (sin(t1) * s23 * sin(t5)))) - (r31 * ((s23 * cos(t4) * cos(t5)) + (c23 * sin(t5))));
t6 = Mathf.Atan2(t5_s6, t5_c6);
//Update current joint angle for display
J1Rot = J1.transform.localRotation.eulerAngles.z;
J2Rot = J2.transform.localRotation.eulerAngles.y;
J3Rot = J3.transform.localRotation.eulerAngles.y;
J4Rot = J4.transform.localRotation.eulerAngles.z;
J5Rot = J5.transform.localRotation.eulerAngles.y;
J6Rot = J6.transform.localRotation.eulerAngles.z;
}
void p(object o)
{
Debug.Log(o);
}
float sin(float angle)
{
return Mathf.Rad2Deg * Mathf.Sin(angle);
}
float cos(float angle)
{
return Mathf.Rad2Deg * Mathf.Cos(angle);
}
}
The issue is not with the mathematics of what is going on per se; I am just confused about how I interface the three values of the X, Y, and Z rotation from the sliders (which represent the desired orientation) with these equations. For the translation component it is easy: the slider values are simply equal to Px, Py and Pz in the IK equation set. But in his equations he references r11, r23, etc., which are the orientation components. I am unsure how to replace these values (r11, r12, etc.) with the slider values.
Any ideas?
Edit 2: I should also say that these sliders would be for positioning the tool center point. The XYZ sliders will give the translation and the others would give the orientation, relative to the base frame. I hope this all makes sense. The goal is to be able to use these sliders in a similar fashion to how one would jog a real robot in world mode (as opposed to joint mode). I then pass these calculated angle values to the transform.rotation component of each joint in Unity.
So what I am really asking is: given the three numbers that the rotation sliders produce (XRot, YRot and ZRot), how do I plug those three numbers into the IK equations?
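To make the question concrete, here is my current understanding as a sketch (assuming the three sliders are rotations about the base-frame X, Y and Z axes in degrees, composed as Rz·Ry·Rx; that convention is exactly the part I am unsure about): the three slider values would define a rotation matrix whose entries are the r11…r33 terms.

import numpy as np

def sliders_to_rotation(x_rot_deg, y_rot_deg, z_rot_deg):
    ax, ay, az = np.radians([x_rot_deg, y_rot_deg, z_rot_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [ 0,          1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0,           0,          1]])
    R = Rz @ Ry @ Rx      # one possible composition order
    return R              # r11 = R[0,0], r21 = R[1,0], ..., r33 = R[2,2]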
|
The goal of my project is to get real-time location information for the Roomba. The information includes the x position, the y position and the angle (the 12 o'clock direction is 0 degrees, 3 o'clock is 90 degrees, and so on).
I am using C++ to program the roomba.
I want to use "Stream" (Opcode: 148) to get the data, but it doesn't work for me.
This is the part I am getting stuck at:
unsigned char streamCommand[3];
streamCommand[0]=148;
streamCommand[1]=1;
streamCommand[2]=19;
write(robot,&streamCommand,3);
signed char readStream[6];
bool indicator=true;
while(indicator){
string command="";
cout<<"Please input your command"<<endl;
cin>>command;
action(command);
read(robot,&readStream,6);
cout<<"[0]="<<readStream[0]<<endl;
cout<<"[1]="<<readStream[1]<<endl;
cout<<"[2]="<<readStream[2]<<endl;
cout<<"[3]="<<readStream[3]<<endl;
cout<<"[4]="<<readStream[4]<<endl;
cout<<"[5]="<<readStream[5]<<endl;
}
return 0;
}
streamCommand is a read command asking the Roomba to send back data in a stream.
readStream is the array in which I store the retrieved data.
Why is this code not working?
|
I am building a quadcopter using the tutorial The Ultimate PVC Quadcopter.
Whenever I go to lift off at full throttle, the motors spin but the quadcopter doesn't go anywhere. I have checked again and again: the motors are spinning in the right direction and have the right propellers. Does anybody know why my quad won't fly?
I am using a KK2.1.5 flight controller, propellers marked 1045r on the counter-clockwise motors and propellers marked just 1045 on the clockwise motors.
If I switch the propellers, 1045r clockwise and 1045 counter clockwise, then my quadcopter flips over.
The layout of the motors is:
1. CW 2. CCW
4. CCW 3. CW
I am a beginner and this is the first drone I have built/owned.
P.S. My quadcopter weighs 3.2 pounds, is using 980 kv motors, 10" propellers, 20 A ESCs, and a 3S 50C 2200 mAh Li-Po battery.
|
I am currently trying to implement a particle filter on a robot with a view to localizing it on a 2D plane (i.e. to determine x, y and its orientation theta). I am using a LIDAR which gives me (alpha, d), with alpha the angle of measurement and d the distance measured at this angle. For now, I can compute the theoretical measurements for each of my particles. But I am struggling with the evaluation function (the function that will give me the probability (or weight) of a particle given the real measurements).
Suppose my LIDAR gives me 5 values per rotation (0°, 72°, 144°, 216°, 288°); I store one rotation in an array (5000 mm is my maximum value):
Real LIDAR value : [5000, 5000, 350, 5000, 5000]
Particle 1 : [5000, 5000, 5000, 350, 5000]
Particle 2 : [5000, 5000, 5000, 5000, 350]
In this example, I want the function to return a higher probability (or weight) for Particle 1 than for Particle 2 (72° error vs 144°).
For now I am just taking the inverse of the sum of the absolute differences between the two values at the same position in the array (e.g. for Particle 1: 1 / (|5000-5000| + |5000-5000| + |5000-350| + |5000-350| + |5000-5000|)). The problem with this method is that, in this example, Particles 1 and 2 get the same probability.
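For concreteness, a sketch of this current weighting (the small epsilon is only there to avoid division by zero), which reproduces the tie described above:

def weight(real, particle):
    # inverse of the summed absolute beam-by-beam differences
    return 1.0 / (1e-6 + sum(abs(r - p) for r, p in zip(real, particle)))

real = [5000, 5000, 350, 5000, 5000]
p1 = [5000, 5000, 5000, 350, 5000]
p2 = [5000, 5000, 5000, 5000, 350]
print(weight(real, p1), weight(real, p2))   # identical, even though p1 is "closer" in angle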
So, what kind of function should I use to obtain the probability that a particle is the right one with these kinds of measurements?
PS: I am trying to adapt what is in this course: https://classroom.udacity.com/courses/cs373/lessons/48704330/concepts/487500080923# to my problem.
|
I want to implement a real-time stereo vision system for long-range (up to 100 m) depth estimation. I know that there are some general considerations, as described in this SOV post. I have seen some typical cameras, such as the ZED stereo camera, which has a limited range (max. 20 m).
The maximum allowable baseline distance between the cameras is 0.5 m and, regarding field of view, I think that a lens with 8 mm (or 12 or 16 mm) focal length can provide a reasonable FOV. I need the depth resolution at 100 m to be 1% or maybe lower.
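As a back-of-the-envelope check (my own numbers, using the usual dz = (z^2 / (b * f)) * dp relation and assuming, say, a 4 um pixel):

z, b, f = 100.0, 0.5, 0.008          # metres, 8 mm lens case
dz_target = 0.01 * z                 # 1 % depth resolution at 100 m
dp = dz_target * b * f / z ** 2      # required disparity accuracy on the sensor
print(dp)                            # -> 4e-07 m, i.e. about 0.1 px with a 4 um pixel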
Are IP cameras suitable for such applications? If not, can anyone please suggest suitable cameras for use in a long-range stereo vision system?
I will be grateful for any information you can provide.
|
I am learning SimMechanics (MATLAB) to do inverse dynamics for a 4-DOF robotic arm. I have read many examples of inputting motion to revolute joints, for example through PID blocks, slider gains, sine waves, Signal Builder, etc. But these do not fulfil my purpose, as I have to rotate through given angles, within limits, automatically. For example, when I used a sine wave signal, the joint kept rotating until the simulation time was over. So, basically, what I have are the joint angles (from inverse kinematics), and now I want to find the torque required to reach that pose. How can I do this? How do I create a signal that fits this scenario?
Thanks.
|
I have this figure in which motion is given to a revolute joint in SimMechanics.
In it, a constant of 2 is used followed by an integrator. I want to know what the effect of the 2 is here and how the integration happens. What is the actual value of the input here: is it 2 degrees, 2 degrees/second, or something else? Is 2 the upper limit of the motion or the lower limit? What will happen if I replace 2 with 5?
Thanks.
|
I have an accelerometer mounted to an inverted pendulum (i.e. a cart-pole robot) which I'm using to measure the tilt angle from the vertical upright position (+y direction). If the inverted pendulum is held motionless at a fixed angle, the accelerometer essentially detects the gravity direction as a vector $(g_x,g_y)$ and the tilt angle $\theta$ can be determined by
$$\theta=\tan^{-1}\left(\frac{g_x}{\sqrt{g_x^2+g_y^2}}\right).$$
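In code, that static-case estimate is simply (a sketch of the formula above, nothing more):

import math

def tilt_from_accel(g_x, g_y):
    # valid only when the accelerometer measures gravity alone (pendulum at rest)
    return math.atan2(g_x, math.sqrt(g_x ** 2 + g_y ** 2))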
However, if the inverted pendulum is in motion (e.g. if I'm dynamically trying to balance it like a cart-pole system), the pendulum itself is accelerating the accelerometer in a direction which is not necessarily the gravity direction. Acceleration of the cart may also distort the accelerometer's measurement of the gravity direction. In such a case, I don't think the formula above is necessarily appropriate.
Of course, I'm not 100% sure that the swinging motion of the pendulum and the forward-backward motion of the cart are significant enough to distort the angle measurement on the accelerometer. I presume that if I start with initial conditions close to $\theta=0$ on the pendulum, it shouldn't be that significant. Nonetheless, if the pendulum is significantly perturbed by a disturbance force, I think the accelerometer's measurements must be compensated in some way.
How can I compensate for pendulum and cart accelerations when using an accelerometer to detect the tilt angle?
|
void setup() {
pinMode(A0, INPUT);
Serial.begin(9600);
}
void loop() {
if (digitalRead(A0)== HIGH) {
Serial.println("YES");
}
}
It is not registering any input when I supply input via the push button. I have a 5 V supply through a 10 kohm resistor to the push button, the other side goes to A0, and an LED to ground. It registers input when I take out the wire connected to A0 and just leave it unconnected.
|
For a home robotics project I just bought a BerryIMU to connect it to my Raspberry Pi. After hooking it up I ran the provided Python code to read out some values while moving it around.
If I keep the IMU (more or less) level and pointing north, I get the following output:
ACCX Angle 0.60
ACCY Angle 4.58
GRYX Angle -125.14
GYRY Angle 114.15
GYRZ Angle 93.74
CFangleX Angle -0.26
CFangleY Angle 4.45
HEADING 1.02
tiltCompensatedHeading 3.73
kalmanX 0.48
kalmanY 4.43
I am most interested in the compass (in 360 degrees), and how much it tilts right/left and front/back.
As far as I understand, the tiltCompensatedHeading tells me that it points 3.73 degrees right of magnetic north. And I think kalmanX and kalmanY should give me the tilting of the IMU to the left/right (X) and to the front/back (Y), compensated by a Kalman filter for smoothing.
So I played around with it and watched what the numbers did. In the images below I am looking slightly down on it; I hope the annotations on them explain the viewpoint.
From what you see here the X and Y degrees independently behave as I would expect them to. But what I don't understand is why "the other one" is always between 90 and 130. So if I tilt it 90 degrees forward I would expect
X ≈ 0
Y ≈ 90
similarly, if I tilt it 90 degrees backward, I would expect
X ≈ 0
Y ≈ -90
Instead X is around 100 for both of them and I really don't understand why it's not around 0.
Does anybody see the logic in this? What am I missing?
|
I am creating a biped robot with an Arduino and I have the lower part of the body complete: the legs and hip joints. Before I 3D print the top part of the body I wanted to make sure that it would walk. When I started the walking tests I could not get the robot to stand on one foot no matter what I tried. The foot would just raise the body and not lift the other foot. Here is a picture of what is happening:
and this is what I want to happen:
Does it just need more weight on top, or is there a specific sequence of movements that I can do? I have two hip servos that move left and right, a knee servo, and a foot servo that moves side to side.
|
I am stuck on understanding how I can make my robot move along a planned path. For instance, if we have a grid map of an environment and have applied, for example, A* to plan a path, then we have to make the robot move through each cell on that path. Assuming that we know the centre coordinates of the cells, the task is to generate control commands which will lead the robot along the trajectory.
I have a two-wheeled differential drive robot, so the equations of motion are as follows, where b is the distance between the wheels:
$v = \frac{1}{2}(v_{1}+v_{2})\\
\dot\theta=\frac{1}{b}(v_{2}-v_{1})\\
\theta = \frac{\delta t}{b}(v_{2}-v_{1}) + \theta_{0}\\
\dot x = v\cos(\theta)\\
\dot y = v\sin(\theta)\\
x = x_{0} + \frac{b(v_{1}+v_{2})}{2(v_{2}-v_{1})}(\sin(\theta)-\sin(\theta_{0}))\\
y = y_{0} - \frac{b(v_{1}+v_{2})}{2(v_{2}-v_{1})}(\cos(\theta)-\cos(\theta_{0}))
$
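A sketch of these equations as a discrete update (the straight-line case v1 = v2 is handled separately, since the closed form above divides by v2 - v1):

import math

def step(x, y, theta, v1, v2, b, dt):
    if abs(v2 - v1) < 1e-9:                     # straight line
        v = 0.5 * (v1 + v2)
        return x + v * math.cos(theta) * dt, y + v * math.sin(theta) * dt, theta
    theta_new = theta + (v2 - v1) * dt / b
    R = b * (v1 + v2) / (2.0 * (v2 - v1))
    x_new = x + R * (math.sin(theta_new) - math.sin(theta))
    y_new = y - R * (math.cos(theta_new) - math.cos(theta))
    return x_new, y_new, theta_new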
Suppose that we can control the speeds of both wheels, so we are able to set any feasible angular and linear velocity. So what exactly do I have to do with these equations to make the robot move through each cell?
Moreover, there may be additional constraints, like moving with constant linear speed, etc. I understand that I have to solve these equations somehow.
I will appreciate practical advice, names of specific algorithms, etc. Thanks!
|
I'm trying to apply modified DH parameters (from Craig's version) to Puma 560.
As the modified DH convention says,
And the Puma 560 robot with axes and frames assigned is,
As per the above sign convention, the signs of d2 and d3 should be negative. However, for the correct result, it seems that the sign of d2 should be positive.
My question is: should the sign here be positive, and if so, doesn't that contradict the sign convention of the above-mentioned modified DH convention?
|
I am using the ros_arduino_bridge to control a robot, by connecting the Arduino to a main PC with a USB cable. I was thinking of connecting the Arduino to the main PC with a serial cable and then doing real-time control of the Arduino by using the real-time clock.
Is it possible to communicate with the Arduino in hard real time by using the real-time clock and serial connections? I want to use the Arduino board as a bridge between the main computer and the sensors and motors, and I want to control those sensors and motors in real time. All the high-level processing tasks like computer vision and motion planning will run on the main computer, which then sends the commands to the motors through the Arduino. So it is just acting as a bridge.
The reason I want hard real time is so that my robot can control its joints at very high speed and accuracy, so that the robot can do human-level tasks like running, jumping, assembling parts, and balancing its body while moving (walking, running, jumping) at any speed (which requires the joints to be controlled at very high speed and accuracy), etc. I will be using the Gazebo simulator to test most of the tasks.
|
In continuation of my question on modified DH parameters for the Puma 560 posted here: Modified DH Parameters for Puma 560. Further, I used available dimensions for the Puma 560 (FYI: the figure shows dimensions in inches, but all following DH length parameters are converted to mm) and the trial version of the RoboDK simulator to check my result. I assigned the frames as shown in the figure from the first link; the last frame is placed at the flange in the figure below, with z6 pointing downward while keeping x6 in the same direction as x5.
So the DH parameters looked like this:
double alpha[6] = { 0, -90, 0, -90, 90, -90 };
double a[6] = { 0, 0, 431.80, 0, 0, 0 };
double d[6] = { 0, 0, 139.7, 433.07, 0, 55.88 };
I started with joint angles (theta)
double theta[6] = { 0, 0, 0, 0, 0, 0 };
My calculations give me the same position as the Puma 560 simulator, except that the z value is negated. The correct position for joint angles of all zeroes is x = 431.800, y = 139.700, z = 489.580; I get x = 431.800, y = 139.700, z = -489.580.
But if I put double d[6] = { 0, 0, 139.7, -433.07, 0, -55.88 }; then I get the correct value, x = 431.800, y = 139.700, z = 489.580.
I tested with other joint angles, for which I also get correct values with -433.07 and -55.88 as above. So, they must be negative.
My question is: why do I have to take negative values for d to get the correct result? Is this because the values in this case should be assigned with respect to the base frame (everything above the base frame assigned positive and everything below assigned negative, irrespective of convention)?
The base frame is located at the same position as frame 1 (refer to the topmost link). I used the same procedure as described in "Introduction to Robotics" by J.J. Craig.
EDIT: below is the code I am using for the computation. Alpha and theta are converted to radians.
////// Craig Matrix - Modified DH Parameters Convention (4 x 4, stored row-major in mat[0..15])
mat[0] = cos(theta);
mat[1] = -1 * sin(theta);
mat[2] = 0;
mat[3] = a;
mat[4] = sin(theta) * cos(alpha);
mat[5] = cos(theta) * cos(alpha);
mat[6] = -1 * sin(alpha);
mat[7] = -1 * sin(alpha) * d;
mat[8] = sin(theta) * sin(alpha);
mat[9] = cos(theta) * sin(alpha);
mat[10] = cos(alpha);
mat[11] = cos(alpha) * d;
mat[12] = 0;
mat[13] = 0;
mat[14] = 0;
mat[15] = 1;
The above code gives a 4 x 4 DH transformation matrix for each frame as per the modified DH convention (refer to the modified DH parameters on Wikipedia or J.J. Craig's book, where the matrix is defined).
Now, we multiply all the matrices,
starting with the matrix for frame 6 on the right-hand side and multiplying by the preceding joint's matrix from the left-hand side, repeating the same sequence as noted above. This should give us the location of the origin of frame 6 with respect to the base frame of the robot.
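In code, this chaining looks like the following sketch (numpy is used just for illustration; craig_matrix() stands for the 4 x 4 builder shown in the code above, returned as a matrix rather than a flat array):

import numpy as np

def forward_kinematics(alpha, a, d, theta):
    T = np.eye(4)
    for i in range(6):
        T = T @ craig_matrix(alpha[i], a[i], d[i], theta[i])   # T = A1 A2 ... A6
    return T[:3, 3]   # x, y, z of the frame-6 origin in the base frame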
|
I am working on obstacle avoidance and path planning in robotics using an X80SV robot. The obstacle avoidance module of the robot works well. The programming language I have used in this work is C#. Next, I want to visualize the real-time motion of the robot. What should be done? The robot is equipped with ultrasonic sensors, infrared sensors, a camera, human sensors, etc.
|