Do I need to use a separate function in the PID algorithm to make the bot go LEFT or RIGHT?

    /**************************
     Author-
     Project- PID Code V-2
    **************************/
    float Kp = 0, Ki = 0, Kd = 0;
    float error = 0, P = 0, I = 0, D = 0, PID_value = 0;
    float previous_error = 0, previous_I = 0;
    int sensor[5] = {0, 0, 0, 0, 0};
    int initial_motor_speed = 100;

    void read_sensor_values(void);
    void calculate_pid(void);
    void motor_control(void);
    void motor_left(void);
    void motor_right(void);

    void setup()
    {
      pinMode(3, OUTPUT);  //PWM Pin 1;9;EN PIN
      pinMode(5, OUTPUT);  //PWM Pin 2;10;EN PIN
      pinMode(12, OUTPUT); //Left Motor Pin 1;4
      pinMode(13, OUTPUT); //Left Motor Pin 2;5
      pinMode(7, OUTPUT);  //Right Motor Pin 1;6
      pinMode(8, OUTPUT);  //Right Motor Pin 2;7
      Serial.begin(9600);  //Enable Serial Communications
    }

    void loop()
    {
      read_sensor_values();
      delay(15);
      calculate_pid();
      delay(15);
      motor_control();
    }

    void read_sensor_values()
    {
      sensor[0] = digitalRead(A0);
      Serial.print("Sensor[0]:");
      Serial.println(sensor[0]);
      sensor[1] = digitalRead(A1);
      Serial.print("Sensor[1]:");
      Serial.println(sensor[1]);
      sensor[2] = digitalRead(A2);
      Serial.print("Sensor[2]:");
      Serial.println(sensor[2]);
      sensor[3] = digitalRead(A3);
      Serial.print("Sensor[3]:");
      Serial.println(sensor[3]);
      sensor[4] = digitalRead(A4);
      Serial.print("Sensor[4]:");
      Serial.println(sensor[4]);
      delay(3);

      // for(int i=0; i<5; i++)
      // {
      //   sensor[i] = digitalRead(i);
      //   Serial.print("Sensor[i]:");
      //   Serial.println(sensor[i]);
      // }

      // Each pattern tests all five sensors (A0..A4) from left to right.
      if ((sensor[0] == 0) && (sensor[1] == 0) && (sensor[2] == 0) && (sensor[3] == 0) && (sensor[4] == 1))
        error = 4;
      else if ((sensor[0] == 0) && (sensor[1] == 0) && (sensor[2] == 0) && (sensor[3] == 1) && (sensor[4] == 1))
        error = 3;
      else if ((sensor[0] == 0) && (sensor[1] == 0) && (sensor[2] == 0) && (sensor[3] == 1) && (sensor[4] == 0))
        error = 2;
      else if ((sensor[0] == 0) && (sensor[1] == 0) && (sensor[2] == 1) && (sensor[3] == 1) && (sensor[4] == 0))
        error = 1;
      else if ((sensor[0] == 0) && (sensor[1] == 0) && (sensor[2] == 1) && (sensor[3] == 0) && (sensor[4] == 0))
        error = 0;
      else if ((sensor[0] == 0) && (sensor[1] == 1) && (sensor[2] == 1) && (sensor[3] == 0) && (sensor[4] == 0))
        error = -1;
      else if ((sensor[0] == 0) && (sensor[1] == 1) && (sensor[2] == 0) && (sensor[3] == 0) && (sensor[4] == 0))
        error = -2;
      else if ((sensor[0] == 1) && (sensor[1] == 1) && (sensor[2] == 0) && (sensor[3] == 0) && (sensor[4] == 0))
        error = -3;
      else if ((sensor[0] == 1) && (sensor[1] == 0) && (sensor[2] == 0) && (sensor[3] == 0) && (sensor[4] == 0))
        error = -4;
      else if ((sensor[0] == 0) && (sensor[1] == 0) && (sensor[2] == 0) && (sensor[3] == 0) && (sensor[4] == 0))
      {
        if (error == -4) error = -5;
        else error = 5;
      }
    }

    void calculate_pid()
    {
      P = error;
      I = I + previous_I;
      D = error - previous_error;
      PID_value = (Kp * P) + (Ki * I) + (Kd * D);
      previous_I = I;
      previous_error = error;
    }

    void motor_control()
    {
      // Calculating the effective motor speed:
      int left_motor_speed = initial_motor_speed - PID_value;
      int right_motor_speed = initial_motor_speed + PID_value;

      // constrain() returns the clamped value, so store it before use:
      left_motor_speed = constrain(left_motor_speed, 0, 255);
      right_motor_speed = constrain(right_motor_speed, 0, 255);

      analogWrite(3, left_motor_speed);   //Left Motor Speed
      analogWrite(5, right_motor_speed);  //Right Motor Speed

      //goForward:
      /* The pin numbers and HIGH/LOW values are configurable depending on connections */
      digitalWrite(12, HIGH);
      digitalWrite(13, LOW);
      digitalWrite(7, HIGH);
      digitalWrite(8, LOW);
    }
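To make my question clearer, this is how I currently think the left/right behaviour falls out of the signed PID value, written as a rough Python sketch of the logic (not my Arduino code, and the numbers are hypothetical):

    # Differential steering from a signed PID output (hypothetical values).
    base_speed = 100

    def motor_speeds(pid_value):
        # A positive PID output slows one wheel and speeds up the other,
        # a negative output does the opposite, so the sign already encodes LEFT/RIGHT.
        left = max(0, min(255, base_speed - pid_value))
        right = max(0, min(255, base_speed + pid_value))
        return left, right

    print(motor_speeds(+40))   # robot curves one way
    print(motor_speeds(-40))   # robot curves the other way

Is that understanding correct, or do I still need dedicated motor_left()/motor_right() functions?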
I'm trying to figure out a problem with a planar manipulator. The manipulator has a base, an end effector, and links L1 and L2, with 3 joints. Question: INVERSE KINEMATICS. Given the position (x, y, z) of the tool, I calculate the values of the joint variables ($\theta_1$, $\theta_2$, $\theta_3$ and $\gamma$) as follows. From the position (x, y, z) I calculate: $\gamma = atan2(z, x)$, $x' = \cos(\gamma)*x + \sin(\gamma)*z$, $y' = y$. I then use $x'$ and $y'$ to calculate the joint angles like this: is it correct? Question: FORWARD KINEMATICS. Given the joint variables ($\theta_1$, $\theta_2$, $\theta_3$ and $\gamma$), I need to calculate the position (x, y, z) of the tool. In this case I don't know how to continue. Can anyone help me?
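This is the forward-kinematics sketch I have so far, assuming a 2-link planar chain (L1, L2) that lives in the rotated x'-y' plane and that $\theta_3$ only sets the tool orientation without moving the tool point (these are my assumptions, please correct them if wrong):

    import math

    def forward_kinematics(theta1, theta2, gamma, L1, L2):
        # Planar 2-link FK in the rotated x'-y' plane.
        x_prime = L1*math.cos(theta1) + L2*math.cos(theta1 + theta2)
        y_prime = L1*math.sin(theta1) + L2*math.sin(theta1 + theta2)
        # Undo the base rotation gamma (inverse of x' = cos(g)*x + sin(g)*z, y' = y):
        x = x_prime*math.cos(gamma)
        z = x_prime*math.sin(gamma)
        y = y_prime
        return x, y, z

Is this the right way to go back from the joint variables to (x, y, z)?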
I have a sensor outputting correct, calibrated North-East-Down (NED) referenced quaternions describing the orientation of the sensor. It also outputs raw, sensor-referenced acceleration data, inclusive of gravity. I want NED free acceleration, without gravity. What I have been doing is rotating the acceleration data by the quaternion and subtracting gravity from the z axis. This doesn't really make sense to me. I think it should be something more like: rotate the acceleration data by the quaternion (brings the data to some strange frame?), subtract gravity from the z axis, inverse rotation (back to the sensor frame?), second inverse rotation (goes to the NED frame??). Any explanation would be appreciated.
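This is my current attempt as a Python sketch, assuming the quaternion maps sensor-frame vectors into NED and that z points down in NED (both assumptions about my device, and the gravity sign convention may well be what I have wrong):

    import numpy as np

    def quat_rotate(q, v):
        # Rotate vector v by unit quaternion q = (w, x, y, z).
        w, x, y, z = q
        u = np.array([x, y, z])
        return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

    def free_accel_ned(q_sensor_to_ned, accel_sensor, g=9.81):
        a_ned = quat_rotate(q_sensor_to_ned, np.asarray(accel_sensor, dtype=float))
        # With z pointing down, a stationary accelerometer should read roughly
        # (0, 0, -g) in NED, so I add (0, 0, g); if my sensor uses the opposite
        # sign convention I would subtract instead.
        return a_ned + np.array([0.0, 0.0, g])

So only one rotation into NED, then remove the constant gravity vector there - is that all that is needed, or are the extra inverse rotations I listed actually required?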
I'm working on a master's thesis and need to calculate the mass/inertia matrix ($M$), the Coriolis/centrifugal matrix ($C$), and the gravity vector ($G$) in the equation $M\ddot{\theta} + C\dot{\theta} + G = \tau$ (to get the dynamic model). I'm trying to implement PD control in MATLAB, but I'm unsure whether I need to recalculate $M$, $C$, $G$ for every configuration of the robot as it moves. I assume so, but I don't want to code it and then find out I didn't need to. Please advise.
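To show what I mean, this is the loop I have sketched (in Python rather than MATLAB, just to illustrate the structure), where mass_matrix(q), coriolis(q, qd) and gravity(q) are placeholders for my own derivations. Note that I re-evaluate them at every control step, which is exactly the part I am unsure about:

    import numpy as np

    def simulate_pd(q, qd, q_des, qd_des, Kp, Kd, dt, steps,
                    mass_matrix, coriolis, gravity):
        # mass_matrix(q), coriolis(q, qd), gravity(q) are hypothetical stand-ins
        # for the derived M, C, G; they depend on the current configuration.
        for _ in range(steps):
            M, C, G = mass_matrix(q), coriolis(q, qd), gravity(q)
            tau = G + Kp @ (q_des - q) + Kd @ (qd_des - qd)   # PD + gravity compensation
            qdd = np.linalg.solve(M, tau - C @ qd - G)        # forward dynamics
            qd = qd + qdd * dt
            q = q + qd * dt                                   # crude Euler integration
        return q, qd

Is recomputing M, C, G inside the loop like this the intended usage, or can they be computed once?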
On my Bit:Bot from 4Tronix (see here) I have an HC-SR04 ultrasound sensor attached. It uses solely pin15, for both trig and echo. I am able to use the Microsoft MakeCode 'block editor' to get a reading, but am unable to find a working solution for MicroPython. The closest I've found is this one on GitHub (below), but I just keep getting an error message of '-11' scrolling across the micro:bit's LED matrix. I've checked for error code -11 but nothing exists.

    from microbit import *

    class HCSR04:
        def __init__(self, tpin=pin15, epin=pin15, spin=pin13):
            self.trigger_pin = tpin
            self.echo_pin = epin
            self.sclk_pin = spin

        def distance_mm(self):
            spi.init(baudrate=125000, sclk=self.sclk_pin, mosi=self.trigger_pin, miso=self.echo_pin)
            pre = 0
            post = 0
            k = -1
            length = 500
            resp = bytearray(length)
            resp[0] = 0xFF
            spi.write_readinto(resp, resp)

            # find first non zero value
            try:
                i, value = next((ind, v) for ind, v in enumerate(resp) if v)
            except StopIteration:
                i = -1

            if i > 0:
                pre = bin(value).count("1")
                # find first non full high value afterwards
                try:
                    k, value = next((ind, v) for ind, v in enumerate(resp[i:length - 2]) if resp[i + ind + 1] == 0)
                    post = bin(value).count("1") if k else 0
                    k = k + i
                except StopIteration:
                    i = -1

            dist = -1 if i < 0 else round((pre + (k - i) * 8. + post) * 8 * 0.172)
            return dist

    sonar = HCSR04()
    while True:
        test = sonar.distance_mm()
        display.scroll(str(test))
        sleep(1000)

Please help if you can. Thanks
I am looking for a library (or algorithm) that computes 3D scene (point cloud) from 2 consecutive images of a monocular sequence. I mean something like OpenSfM but only for 2 consecutive frames and also with a known camera calibration data (I dont need to do "bundle adjustment" like most of the SfM libraries do). I do know my absolute camera translation for the scale.
Suppose there are two bots whose task is to detect each other and hit one another (I am participating in a sumo-bot competition). If both of them use the same kind of sensors, such as ultrasonic or IR proximity sensors for object detection, what will happen when they fight face to face? Won't the ultrasonic sound waves or infrared rays of the two bots interfere with each other? What will happen, and what is the solution if this interference occurs?
How do you construct the configuration space of a robot manipulator from its workspace? The manipulator has 3 degrees of freedom. Is the process the same for a manipulator with more degrees of freedom? Do more links affect the configuration space?
I ordered this cool little USB endoscope camera, but the cord does not look like any type of USB I have ever seen. I would eventually like to make this act like a webcam, so does that make a difference? It looks like this: The back is just flat. I tried plugging in an Arduino RedBot cord because I had one lying around, it doesn't fit.
Hello everyone, hope you are all alright. I am working on a mecanum-wheel robot and right now I cannot afford the LiPo battery it needs; the required voltage to power my robot is 12 V. So I am thinking of using a 12 V adapter to power the robot instead, and I want to know whether that is possible. First, I have a controller which takes a rated voltage of 6-12 V and a rated current of 1.5 A, and two motor drivers with a driving voltage of 6-12 V and a driving current of 8 A. My idea is to take an adapter like this one and connect it in parallel with these three boards to power my robot. Is that possible?
I have: a Raspberry Pi 3, a Pi camera, and a CC3D flight controller. I have already developed a Python script that decides whether the quadcopter drone has to turn left/right, move straight on, or stop. Is there a way to connect the Raspberry Pi to the flight controller to send it the commands? Is it as easy as it sounds? Edit: I can change the flight controller if necessary.
I am doing mobile robot localisation as a project. Now I am stuck on obtaining the equations for EKF localisation. My robot moves straight for 10 seconds parallel to the x-axis and is equipped with 2 sonars - one in front and one on the right side. As landmarks I have two lines, one in front of the robot and one on the right side, parallel to the x-axis. Could you please help me obtain the equations? The map looks like this: the robot is at the origin (0, 0), the landmark line parallel to the x-axis is y = -2000 mm, and the landmark parallel to the y-axis is x = 4000. I have already implemented the motion-based (prediction) step, which returns the estimated pose of the robot and its covariance matrix. The estimated pose of the robot is given by S = [Sx, Sy, Sth]. Please help me find the matrices of the correction step. What is my observation-model Jacobian matrix? The papers on the internet only give the Jacobian form for a nonlinear model, but in my case, as my teacher said, the model is linear, and I don't know how to obtain this Jacobian, which should be a 2x3 matrix H.
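Here is the measurement model I have sketched so far, assuming the front sonar looks along $+x$ at the wall $x = 4000$, the side sonar looks along $-y$ at the wall $y = -2000$, and the heading $S_{th}$ stays close to zero (all assumptions about my setup):

$$ h(S) = \begin{bmatrix} 4000 - S_x \\ S_y + 2000 \end{bmatrix}, \qquad H = \frac{\partial h}{\partial S} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} $$

which would make $H$ the constant $2\times 3$ matrix above, since $h$ is already linear in the state (which I think is what my teacher meant by "linear"). Is this the right direction?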
I'm a mechatronics engineering student, still new to the computer vision field, and am now working on a project that needs to track a small ball and determine its (x, y, z) position. 1st) The Kinect was bad at detecting small objects. 2nd) Stereo with 2 cameras that are not hardware-synchronized isn't accurate. 3rd) A single integrated stereo camera is very expensive and I don't have any information on how to work with one. So how can I track a ball and return its coordinates (x, y, z)?
I want to implement inverse kinematics and I understand the inverse-Jacobian method, but the servos I have only have a range from -90 to +90 degrees. How can I incorporate this constraint into the algorithm? I can't simply clamp each joint to its maximum/minimum when the calculated angle goes above/below the limits, because then subsequent iterations will not account for the fact that the saturated joint cannot keep moving along the computed path.
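This is the workaround I am considering, as a sketch: clamp after each damped-least-squares step, and when a joint is pinned at a limit and the update would push it further, zero that joint's column so the remaining joints absorb the motion. Is this a sensible approach?

    import numpy as np

    def dls_step_with_limits(J, err, q, q_min, q_max, lam=0.1):
        # Damped least squares step: dq = J^T (J J^T + lam^2 I)^-1 err
        W = J @ J.T + (lam**2) * np.eye(J.shape[0])
        dq = J.T @ np.linalg.solve(W, err)
        # Joints sitting at a limit and being pushed further are frozen this step.
        blocked = ((q <= q_min) & (dq < 0)) | ((q >= q_max) & (dq > 0))
        if blocked.any():
            Jr = J.copy()
            Jr[:, blocked] = 0.0    # remove the blocked joints' contribution
            Wr = Jr @ Jr.T + (lam**2) * np.eye(J.shape[0])
            dq = Jr.T @ np.linalg.solve(Wr, err)
            dq[blocked] = 0.0
        return np.clip(q + dq, q_min, q_max)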
I am reading particle filtering for robot localisation and specifically the resampling step to avoid particle degeneracy. Can anyone explain me what MC (Monte Carlo) variation means? I saw it couple of times as a benefit of some resampling techniques against others. For example, "Systematic resampling is the scheme preferred by the authors [since it is simple to implement, takes O(N) time, and minimizes the MC variation]" (Arulampalam et al., 2002).
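For context, this is the systematic resampling implementation I am looking at: it draws a single random offset and then steps through the cumulative weights at N evenly spaced positions, instead of making N independent draws, which is where I understand the low Monte Carlo variation comes from (please correct me if that reading is wrong):

    import numpy as np

    def systematic_resample(weights):
        N = len(weights)
        positions = (np.random.uniform() + np.arange(N)) / N   # one random offset, N even strides
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0                                    # guard against round-off
        indexes = np.zeros(N, dtype=int)
        i = j = 0
        while i < N:
            if positions[i] < cumulative[j]:
                indexes[i] = j
                i += 1
            else:
                j += 1
        return indexes   # resampled particle indices, O(N)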
When solving the kinematics problem for a robot using DH parameters, if there are some passive joints between the actuator and the end effector, do I have to define the axis and transformations of passive joints too just like for active joints?
I am using the "Choreography" software to interact with Pepper. I can create english dialogs there. But I have problems to change the language to (e.g.) German. In the settings of a Dialog I can choose other languages (e.g. German). But when I create a "Topic" there is only a checkbox for English. So I am confused at this point. :EDIT: The properties of the project itself does have language settings, too. There you have to check the languages you want to support. Then they are available for the modules. Currently I need to restrict my answer to the "Choreography" software only. The software now offers "Topics" in "German". But Pepper now does nothing. There is a problem I currently can not identify.
I have succesfully gotten my inverse kinematics method working using damped least squares and it has some really good results. My target position also includes a target angle for the end effector, which i calculate in in forward kinematics by adding the angles of the three elevation joints. The problem that occurs though is that when i set an unrealistic target angle it tries to accomplish it before actually getting to the target. So for instance, if i tell it to go to the a point (for example (2,-2,0) as showing below) with a Z height of zero and also tell it that it needs to come in at an angle of pi/2 (facing +Z direction) it has a ton of trouble because it's not physically possible for it to do so. Animation. But if i tell it to go to the same point and come in at a more reasonable angle of -pi/3 (facing downward towards -Z with some angle) it does a lot better because thats a physically realizable angle. Animation. I've tested the same algorithm without the target angle (which changes the jacobian because the target angle is no longer a part of the forward kinematics) and it works great, but i want to be able to have some control over the target angle. How do i implement logic that tells it to focus on getting to the target position first and THEN attempt to get as close to the target angle as possible without compromising the position?
I would appreciate it if anyone could help me understand how equations 6 and 7 are derived. Help with any one of $D_{22}$ and $C_{32}$ would suffice.
I'm making an Iron Man costume for Comic-Con and it has electronics in the hand and head. I'm going to route both to the back to be handled. There is a certain type of wire connector that I want to use to connect across the shoulder and neck and I don't know what it's called. The connector is the same one that small lipo batteries and pc fans use, the one with the clip to prevent unwanted detachment but pushing on it allows for easy removal. What is this type of wire connector called? I uploaded some pictures to make sure you know which ones I'm taking about. imgur.com/a/afeEw
I'm new to all this robotics stuff. Especially to Kalman filter. My initial goal is to have velocity as accurate as possible Here is my case: I have a phone which is mounted, for example in the car. So it has low cost GPS and IMU sensors. 2D GPS gives me: position (longitude, latitude, altitude) position accuracy (error can't be split into east, north, up directions) speed speed accuracy (error can't be split into east, north, up directions) heading angle heading angle accuracy IMU: (separated accelerometer, gyroscope and magnetometer). I fuse them myself Actually it's needed to be mentioned that I can't use magnetometer in my case. Since "car" is faraday cage. So I only can fuse accelerometer and gyroscope. Both of them are outputs from Madgwick AHRS (can get from here rotation matrix, quaternion if needed) and represented in North, East, Up dimensions. What I've done so far: I've implemented linear KF with position & velocity states. But I didn't achieve desired accuracy. Get rid of IMU data from chart above. It's IMU causes that drift. I have GPS updates every 1 second. And IMU with 13 Hz frequency. We can see here that every 13th iteration we have GPS updates and then IMU goes rogue. Used approach: Since I have GPS 1Hz and IMU upto 100Hz. But I took 13Hz in my case. Since I don't need to have so many updates. predict when IMU fires event When GPS fires event. I take latest IMU data. Do predict and then gps (position, velocity) update. Since my primary goal is to have accurate velocity. I don't care much about position and heading angle but... Since velocity correlates with them they can be added to Kalman Filter. Am I right? So my Kalman states are position, velocity and heading angle. Can I use something like? $$ x = x_i + v_i\Delta t + \frac{a_i\Delta t}{2} $$ $$ v = v_i + a_i\Delta t $$ $$ \theta = \theta_i + w_i\Delta t $$ Questions: Could velocity benefit from adding position and heading angle as states to Kalman Filter. Since there is some correlation between them. (Angular velocity impacts on velocity itself). Is it OK to use formulas from Linear motion? Because I have Curvilinear motion in my case. Almost all papers describe (position, velocity) model with KF. Can I take advantage in using EKF? I found some papers that mention odometry word. And seems like they have the same model. (pos, velocity, angle) What if after all manipulations velocity is still inaccurate? Should I apply additional instruments after KF? Can I somehow take advantage of current location and prev. location points? (For example, calculate velocity from two points. Of course it means that my unit moves linear and not by curve). Then somehow correct my predicted KF result with this velocity. Please help me with modeling Kalman Filter. And give an advice how to achieve best velocity accuracy. Thanks!
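To make the model concrete, this is the prediction step I have in mind so far (a constant-acceleration model in 2D with the full $\frac{1}{2}a\Delta t^2$ term, heading not included yet; variable names are mine):

    import numpy as np

    def predict(x, P, accel, dt, q_accel):
        # State x = [px, py, vx, vy]; accel = [ax, ay] from the IMU in the local ENU/NED plane.
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        B = np.array([[0.5*dt**2, 0],
                      [0, 0.5*dt**2],
                      [dt, 0],
                      [0, dt]], dtype=float)
        x = F @ x + B @ np.asarray(accel, dtype=float)
        Q = q_accel * (B @ B.T)        # process noise driven by accelerometer uncertainty
        P = F @ P @ F.T + Q
        return x, P

Whether adding heading as a state (and switching to an EKF) is worth it for velocity accuracy is exactly what I am asking about.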
I am trying to convert a laser scan into a point cloud so I can use its Cartesian coordinates for computer vision purposes. But when I do, it seems the width of the converted point cloud is only 1. Before this I was using a Kinect sensor, where neither the height nor the width was 1. I'm getting confused about using the Hokuyo/Multisense sensor. I am using Gazebo to get the lidar scan data. To convert the lidar scan into PointCloud2 data I'm using this program:

    #include <ros/ros.h>
    #include <tf/transform_listener.h>
    #include <laser_geometry/laser_geometry.h>

    class My_Filter {
    public:
        My_Filter();
        void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan);
    private:
        ros::NodeHandle node_;
        laser_geometry::LaserProjection projector_;
        tf::TransformListener tfListener_;
        ros::Publisher point_cloud_publisher_;
        ros::Subscriber scan_sub_;
    };

    My_Filter::My_Filter() {
        scan_sub_ = node_.subscribe<sensor_msgs::LaserScan>("/multisense/lidar_scan", 100, &My_Filter::scanCallback, this);
        point_cloud_publisher_ = node_.advertise<sensor_msgs::PointCloud2>("/my_cloud", 100, false);
        tfListener_.setExtrapolationLimit(ros::Duration(0.1));
    }

    void My_Filter::scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan) {
        sensor_msgs::PointCloud2 cloud;
        projector_.transformLaserScanToPointCloud("/head", *scan, cloud, tfListener_);
        point_cloud_publisher_.publish(cloud);
    }

    int main(int argc, char** argv) {
        ros::init(argc, argv, "my_filter");
        My_Filter filter;
        ros::spin();
        return 0;
    }

When I run this node, the height of the point cloud is fine but the width is only 1, and when I visualize the LaserScan, PointCloud or PointCloud2 in RViz, even while the lidar is rotating I can only see 2D points, as shown in the screenshot below. Why is this happening? What should I do to convert it into a 3D point cloud? All the points lie in a single plane.
I have seen methods to make a bipedal robot stand and walk while keeping its center of gravity stable, but how can I make the robot stand up again if it falls to the ground? For example, right now I'm using the Valkyrie robot in Gazebo, but the problem is that once it falls to the ground I have to restart the simulator or respawn the robot. Here's a screenshot of this: On the internet there are also methods describing ways to make a bipedal robot stable while walking and standing, but no way to make it stand up from a fallen state. So if anyone knows a method or algorithm, or has any reference, please let me know.
I am a beginner with drones. I would like to understand how each motor contributes to the thrust on the whole drone, and what the direction of the torque created by each propeller is. Everywhere I look I find tutorials with complex formulas. It would be great to have a simple answer about how the thrust system works on a quadrotor and how its torque mechanism works. It would also be great to know how lowering the speed of one propeller causes movement in a certain direction, and the relation of propeller motion to pitch, yaw and roll.
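For context, this is the mixing I have seen described for an 'X' quadcopter, written as a tiny sketch; the signs presumably depend on motor numbering and spin directions, so please treat them as my assumption:

    def mix(throttle, roll, pitch, yaw):
        # Assumed X layout: m1 front-left (CW), m2 front-right (CCW),
        # m3 rear-right (CW), m4 rear-left (CCW).
        # Every motor adds lift; roll/pitch/yaw commands just redistribute thrust,
        # and the two spin directions let the drag torques cancel (or not, for yaw).
        m1 = throttle + roll + pitch - yaw
        m2 = throttle - roll + pitch + yaw
        m3 = throttle - roll - pitch - yaw
        m4 = throttle + roll - pitch + yaw
        return m1, m2, m3, m4

Is this roughly the right mental model of how slowing one propeller produces motion in a given direction?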
I want to control an Arduino-based robot in real time, and I am confused about whether real-time control from Simulink is the better option or using ROS. I am familiar with Simulink. My application is controlling the robot with a controller subject to time delay. Can anyone suggest which way I should go?
I am stuck at a problem of solving DH parameters for a simple test mechanism. I know that the given mechanism is a structure but just for pure academic purposes assume that it's a real system. I only want to know the correct way to solve a system with DH parameters, when there is only one active joint and more than one passive joints that depend totally on the command of the actuated joint. -- Joints (All joints are Revolute): J1 (Actuated by some motor) J2 (Passive) J3 (Passive) -- Links: L1,L2 (Rigid Links) The link 1 and link 2 are not directly connected at joint J2 but there is a small link that connects link L1 with joint J2. It is not visible in top view but it allows the system to work in real life. But please just ignore it for this example! -- The images are the top view with the z axis for each joint coming out of the paper. -- Assuming clockwise rotation as positive for this problem. ![Page#1] [edit]I realize that the transformations below are wrong and not done by following the DH convention. I should have but instead I followed the standard method. But once I apply DH coordinate axis, I should use the DH method. But this still does not change the real question in the last picture. Thank you ![Page#2] @Chuck: Problem: J1 is the only active joint in the system. j2 and J3 are passive joints. I want to find the kinematic equations of the system(basically, I want to find the Joint angle of J3(Pitch) as a function of Joint angle of J1). I have shown the DH coordinate axis in the first picture. I can only measure J1 angle with an encoder but J2 and J3 are passive, so I have to find their relation with J1 to get their value for the transformation matrix. Since the choice of first joints x axis is arbitrary in DH method, I chose x0 axis parallel to link L2 and then found theta2 as a function of theta1. I want to know, is this the correct way to find the passive joints angle as a function of an active joint? If my solution of joint paramter Theta2 = 180 - Theta1 is not correct, then how would you find the joint parameter Theta2 as a function of Theta1, since you can not measure Theta2(for it is passive), you can only find Theta2 mathematically and its very important to find Theta2 to get the transformation matrix from J3 coordinate system to J1 coordinate system(The goal)... I expect the system to move like this when the Joint 1 motor is actuated in clockwise direction ![Page#3] EDIT Hey chuck, As the following sketch shows, the motor housing is fixed on the box, it rotates with the same angle as the box about joint J4. The blue dot in the previous image is the motor axis of rotation which I called J1 because it links the motor shaft with the link L12. If the motor shaft is rotated anti-clockwise by an angle Pfi, it makes the whole box move with it but not with the same angle. I have to find the box rotation about joint J4 as a result of this rotation of motor shaft about J1 by Phi.. As you explained, I can solve the 4 bar mechanism but how can this motors angle of rotation be related to the 4 bar mechasim's four angles Theta12,Theta23,Theta34 and Theta41 ? ![Page#4] After anti clockwise rotation.. ![Page#5]
What is the dimension of the Jacobian of a robot with 9 actuators and 6 degrees of freedom? 6 x 9 is my guess but not sure. Can someone please explain ?
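My current reasoning, please confirm: the Jacobian maps joint velocities to the end-effector twist, $\dot{x} = J(q)\,\dot{q}$, with $\dot{x} \in \mathbb{R}^{6}$ (3 linear + 3 angular components) and $\dot{q} \in \mathbb{R}^{9}$ for 9 actuators, so $J$ would be $6 \times 9$ (rows follow the task space, columns follow the actuators).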
I am trying to solve for the Forward kinematics of a spatial parallel manipulator. Its loop closure (LC) equations are pretty long, and are being solved using numerical methods. As I am adding more links to my manipulator, loop closure equations are becoming bigger and more difficult for the numerical method to handle. It just struck me then. In a CAD program, when I create an assembly of a manipulator and make its links move by dragging the links using mouse, CAD program is instantly solving forward kinematics of that manipulator. How come the CAD programs respond so fast? Are they solving LC equations all the time? Even if I add more links to my manipulator in CAD program, it has no difference/delay in response!!
I am working on a project "ROS enabled Quadcopter", and am using hector_quadrotor package. But I am facing problems in Flight controller. My world and quadcopter is running in gazebo but I don't know how to fly a quadcopter. There are many controller packages inside hector_quadrotor package. Moreover, I want to run the same on real quadcopter with raspberry pi. Anyone who can help me and send me some reference material. Thank You in advance.
Consider the following Arm: https://www.bcn3dtechnologies.com/en/bcn3d-moveo-the-future-of-learning/ I looked for the DH-Parameters of this arm and I found this: https://github.com/BCN3D/BCN3D-Moveo/issues/15 Although I don't think that this is correct because I think that the first Alpha should not be 0. P.S: A clear photo of the arm:
I wonder how to generate double s-curve velocity profile for multiple DOF trajectory. Since there are constraints on initial and final velocities which can be non-zero it is necessary to synchronize each DOF in time. Therefore firstly I would like to compute trajectory for DOF with the largest displacement and then trying to fit other DOFs in the computed execution time for the former. However I was not able to find anything about generating s-curve profile with given time. Having tried to solve it by myself I came up with a conviction that it is an optimization problem. I tried several approaches but they all seemed to have non-convex cost function and hardly could they satisfy constraints on final velocity. Having spent much time I wondered if there is an easy way to synchronize them?
Consider a multibody robotic system with Lagrangian $ L = \dot{q}^{T}M\dot{q} + V $ and equations of motion of the form $$M(q)\ddot{q}+C(q,\dot{q})\dot{q}+A^{T}(q)\lambda+N(q)=0 $$ where $N=\frac{\partial V(q)}{\partial q}$ and $\lambda$ represents the Lagrange multipliers for the constraint forces (MLS pg. 269; pg. 287 in pdf). $A$ is the constraint matrix from the Pfaffian form $A\dot{q}=0$. We are concerned with holonomic constraints, which can be differentiated to obtain the Pfaffian form. $ C=\sum_{k=1}^{n} \Gamma_{ijk} \dot{q_{k}} $ where $\Gamma_{ijk}$ are the Christoffel symbols of the first kind. The Lagrange multipliers can be calculated using the formula (MLS pg. 270, pg. 288 in pdf) $$\lambda =-(AM^{-1}A^{T})^{-1}AM^{-1}(C\dot{q}-N)$$ This gives the holonomic constraint forces $A^{T}\lambda$. I would like to see some examples of this method for calculating the holonomic constraint forces for planar systems. I have seen this method being applied to calculate the tension in a simple pendulum (MLS pg. 270, pg. 288 in pdf), but I would like to see examples of planar systems with more degrees of freedom. I'm specifically interested in the case of a 2R manipulator shown below. The equations of motion are on pg. 164-165 (pg. 182-183 in the pdf) in MLS. My objectives are: Apply the holonomic constraint $\theta_{1}(t)=0$ and obtain the holonomic constraint force. Show that we get the dynamics of a simple pendulum for the constrained system for initial conditions $\theta_{2}(0)=0, \dot{\theta_{2}}(0)=0$. I have done the calculation for the constraint $\theta_{1}(t)=0$ but I'm not able to show 2. The expression I get for $\lambda$ is $$\lambda= \begin{array}{c} \frac{g (d \text{m2} \text{r2} (\sin \text{t2})-\text{m2} \text{r2} (\cos \text{t2}) ((\cos \text{t2}) b+d))+b d (\sin \text{t2}) \dot{\text{t2}} \dot{\text{t2}}}{d} \\ \end{array} $$ where $(t1, t2)=(\theta_{1},\theta_{2})$ and $a, b, d$ are the geometric and inertial parameters $\alpha, \beta, \delta$ described in MLS (pg. 165; pg. 183 in the pdf). Suggestions for other methods to calculate holonomic constraint forces would be helpful too. References: MLS refers to Murray, R. M., Li, Z., Sastry, S. S., & Sastry, S. S. (1994). A mathematical introduction to robotic manipulation. CRC press.
My question is about localising points in 2D space. I know the exact pose (x, y, alpha) of my robot in a room. In the room there are two points of unknown location. The robot can measure the angle between itself and each of the two points, and it can also move around the room and measure more such angles from different positions. How do I solve this problem when I want to discover the positions of these two points in the room? I can move the robot wherever needed, but what should I do to discover these positions?
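This is the rough approach I have in mind, assuming the measured angle is a bearing to the point relative to the robot heading: each measurement from a known pose gives a ray, and rays from two or more well-separated poses can be intersected in least squares (variable names are mine). Is this the right direction?

    import numpy as np

    def locate_point(robot_poses, bearings):
        # robot_poses: list of (x, y, alpha); bearings: angle to the point,
        # measured relative to the robot heading alpha.
        A, b = [], []
        for (x, y, alpha), beta in zip(robot_poses, bearings):
            phi = alpha + beta   # ray direction in the world frame
            # Line through (x, y) with direction phi:
            # -sin(phi)*(px - x) + cos(phi)*(py - y) = 0
            A.append([-np.sin(phi), np.cos(phi)])
            b.append(-np.sin(phi)*x + np.cos(phi)*y)
        sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
        return sol   # estimated (px, py)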
I am working on writing code for a coordinated multi-robot rapidly exploring random tree (RRT) based planner, which would naturally involve a lot of sampling, nearest neighbor searching and 'radius' searching. Because this is a coordinated/cooperative planning step, all robots' paths are incrementally created in every iteration until all robots reach their respective goals, and the planner needs all the robots' positions during the planning phase. I have a basic framework for this working in MATLAB, which was only a proof of concept and not really efficient: but I am not sure what the best way to program it in, say, C++ would be. Normally for an RRT, I would go for a KD tree implementation, but in a multi-robot point of view, the environment would be a joint configuration space and this would mean a pseudo-high-dimensional KD tree: which is not actually high dimensional, but just needs to perform nearest neighbor searches in a space that combines the states (x,y,z, yaw) of all the robots - over and over again during the planning phase. The metric is simple enough, as it is just Euclidean distance, but I don't know if using KD trees for this will be computationally efficient. I'm looking for some suggestions on how to describe the configuration space for an efficient multi-robot RRT (I am thinking of a maximum of five robots) with a state dimension of 4.
We got a IROBOT Roomba 960 for our new apartment. The apartment has however in one door frame there's a step of about 26mm (roughly 1 inch). What's the best way for the Roomba to overcome this obstacle. I was thinking about building a small ramp. Is there a better/easier way to achieve this (except for lifting it manually to the other room)? If not, what angle should I use for the ramp, that the Roomba will be able to climb it on it's own?
I am trying to implement FastSLAM 1.0 and I am using sensory and control data acquired from Pepper via ROS to evaluate my implementation. However I am having major issues when I try and control Pepper's movement via ROS. I am using Ubuntu 16.04 and ROS kinetic. I set up my NAOqi bridging using the NAOqi driver command $ roslaunch naoqi_driver naoqi_driver.launch nao_ip:=<MY_ROBOT_IP> roscore_ip:=localhost network_interface:=wlp1s0 (I was originally using the pepper_bringup package to set up the brigde, however ran into errors stating I had not set up the PYTHONPATH to the NAOqi API properly $ roslaunch pepper_bringup pepper_full_py.launch nao_ip:=<yourRobotIP> roscore_ip:=<roscore_ip>) I then run RViz, having sourced my setup.bash $ rosrun rviz rviz I then find and load pre-configured RViz configuration in my catkin workspace into RViz src/pepper_robot/pepper_description/config/pepper.rviz From there I am able to see Pepper in RViz along with the camera images and sonar . Then I set up moveit so I can control Pepper with the commands $ export NAO_IP=<YOUR_ROBOT_IP> $ roslaunch pepper_dcm_bringup pepper_bringup.launch $ roslaunch pepper_moveit_config moveit_planner.launch This starts the Moveit RViz GUI where I start my planning of the route I want Pepper to execute, following this documentation. However whenever I execute it, it always fails and the error message I get from the terminal says that it cannot access any controls, (I thought I had a screen shot saved of the message, however I don't, but will be working on Pepper again tomorrow, so can acquire one then) Has anyone got experience controlling Pepper using ROS and Moveit and may have had similar issues which they managed to resolve? Otherwise has anyone any tips on how to control Pepper through other means? I have found documentation on ways to control it with a joystick, again if anyone has tried this and been successful, I would be grateful for some advise on how they managed to get it working.
I am trying to make my First Quadcopter but having problem with cc3d connection, The CC3D cable seems extremely complicated. I would have figured the pin out would have started with a red/black/white for the power/ground/signal for the first port of the receiver but the colors are all weird. They are in order: Black Red Blue Yellow White Green Blue Yellow The duplicate colors and the white in the middle is throwing me off. I've searched openpilot.org but can't seem to find a wiring diagram or a pin out diagram to tell whats what. Just wondering if someone can help me out. How should I connect these in Flysky FS-i6 Receiver? cable sequence is below
At 11:13 or 2:22 of this video, or 0:09 of this second video, the drone itself is on a flat surface but the flight controller must have been mounted on the frame slightly tilted (not intentionally), so the pitch and roll angles are not exactly 0 degrees. When the PID makes corrections to bring the FC to a 0-degree level for hovering, the drone itself will be tilted, so the drone will drift in the x or y direction. Should we mount the FC in such a way that we read exactly 0 degrees, or do we correct for this during flight with the remote controller to keep it stable? Or is there a trick in the flight controller code, like taking the start-up angles as the level reference, so that -2 degrees, for example, is treated as level? I hope I am clear. Any help is appreciated.
I am currently making a 2-wheeled SLAM robot that will use an array of either ultrasonic or Sharp IR sensors with a particle filter. I also have an MPU-6050 and a GY-271 and was planning to turn them into an AHRS. However, considering that the map is going to be on a 2D plane, do I really need a full AHRS? Can I not just use the magnetometer as a compass for rotation readings?
I am trying to build a low-cost SLAM system with an MPU-6050 and a GY-271 (magnetometer). Currently, I have a robot with an Arduino that collects the sensor data and a Raspberry Pi that (hopefully) will do the SLAM calculations. I want my robot to be able to use all three sensor readings in SLAM to create a 2D map of the environment. However, considering that I want a 2D map, I will not need all the axis readings, correct? I read another post on here where one of the answers said that only the yaw from the gyroscope, and the x and y from the accelerometer, would be needed. My question is, how would I implement this in my SLAM robot? I was thinking of passing the accelerometer and odometry readings through a Kalman filter on the Arduino, and then doing the same for the gyro and magnetometer readings. Would that be correct? Would I also need to use all the axes (x, y, and z) of the magnetometer, or just one or two? Thanks.
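For the heading part, this is the complementary filter I was considering (axis signs are a guess on my part - they depend on how the GY-271 is mounted - and it assumes the robot stays roughly level):

    import math

    def fuse_yaw(yaw, gyro_z, mag_x, mag_y, dt, alpha=0.98):
        # Trust the gyro over short horizons, let the magnetometer heading slowly
        # pull the estimate back to remove gyro drift.
        yaw_gyro = yaw + gyro_z * dt
        yaw_mag = math.atan2(-mag_y, mag_x)   # only valid if the IMU is held level
        # Blend on the circle to avoid the +/-pi wrap problem.
        err = math.atan2(math.sin(yaw_mag - yaw_gyro), math.cos(yaw_mag - yaw_gyro))
        return yaw_gyro + (1.0 - alpha) * err

Would something like this, plus accelerometer/odometry for translation, be a reasonable front end for the 2D SLAM, or is a proper Kalman filter needed here?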
I have a couple of tightly related questions regarding EKF-based visual SLAM. It is common in the Kalman based VSLAM litterature to marginalize 3d points and past poses. Condsidering the EKF as a bayesian filter, one can write its prediction step $\tag{1} P(x_t|z_1,...,z_{t-1})=\int_{x_{t-1}} P(x_t,x_{t-1}|z_1,...,z_{t-1})dx_{t-1}$ with $x_i$ and $z_i$ respectively denoting states and measurements. The right side of (1) clearly marginalizes the past state. So, here are the questions: I understand that in EKF based SLAM, the current state needs to contain new features (e.g. newly triangulated points) but does not contain those that are not viewed anymore. So, $x_t$ and $x_{t-1}$ are not exactly the states of the same system. However, I don't see any other way of marginalizing older feature points than using (1). Is this the manner in which older points/features are magrinalized, or are there other intermediary steps? In Key-frame based Bundle Adjustment approaches, one discards redundant information by dropping key-frames (although nowadays, more and more sytems take the filtering approach here and marginalize instead of discarding). I was wondering, since I couldn't find anything relevant, if there exists Key-frame EKFs? Is there an inherent property of the EKF that makes it incompatible with Key-frame selection?
I'm strugling on that problem for a while, so any help is welcome. I need a trajectory representation that is performant for optimization, i.e. I want something that computes quickly. The function I want to optimize takes into account the duration of the trajectory, and samples some points along it to compute some costs related to the environment (distance to obstacles, time to collision,...) and the trajectory (local speed, acceleration and jerk limitation,...). For now I'm using a method based on waypoints and a dynamic local method that is too heavy for my usage. My optimization variables are the waypoints (position and a few derivatives), and the local method computes a spline between each successive waypoint. I think I can make profit of a less accurate method: I don't need to control the exact speed of each waypoints, nor their exact position and timing. To be more specific, here are a few constraint I can think of: the optimization algorithm can have some control on the speed along the trajectory (e.g. by constraining the duration on portions of the trajectory, or specifying speed vector on waypoints...). The speed may strongly vary along the trajectory; I need a quick answer about the feasibility of the trajectory (low CPU usage); In the case the method is based on waypoints, I can use only the waypoints to compute my cost, then the computation of a specific state along the trajectory can be slow, I don't care; Otherwise, I need to quickly access a certain number of states to sample costs along the trajectory. Start and final positions of the trajectory are fixed. I may need to specify their speeds also; The trajectory is continuous in position, speed and acceleration, and jerk is bounded; For now I'm focusing on holonomic problems: 2D/3D navigation, with orientation, arm motion... I've been thinking of B-Splines, or Bézier curves, but I don't know much of their maths, and I'm not sure whether they fit my requirements... Hope my question is clear enough, any advice is welcome! Note: I'm not looking for a motion-planner, I want to optimize an existing trajectory to global/local optimum regarding some cost and constraints.
I would like to know whether there are simple equations to find how much motor power is needed to lift 1 kg with a quadrotor. Also, if we use a gas-powered engine, how can we find the relation between the lifted weight and the fuel consumption?
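To make the question concrete, here is the rough estimate I am trying to validate (very much a back-of-the-envelope sketch; I assumed 10-inch propellers and about 50% overall propeller-plus-motor efficiency, and real numbers will depend heavily on the propulsion setup):

    import math

    m, g = 1.0, 9.81          # 1 kg all-up weight
    rho = 1.225               # air density, kg/m^3
    prop_d = 0.254            # assumed 10-inch props (0.254 m)

    T_total = m * g                        # ~9.8 N of hover thrust
    T_motor = T_total / 4                  # ~2.45 N (~250 g) per motor
    A = math.pi * (prop_d / 2)**2          # disk area of one prop
    P_ideal = T_motor**1.5 / math.sqrt(2 * rho * A)   # momentum-theory lower bound, W
    P_real = P_ideal / 0.5                 # assumed ~50% overall efficiency
    print(T_motor, P_ideal, P_real)        # ~2.45 N, ~11 W ideal, ~22 W per motor

That would suggest something like 80-100 W total at hover for 1 kg, before any thrust margin for control. Is this kind of estimate the "simple equation" I should be using?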
I have gone through my lecture notes but I can't seem to figure out the five different steps of a Kalman filter, how these steps are divided into prediction and correction, and how the state and the uncertainty evolve with each step. I would appreciate it if someone could explain it to me.
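For reference, these are the five equations as I currently understand them, grouped into prediction and correction (please correct me if this grouping or my reading of it is wrong). Prediction:

$$\bar{x}_t = F_t x_{t-1} + B_t u_t$$
$$\bar{P}_t = F_t P_{t-1} F_t^T + Q_t$$

Correction:

$$K_t = \bar{P}_t H_t^T (H_t \bar{P}_t H_t^T + R_t)^{-1}$$
$$x_t = \bar{x}_t + K_t (z_t - H_t \bar{x}_t)$$
$$P_t = (I - K_t H_t)\bar{P}_t$$

So, as far as I can tell, the prediction pushes the state through the motion model and grows the uncertainty by $Q_t$, while the correction pulls the state toward the measurement and shrinks the uncertainty. Is that the right picture?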
I am trying to figure out if WiFi from TP-Link N450 Wireless WiFi Router could reliably get through lexan. I need the wifi to be able to have a constant, reliable signal.
Can anyone please share the command for finding the frequency of CAN send and CAN receive messages? I'm sending CAN messages using a serial port and receiving them with a PEAK CAN USB adapter. There are no errors and no dropped messages on either the receiving or the sending side, but I don't know the command to get the frequency. I'm only using the simple candump and cansend commands in Ubuntu. I know the bps, but I want to know the message frequency on the sending and receiving sides. Thank you
Different question from the last one since I still struggle with the concept. In his book "Probabilistic Robotics", Thrun has the following equation: (Context here) (5.49) $p(x_t|u_t,x_{t-1},m) = \eta p(m|x_t,u_t,x_{t-1})p(x_t|u_t,x_{t-1}) $ I can't really wrap my head around any probability that involves $p(m)$ or $p(m|...)$. What is the probability of a map? Isn't it always 1, because i have just one map? How do I have to imagine $p(m|x_t,u_t,x_{t-1}$)? In his book, he kind of just handwaves it...
This is a homework question from edx course Robot Mechanics and Control, Part II Given the following and expressing its forward kinematics as $T = e^{[S_1]\theta_1} ... e^{[S_6]\theta_6}M$ It is can be found (and also shown in the answer) that $$ [S_2] = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & -2L \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$ and $$ M = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 3L \\ 0 & 0 & -1 & -2L \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$ Part 4b) requires expressing the forward kinematics in the form of $T = Me^{[B_1]\theta_1} ... e^{[B_n]\theta_n}$ and finding $[B_2]$ I wanted to try deriving the answer using the following property (as found in the lecture notes page 20 and in the lecture around 3:00, basically using property $Pe^A = e^{PAP^{-1}}P$ for any invertible $P$): $$e^{[S_1]\theta_1} ... e^{[S_n]\theta_n}M = Me^{[B_1]\theta_1} ... e^{[B_n]\theta_n}$$ where $$[B_i] = M^{-1}[S_i]M$$ I get $$ M^{-1} = \begin{bmatrix} 0 & 1 & 0 & -3L \\ 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & -2L \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$ Using the property $$[B_i] = M^{-1} [S_i] M$$ to calculate $B_2$ and I get $$ [B_2] = \begin{bmatrix} 0 & 0 & 1 & -3L \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & -5L \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$ which is obviously wrong. What am I doing incorrectly? Thanks in advance The correct answer is $$ [B_2] = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & -3L \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$ Full question with answers below:
The current Kawasaki robot controllers are equipped with a power connector different than the one on prior controllers (see pictures below). I need to order this type of plug and can not find out the type of the connector or any information about it. Does anyone recognise this connector and could provide it's type or any information that could lead to the type?
I have a robotic system to develop, in first phase of the project I need to track an object. So I placed a geometrical marker on the object to estimate it's pose (rotation, translation). It all works fine until the target object moves a little faster and motion blur is introduced to the input images from camera. So I have two options: Either deconvolve the images to remove blur before object tracking OR use a camera with very high frames per second The problem is first solution is too slow for a real-time system and second option is too expensive. Please tell me if there is a more efficient technique than older ones (Weiner, Lucy-Richardson, Blind Deconvolution) to remove motion blur. I believe there must be something, because mobile robot need real-time calculations from their camera inputs and motion blur is a common problem when robot or the target object is moving. I'm using Python 2.7 with Opencv 3 and ROS Kinetic. Otherwise let me know how many fps are sufficient to observe a human who can walk, run, fall from a distance of 10-20 ft. I can't go and buy a 400-1000 fps camera just to check the output. Below are the images of some sample markers. Actual Markers: Blurred Image when Moving
I think there are some quadcopters on the market that can find you via your smartphone. I don't know how accurate this is or how it works. Is there a way I can build an Android/iOS application such that the quadcopter can find exactly the smartphone which has this app installed?
I have several questions about the process of marginalization in SLAM algorithms: 0 - What is the mathematical intuition behind the marginalization process? 1 - I know that marginalization of states or points is related to removing nodes from the graph while keeping the information they carry; how is this information kept? I mean, how is the information of the marginalized states passed to the remaining states? 2 - Why, in bundle adjustment, are the points set as marginalized (for example in the ORB-SLAM implementation)? Thank you very much!
I am doing a project where my robot will be using a Kinect + ROS for simultaneous localization and mapping (SLAM). Which board should I use? I've heard that the Raspberry Pi and BeagleBone are not sufficient.
I would like to know whether we can make a website or application (or use something free like Google Maps) to plan a route so that our robot can follow the path exactly. For example, I would like to build a quadcopter, specify some streets on the map, and have my quadcopter fly over those specified streets.
If I use an industrial robot with its hardware controller, should I install a hard real-time Linux such as Xenomai to use this kind of hardware controller? If not, in which cases is Xenomai needed?
I would like to generate a trajectory for a quadrotor UAV and I am using an architecture which allows to do so only by specifying position, velocity and acceleration. I only know that I want to do a circle in 2D (fixed z for example) and therefore I should give a sine wave on the x and a cosine on the y. So far everything is ok. I am working in MATLAB/Simulink and therefore to generate the position I simply use an integrator block and I get it. What about the Acceleration? If I do just a derivative of the velocity my trajectory is not working, I don't know why. Is there a better way to do that? A friend suggested me a second order filter to generate the trajectory but I don't know what he really means. Could you please help me? Thanks.
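To be concrete, this is what I think the three signals should be for a circle of radius $r$ at fixed height $z_0$ traversed with angular rate $\omega$ (each one the analytic derivative of the previous, so no numerical differentiation of the velocity):

$$p(t) = \begin{bmatrix} r\cos(\omega t) \\ r\sin(\omega t) \\ z_0 \end{bmatrix}, \quad v(t) = \begin{bmatrix} -r\omega\sin(\omega t) \\ r\omega\cos(\omega t) \\ 0 \end{bmatrix}, \quad a(t) = \begin{bmatrix} -r\omega^{2}\cos(\omega t) \\ -r\omega^{2}\sin(\omega t) \\ 0 \end{bmatrix}$$

I suspect my friend's "second-order filter" suggestion means feeding the reference through a critically damped second-order system and reading off its state and derivatives to get consistent position/velocity/acceleration, but I am not sure. Is the analytic approach above the right way, and why would differentiating the velocity block in Simulink not work?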
I understand that motors come with recommended speed controllers, be it encoders or esc. But what is the difference between the two?
I am developing an inverted pendulum that uses a differential drive robot as base. The goal is to create a benchmark to practice my control using ROS, so I can implement what I learn on the Modern Control Engineering (5th Edition) [Katsuhiko Ogata] , and other books and Journals. However, I am running into problems because of my lack of experience. My hardware is composed of 2 Nema 17 stepper motors, 1 Energizer XP18000 as main battery,2 Motor Drivers DRV8825, 1 Arduino Mega, 1 Raspberry Pi 3, 1 Honeywell rotary Encoder 600 series. I originally tried to do something similar to the b-robot but it ended being better to build a cart for benchmark because it is more stable and I can use an encoder to know the position of the rod. The problems are: low torque and too much mass, the robot is slow due to inertia. Arduino Mega, and basically any AVR microcontroller is too slow for Real-Time, the fastest I can communicate with Raspberry Pi3 is 30Hz. I am thinking to do these changes: Change the Arduino by a Nucleo Mbed board. Running the Raspberry pi on non-gui or snap like the ubuntu core, but with Ubuntu Mate. Change the motor for DC or Servo motor. I need to redesign the robot structure, making it small and light. I would like advises on which hardware select for developing a precision robot. Which motors and motor drivers. If Nucleo boards would work well. This kind of things. I also would like to know the best practices for programming the controllers for ROS. I attached an image of the robot.
I mounted VLP-16 on the roof of my car while driving, I was expecting the lidar to detect the lanes on the ground since the white lines should have higher reflectivity. But in fact I cannot see any patterns on the ground. This is my rviz screenshot, I can get some higher intensity value on vertical objects, but cannot detect the lanes on the ground. Is this normal? Do I need to calibrate my lidar for lane detection?
I am doing a robot localisation project. Now I am stuck on the effect of measurement and process noise in Kalman filtering. Could you please explain what impact they have while estimating the position of the robot? I mean, how does the error ellipsoid (which shows the pose uncertainty) change with respect to changes in the process and measurement noise? To estimate the pose of the robot, the two sources of information (dead-reckoning estimation and sensor measurements) are combined with a Kalman filter. Cheers, nekromant
As part of an internship I was asked to design and develop the core control system for an autonomous small-scale (2m length) solar vessel to be able to sail around the Baltic Sea. The boat should be able to sail following predefined waypoints but, thanks to AIS, Camera (collision avoidance) and a path planning algorithm, redefine its route according to the obstacles sensed. For the hardware part it runs a Raspberry Pi with the high level navigation system and an Arduino to control propeller and actuators as well as provide basic navigation functions in case of Raspberry failure. Now, before digging into coding I checked for existing solutions and found out the ROS (Robot OS) middleware, which comes with interesting abstractions for the multi-threading and multi-processing, message exchange locally and among diverse hardware architectures (Raspberry and Arduino in this case). However, I am concerned ROS would add considerable load on the Raspberry processor and increase power consumption and it would prevent fine-grained control over hardware, probably system instability too on the long run. The control software has to access sleep functions on sensors and on the Pi itself, in case of power shortages, to dynamically suspend and restart processes and it needs to run 24/7 for months without human interaction. Is ROS suited for these tasks or should I think about creating a custom software from scratch? Thanks
I would bet that usually these are compiled for better performance, and of lawful reasons (make it harder for any user to save code without decompiling), though I'm not sure. In any case, I believe performance is the major consideration (I'm very new to programming and cannot tell how data processing of robots is different, say, than that of high-end PC games or heavy PC software and if performance considerations will be usually similar or how different). Are robot's codes usually installed when compiled from source code in an external computer or as pure source codes interpreted by some engine installed on the robots operating system? (please example on JavaScript and Raspberry pi if possible at the moment).
I have a Roomba 871 and I am trying to communicate with it through the SCI port. I am using RealTerm for testing, following the "How to Program Roomba - RealTerm Terminal" guide. If I try a baud rate of 57600, the only thing I am able to do is get the Roomba to perform a spot cleaning; to trigger that, all I have to do is send it a value of 130-139, 230-239, 330-33..., but I have to send it as ASCII, not as numbers as mentioned in the description above. If I try a baud rate of 115200, the only thing I am able to do is switch it off; but then it stays off until I lift the Roomba up, and pushing the Power button does nothing in that case. To switch it off I send the numbers as in the description above: 140 0 1 62 32, but sent as numbers! This is quite strange and I really don't know what to do, so please help.
My stepper motor specifications: phases = 2, steps = 200 steps/rev, voltage = 12 V, current = 0.33 A/phase, resistance = 34 ohm/phase, inductance = 46 mH/phase, holding torque = 23 N·cm min, detent torque = 4.6 N·cm min. I am building a CNC machine, so I need a power supply that can drive 3 stepper motors with these ratings. Can someone calculate the supply voltage and current I should get to power my CNC stepper motors?
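Here is the rough sizing I came up with so far, assuming current-limiting (chopper) stepper drivers that set the 0.33 A phase current themselves; please tell me if this is the right way to think about it:

    motors = 3
    phases_per_motor = 2
    i_phase = 0.33          # A per phase (rated)
    v_supply = 12.0         # V (matches the motor rating; chopper drivers often accept more)

    i_total = motors * phases_per_motor * i_phase    # ~2.0 A worst case, all phases energized
    p_total = v_supply * i_total                     # ~24 W
    i_recommended = i_total * 1.5                    # ~3 A with headroom for drivers/logic
    print(i_total, p_total, i_recommended)

So would a 12 V, roughly 3-4 A supply be a sensible choice, or does the supply need to be sized differently?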
I have two objects, A and B. A is at a distance "Y" from the ultrasonic sensor and B is at a distance "X". If B is fixed and A is moving, does the ultrasonic sensor still receive an echo from A and detect A's distance?
We know that ultrasonic sensors can't accurately measure the distance to an object that is more than 3 meters away (the range may vary by brand), has its reflective surface at a shallow angle (the angle may vary by brand), or is too small to reflect enough sound. Also, environmental influences such as air temperature, air pressure, air currents, humidity, mist and types of gas cause inaccurate measurements. Which sensor should I use to get the best measurements?
I have a closed-loop system with the following discrete-time plant: $P(z) = \frac{0.1262}{z^2-0.3303z+0.07517}$ With a fixed (and horrendously low) sampling time of 0.05 seconds. The plant has a time constant of approximately 0.03 seconds - this was calculated using MATLABs System Identification Toolbox. I am trying to design a PI/PID controller that will attenuate load disturbances (injected after the plant) with frequencies < 1 Hz without amplifying higher frequencies. No matter how much I play with the gain and phase margins, I cannot seem to achieve this. Seen below, is a picture showing a nasty bump between 1 and 6 Hz (circled in red) that I cannot seem to remove and retaining disturbance rejection at low frequencies. I believe that this is an impossible task to accomplish with such a low sampling rate. If so, are there any other control typologies I can take a look at? In the past, I designed a Smith predictor, and yes it improved disturbance rejection but not by very much - I am still getting amplification at higher frequencies. Thank you,
I'm programming a quadcopter controller. I've managed to make it fly. But, I'm not sure how to set up the timings schedule for each part of the software. I have several sensors (gyro, accelerometer, magnetometer, barometer) with output at various frequencies. Those data go into the sensor fusion. Then, the fused data goes through a PID controller. The PID output goes into the motors. So, my questions are: How often should I read samples from a sensor (read the register over i2c) with relation to the frequency I configure the sensor to? How should hardware filters such as a low pass filter on the accelerometer be applied here? How quickly (frequency wise) should I use that data during sensor fusion (such as a complementary filter)? Should the fusion timing be based on when a new sample is ready or do I run the fusion faster or slower than I sample? How do sensors sampling at different speeds affect fusion? For example, my magnetometer has a max data rate of 100hz, but currently, my gyroscope is running at 200hz. Should the PID controller run at a different rate from the fusion?
Here's the MATLAB code. I start with x and y, and when I pass them through IK and then FK I get back the correct x and y, so I am confident that the IK and FK are correct.

    x = 10; y = 10; z = 0;
    a2 = 7.5;
    a3 = 9;
    r = sqrt(z^2 + y^2);
    th1 = atan2(y,x);
    th3 = acos((r^2 + x^2 - (a2^2 + a3^2))/(2*a2*a3));
    th2 = atan2(x,z) - atan2(a3*sin(th3), (a2 + a3*cos(th3)));
    T0 = compute_dh_matrix(0,0,0,0);
    T01 = T0*compute_dh_matrix(0, -pi/2, 0, th1);
    T02 = T01*compute_dh_matrix(7.5, 0, 0, th2-pi/2);
    T03 = T02*compute_dh_matrix(9, 0, 0, th3);
    T03(1:3,4)
    % ans =
    %   10.0
    %   10.0
    %    0

I have Tower Pro SG90 micro servos for the base, shoulder and elbow joints. I am powering them with a mobile charger that outputs 5 V and 2 A. Now I want to try drawing some simple shapes with this robot. I tried giving it a sequence of x and y locations with z = 0, but the results are not good. Do I need another DOF on the end effector? Is there any way to map x-y to theta_start - theta_end for every angle? This is the diagram of the robot; I have a small change in that my zero position is upright, not with a bend as shown below, but the rotations and angles are the same. I am trying to draw in the X-Y plane, as z is zero all the time. a2 and a3 are the link lengths. I don't know how to control the speed of the servos; I just hacked the sweep code from Arduino. I just want to draw some basic shapes or a line; I tried giving (9,9), (10,10) and (11,11), thinking it would draw a line. These images show my setup in detail. I haven't accounted for all the offsets yet. I was getting close but not the exact x and y that I entered into the IK->FK functions, so I thought I could work with less accuracy to get results quickly. If you can help with IK for the offsets, that would be great too. I really hope this is a problem that you guys consider within the rules, and as I have done work on FK and IK, I hope this question makes sense. Thank you.
I am trying to understand this IMU calibration video. https://www.youtube.com/watch?v=xF7sLU0fX7k&feature=em-comments In the video the operator gets accelerometer data when pointing each axis in the gravity direction. He then has an algorithm at minute 6:34. I don't know where this algorithm comes from. My understanding of how you get an accelerometer bias is you add the output when an axis is pointing in the gravity direction to the output when that axis is pointing antiparallel to gravity and then you divide by 2. Where is he getting this algorithm?
I'm working on a self-balancing TWIP (Two-Wheel Inverted-Pendulum) robot project which will be using two brushed DC gearmotors with encoders. As there were no speed constraints for the robot (and given a wheel radius of 0.03m) I chose a top speed of 1 m/s for the robot, giving a rated speed requirement of 318.28RPM for the gearmotor. From what I understand, a high enough RPM is required to prevent the motor speed from saturating while the robot is balancing, as this can prevent the robot from maintaining balance i.e. the wheels can’t keep up with the body of the robot, and it falls over. But, an encoder allows for direct measurement and control of the motor's speed, which can prevent it saturating i.e. it can prevent the motor from approaching speeds it can’t handle. So, will the RPM I’ve calculated be fast enough for the robot or, since I’m using motors with encoders, can any motor speed be used?
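For context, this is how I arrived at the 318 RPM figure (a quick sanity check, assuming no gearbox losses or wheel slip):

import math

v = 1.0    # target top speed, m/s
r = 0.03   # wheel radius, m

omega = v / r                       # wheel angular speed, rad/s
rpm = omega * 60.0 / (2.0 * math.pi)
print(rpm)                          # ~318.3 RPM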
In our lab we mostly use ATI 6-DOF force sensors, but they are so frustrating that I dread every experiment that needs one. The reasons I don't like them are:

- For the version I have (Mini45), I have to use an NI DAQ to read the data, and the NI DAQmx drivers are x86 and mostly Windows only, not compatible with the Mac, Arduino or Raspberry Pi that I use to control my experimental setups. I could have used one of those NetBoxes, but I would have to pay a couple of thousand to buy one, probably a couple of thousand more to calibrate the sensor, wait a couple of months to get it calibrated, and so on.
- They are difficult to use. They have a GUI (Windows only) which you have to install (and which, by the way, reminds me of the kind of software we used in the 90s). They also have some LabVIEW examples. These are way too complicated to get working.
- Even when you get them to work, the signal is horribly noisy and you have to do all kinds of magic filtering to get something useful.
- They are very sensitive. For example, if they are in contact with a cold metal or there is air blowing at them (which happens a lot in my experiments involving pneumatics), they just give wrong measurements.
- They are so delicate that if you bend the cable too much or screw them to your structure too hard, they break and you have to pay a couple of thousand to send them back and have them fixed.
- The whole sensor + driver + DAQ setup fills a whole table with cables and boxes.
- Customer service is bad. They are just like chatbots giving you pre-prepared answers.
- There is no forum or user community (to my knowledge).

I just don't know why everyone keeps using them. I'm really tired of all this wasting my time and resources, so I was wondering if you could give me some suggestions. What I need:

- a one-dimensional force sensor
- a 5 V analogue output which I can measure with, for example, an Arduino, or a USB serial port which I can read with PuTTY, for example
- up to 1200 N; resolution is not that important (if I use an Arduino to read it, then 1200/1023 is the best resolution I can get anyway)
- affordable: not more than 100 €, let's say
- available here in Europe, with good local customer service
- with good examples, tutorials and a good user community

I did google first, but there seem to be far too many options and I'm confused about which one to choose. I thought maybe you could share your experience and give me some insights.
I want to design an octocopter that will respond to incidents of crime and terrorist attacks, survey and gather intel, and come back to HQ. I have used a 16000 mAh 6S LiPo battery with no load, but it can only fly for 20 minutes. I want this octocopter to carry radioactive/chemical testing equipment and have a flight time of 45 minutes. How likely am I to achieve 45-minute flight times? There must be a way of optimising flight time. I know I can double the number of batteries, but this may make the craft unstable, and I may need to scale up the motor size to make the payload less of an issue. The larger the craft, the more dangerous and costly it is, so I'm really interested in optimising it as best I can. I know there are hydrogen fuel cells, and maybe some other sort of alternative battery. Please help, thanks.
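For what it's worth, here is the rough model I have been using to sanity-check flight time (a simple energy budget sketch; it ignores the extra mass of added batteries and payload, which in reality eats most of the gain):

def hover_time_minutes(capacity_mah, usable_fraction, avg_current_a):
    # usable_fraction ~0.8 to avoid over-discharging the LiPo
    return (capacity_mah / 1000.0) * usable_fraction / avg_current_a * 60.0

# backing out my current hover draw from the 20 min figure:
# 20 = 16 Ah * 0.8 / I * 60  ->  I ~ 38.4 A
print(hover_time_minutes(16000, 0.8, 38.4))   # ~20 min, matches what I see
print(hover_time_minutes(32000, 0.8, 38.4))   # ~40 min, but only if the extra pack were weightless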
I have recently finished the coursera Motion Planning course and I am looking for a project to do, using ROS, OpenRave, Gazebo and similar tools. My project would be in the area of motion planning for a mobile robot. The objective is to take a turtlebot from point A to point B in an environment with dynamic obstacles. I am looking for suggestions regarding the software architecture and how to connect the above mentioned tools together.
I'm a beginner in robotics and I'm planning to do a basic navigation problem for a differential drive robot. I know the concepts involved and can code it on an Arduino, but I have come to know about ROS and other robotics toolboxes. How are they useful if I want to improve my understanding of the concepts from scratch?
I read in this set of slides on p.11 that the IMU measurements, such as acceleration are affected by bias and noise, as expressed in this equation: $_B \mathbf {\tilde a} _{WB}(t) =\ \mathbf R_{BW}(t)(_W \mathbf a_{WB}(t) -\ _W \mathbf g) + \mathbf b^a(t) + \mathbf n^a(t)$ where $b^a(t)$ is the bias of the accelerometer, $n^a(t)$ is the noise of the accelerometer, ${\tilde a}(t)$ is the measured acceleration, and a(t) is the true acceleration. As for notations: Left subscript denotes the reference frame in which the quantity is expressed. Right subscript {Q}{Frame1}{Frame2} denotes Q of Frame2 with respect to Frame1. Last of all, noises and bias are all in the body frame. What I would like to ask is what does the $R$ term in the equation mean and do? I'm guessing it is a rotation, but why is it needed? Sorry if this question seems trivial, but I'm new to all this and my physics is terrible... Thank you!
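To check my reading of the equation, this is how I would compute a simulated measurement (a sketch with made-up numbers; here I take $R_{BW}$ to be the rotation from the world frame into the body frame):

import numpy as np

g_W = np.array([0.0, 0.0, -9.81])    # gravity in the world frame
a_W = np.array([1.0, 0.0, 0.0])      # true acceleration of the body, world frame
R_BW = np.eye(3)                      # world-to-body rotation (identity = level, axes aligned)
bias = np.array([0.02, -0.01, 0.03])
noise = np.zeros(3)                   # or np.random.normal(0, sigma, 3)

a_meas_B = R_BW @ (a_W - g_W) + bias + noise
print(a_meas_B)   # with the body level and accelerating in x: [1.02, -0.01, 9.84]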
I used an EKF to estimate the quaternion and extract the roll, pitch and yaw angles. The sensors used are a gyro and an accelerometer. The roll and pitch angles look correct, but the yaw angle does not. Is it true that, in this case, I can't get the correct yaw angle? If I add a magnetic sensor and estimate the state using the EKF, could I get a proper yaw angle?
I'm dealing with a problem where an object I'm tracking using RF signals can sometimes appear to be somewhere it's not, due to the signals bouncing. Currently what I have in place is a circular buffer that keeps the past 200 timestamps, each correlated with the zone in which the object was seen. Once it sees that the object has appeared in two different zones, it calculates the average time it was seen in each zone and returns the zone with the greater time. This works pretty well under ideal conditions, but sometimes the zones are in a tight cluster, so it is possible to see the object in 3 or more zones at once. I understand this isn't exactly an easy problem to solve and I'm not looking for an answer on exactly what to do. I'm not sure it's even possible to get 100 percent correct results, but maybe some advice to improve what I have.
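Here is roughly what my current approach looks like, generalised to any number of zones (a sketch of my own scheme, not a known algorithm; it is a simplified variant that votes by count within the window rather than by average timestamp, and the 200-sample window is just what I happen to use):

from collections import deque, Counter

class ZoneFilter:
    def __init__(self, window=200):
        self.history = deque(maxlen=window)   # (timestamp, zone) pairs

    def add_sighting(self, timestamp, zone):
        self.history.append((timestamp, zone))

    def best_zone(self):
        # pick the zone that accounts for the most sightings in the window
        counts = Counter(zone for _, zone in self.history)
        return counts.most_common(1)[0][0] if counts else None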
I've been working on a module that takes in planar poses $\begin{bmatrix} x_{t_{k}} & y_{t_{k}} & \theta_{t_{k}}\end{bmatrix}^{T}$ and spits out expected robot states $\begin{bmatrix} x_{t_{k}} & y_{t_{k}} & \theta_{t_{k}} & \dot{x}_{t_{k}} & \dot{y}_{t_{k}} & \dot{\theta}_{t_{k}} \end{bmatrix}^{T}$. Essentially, I'm giving the velocity I expect the robot to reach at each pose. I've been looking for the proper terminology for a module like this, both so I can write more readable code and so that I can do a better literature search. What would this kind of thing be called? Thanks!
I am working on a project which involves multiple quadcopters operating about 5 km away from the ground control station. The communication is planned to be done using RFD868+ modules (from jDrones) on each of the drones. When these quadcopters are in motion, will there be packet loss in the telemetry commands sent to the individual quadcopters due to the Doppler effect? I believe there should not be any issue, as the velocity of the quadcopters is minuscule compared to the speed of light. Am I right? Also, when multiple quadcopters are trying to communicate using this system, is there any alternative to CSMA/CD? (It is in a mesh topology.)
I wonder whether the Extended Kalman Filter (EKF) is used in robotics, or whether only the linear Kalman Filter (KF) is used. The Kalman Filter is part of Linear Quadratic Gaussian (LQG) controllers. But how does the EKF work in practice? I know how to build an Extended Kalman Filter by linearizing the mathematical model about the estimated state vector. What is your experience with the EKF?
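To be concrete, this is the structure I have in mind when I say "linearizing about the estimated state" (a generic sketch; f, h and the Jacobians F_jac, H_jac would come from the actual robot model):

import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    # predict: propagate the state through the nonlinear model,
    # and the covariance through the Jacobian evaluated at the current estimate
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # update: linearize the measurement model at the predicted state
    H = H_jac(x_pred)
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new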
I have been trying to extract the rate random walk parameter from the datasheet, but I couldn't find any clue. Is there any way to do that? How should it be calculated? Thanks!
I am working on a robot project in which I use an Arduino Mega as the main processing unit, an L298N H-bridge module as the motor driver, and motors like these in a differential drive configuration. I have a problem with this robot, which is simply that the open-loop behavior is not consistent. For example, if I give the left and right motors the same PWM input, making sure that both motors have the same effective input (measured with a voltmeter), sometimes the robot goes straight ahead and sometimes it veers to the right. This is a problem for me because I am trying to build a model of the robot's behavior (knowing the trajectory given the motor input voltages), and at some point I have to estimate the robot parameters (motor constants, robot inertia and other properties) using the parameter estimation toolbox in Simulink. Having inconsistent behavior like this causes the estimated parameters to be strange and unrealistic. Has anybody faced such a problem? What might the potential causes be: unreliable hardware, battery problems, terrain problems (or interaction with the ground)? Thanks in advance.
I'm working on hand-eye calibration for a robotic arm. I attached the camera near the tip of the robot (the end effector) and took around 40 pictures of an asymmetric circles pattern. I implemented the basic hand-eye calibration code (at the bottom) based on papers such as this (http://people.csail.mit.edu/tieu/stuff/Tsai.pdf). I referred to similar questions such as Hand Eye Calibration and Hand Eye Calibration Solver. The implementation is almost the same as this (http://lazax.com/www.cs.columbia.edu/~laza/html/Stewart/matlab/handEye.m). When I ran my code, I noticed that the quality of the result was very bad, especially in the rotation matrix. I judged the quality using R squared (the coefficient of determination). R2 for the rotation matrix after the least-squares regression was always around 0.2~0.3 (for the translation vector, R2 was around 0.6). When I compared the returned homogeneous matrix with the ground truth (which I measured and calculated carefully by hand), they were very different, as shown below. The rotation around the z axis should be around -90 degrees, but the output was around half of that:

The output from the code
Homogeneous matrix:
[[  0.6628663   0.74871869  0.02212943  44.34775423]
 [ -0.74841069  0.66234872  0.02672996 -21.83390692]
 [  0.00534759 -0.0342871   0.99939772  39.74953567]
 [  0.          0.          0.           1.        ]]
Rotation in radians (rz, ry, rx): (-0.8461431892931816, -0.005347615473812833, -0.03429431203996466)

Ground truth
Homogeneous matrix:
[[ -0.01881762  0.9997821  -0.00903642 -70.90496041]
 [ -0.99782701 -0.01820849  0.06332229 -19.55120885]
 [  0.06314395  0.01020836  0.99795222  60.04617152]
 [  0.          0.          0.           1.        ]]
Rotation in radians (rz, ry, rx): (-1.5896526911533568, -0.06318598618916291, 0.01022895059953901)

My questions are:
1. Is it common to get poor results from the vanilla Tsai method?
2. If yes, how can I improve the result? If not, where did I make a mistake?
Here is the code I used:

import numpy as np
from transforms3d.axangles import mat2axangle, axangle2mat


# Small helpers (definitions added here for completeness; they just slice the
# homogeneous matrices and build the usual skew-symmetric cross-product matrix).
def extract_rotation(H):
    return H[:3, :3]


def extract_translation(H):
    return H[:3, 3]


def find_skew_matrix(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])


def find_hand_to_camera_transform(list_of_cHo, list_of_bHe):
    """
    :param list_of_cHo: List of homogeneous matrices from Camera frame to Object frame
                        (calibration pattern such as a chessboard or asymmetric circles)
    :param list_of_bHe: List of homogeneous matrices from robot's Base frame to End effector (hand) frame
    :return: eHc: Homogeneous matrix from End effector frame to Camera frame

    Notation:
    - H: 4x4 homogeneous matrix
    - R: 3x3 rotation matrix
    - T: 3x1 translation matrix (vector)
    - P: Axis vector for Axis-Angle representation
    - TH: Angle (theta) for Axis-Angle representation
    """
    num_of_poses = len(list_of_bHe)

    # Calculate rotational component
    lhs = []
    rhs = []
    for i in range(num_of_poses):
        bRei = extract_rotation(list_of_bHe[i])
        ciRo = extract_rotation(list_of_cHo[i])
        for j in range(i + 1, num_of_poses):  # don't use the same pair twice
            bRej = extract_rotation(list_of_bHe[j])
            cjRo = extract_rotation(list_of_cHo[j])
            eiRej = np.dot(bRei.T, bRej)   # Rotation from i to j
            ciRcj = np.dot(ciRo, cjRo.T)   # Rotation from i to j
            eiPej, eiTHej = mat2axangle(eiRej)  # Note: mat2axangle returns a normalized axis (norm = 1.0)
            ciPcj, ciTHcj = mat2axangle(ciRcj)
            lhs.append(find_skew_matrix(eiPej + ciPcj))
            rhs.append(ciPcj - eiPej)

    lhs = np.array(lhs)
    lhs = lhs.reshape(lhs.shape[0] * 3, 3)
    rhs = np.array(rhs)
    rhs = rhs.reshape(rhs.shape[0] * 3)

    cPe_, res, _, _ = np.linalg.lstsq(lhs, rhs)
    r2_rot = 1 - res / (rhs.size * rhs.var())

    cTHe = 2 * np.arctan(np.linalg.norm(cPe_))
    cPe = 2 * cPe_ / np.sqrt(1 + np.dot(cPe_.reshape(3), cPe_.reshape(3)))
    cPe = cPe / np.linalg.norm(cPe)
    cRe = axangle2mat(cPe, cTHe, is_normalized=True)
    eRc = cRe.T

    # Calculate translational component
    lhs = []
    rhs = []
    for i in range(num_of_poses):
        bRei = extract_rotation(list_of_bHe[i])
        bTei = extract_translation(list_of_bHe[i])
        ciTo = extract_translation(list_of_cHo[i])
        for j in range(i + 1, num_of_poses):  # don't use the same pair twice
            bRej = extract_rotation(list_of_bHe[j])
            bTej = extract_translation(list_of_bHe[j])
            cjTo = extract_translation(list_of_cHo[j])
            eiRej = np.dot(bRei.T, bRej)
            eiTej = np.dot(bRei.T, bTei - bTej)
            lhs.append(eiRej - np.eye(3))
            rhs.append(np.dot(eRc, ciTo) - np.dot(np.dot(eiRej, eRc), cjTo) + eiTej)

    lhs = np.array(lhs)
    lhs = lhs.reshape(lhs.shape[0] * 3, 3)
    rhs = np.array(rhs)
    rhs = rhs.reshape(rhs.shape[0] * 3)

    eTc, res, _, _ = np.linalg.lstsq(lhs, rhs)
    r2_trans = 1 - res / (rhs.size * rhs.var())

    eHc = np.eye(4)
    eHc[:3, :3] = eRc
    eHc[0:3, 3] = eTc
    return eHc
I am working on a firefighting robot and I am struggling to find a way to search all of the rooms in an arena like this. I thought of using the wall-follower rule, but if I am understanding it correctly, it will miss a room.
Well, I'm trying to model a gyroscope (an MPU-6050) and I am stuck on getting the misalignment and scale factor parameters from its datasheet. There is another problem: the names in each datasheet differ from the others. My question is: how do I get those parameters? Thanks!
I have a Raspberry Pi 1 Model A and am interested in attaching a buzzer to its GPIO pins so that the software (running on the pi) can turn the buzzer on/off by sending signals to the GPIO pins. I'm new to electronics so I'm looking for the simplest setup possible. I watched this Youtube video where the author just plops a buzzer down into a breadboard and uses jumper wires to connect the breadboard/buzzer to the pi's GPIO pins. I'd like a similar (simple!) setup. I'm wondering what the make/model/specs are for that buzzer so that I can buy the same one and attach it to my pi/breadboard the exact same way (or if there's a simpler way out there, I'm open to that as well!). Any ideas what the voltage/amperage ratings would need to be so as to be compatible with the pi (without the need for additional things like circuit drivers, resistors, transistors, etc.)? Remember, I'm a total newb here and simpler == better! Thanks in advance!
I'm new, please be gentle :-) I'd like to control the position of a single-axis joint using one cable for flexion and another for extension. In many of the anthropomorphic designs I've seen each of these cables are controlled by a separate servos, and I'm wondering why. I'd think the simplest approach would be to use a single servo, and wrap each cable around its spool in opposite directions. Is there a problem with this approach? (if not, I assume the dual servo design is to control not only the position of the joint, but its stiffness/rigidity?)
From any robot spec sheet: I am assuming that the range of motion given for the J3 or 'U' axis is specified as a function of the position of the lower J2 axis. Correct? How do I find the absolute range of motion from the spec sheet? That is, what are the maximum positive and negative angles to which J3 can rotate? Because if I were to rotate using the range given for J2, the arm would collide with the base.
I've come across the abbreviation SE several times recently. I know it has to do with the pose of the robot, and the degrees of freedom. Most recently I found it on page 8 of this paper: D. Kragic and H. I. Christensen, “Survey on visual servoing for manipulation,” Royal Institute of Technology (KTH), Stockholm, Sweden, Tech. Rep. ISRN KTH/NA/P-02/01-SE, CVAP259, 2002.
Does computer vision intentionally mimic the vision of a human, or is it just coincidental that a good computer vision system (with convolutional neural networks, for example) resembles some properties of the human visual apparatus?
I am trying to do hand-eye calibration between a robotic arm's end effector and a Kinect V1. I am using the handeye_calib_camodocal repository for the calibration and trying to find $X$ by solving the equation: $$AX=XB$$

I have the following 82 transform pairs:

I have the following support/holder between the Kinect and the robotic arm:

But I am getting the following translation in meters, which is wrong because the camera cannot be at such a distance from the end effector:

Translation: 0.690257 0.102063 0.459878

I have the following launch file:

<launch>
  <!-- TF names-->
  <arg name="EETF" default="/tool0_controller" />
  <arg name="baseTF" default="/base_link" />
  <arg name="ARTagTF" default="/chess" />
  <arg name="cameraTF" default="/camera_rgb_frame" />
  <!--<arg name="data_folder" default="$(find hand_eye_calib)/launch" />-->
  <arg name="data_folder" default="$(find hand_eye_calib)" />
  <arg name="filename" default="TransformPairs_HEcalibInput.yml" />
  <arg name="calibrated_filename" default="CalibratedTransform_HEcalibOutput.yml" />

  <!-- running handeye_calib_camodocal node-->
  <node pkg="hand_eye_calib" type="hand_eye_calib" name="hand_eye_calib" output="screen">
    <!-- hand_eye_calib arg pass -->
    <param name="ARTagTF" type="str" value="$(arg ARTagTF)" />
    <param name="cameraTF" type="str" value="$(arg cameraTF)" />
    <param name="EETF" type="str" value="$(arg EETF)" />
    <param name="baseTF" type="str" value="$(arg baseTF)" />
    <param name="load_transforms_from_file" type="bool" value="false"/>
    <param name="transform_pairs_record_filename" type="str" value="$(arg data_folder)/$(arg filename)" />
    <!-- <param name="transform_pairs_load_filename" type="str" value="$(arg data_folder)/$(arg filename)" /> -->
    <param name="output_calibrated_transform_filename" type="str" value="$(arg data_folder)/$(arg calibrated_filename)" />
  </node>

  <!-- running camera_to_ros node-->
  <!-- node pkg="camera_to_ros" type="camera_to_ros" name="camera_to_ros" output="screen"/-->

  <!-- running extrinsic_calibration node-->
  <node pkg="camera_calibration" type="extrinsic_calibration" name="extrinsic_calibration" output="screen"/>

  <!-- running DK node-->
</launch>
Not a robotics question in the strictest sense, I guess, but related closely enough, I hope: I have an arm-like articulated two(-plus-one)-joint appliance that I want to use as a 3D input device. It uses an angle measurement device and two IMUs, which are placed as depicted schematically below: The blue boxes depict the positioning and orientation of the IMUs (the arrows point to the IMU's relative "forward" direction). The blue dot/highlighted angle represent the angle being measured. The base can rotate around its centre, counting as the third joint, technically, but shouldn't be of too much relevance here. The arm joint rooted at the base has two rotational degrees of freedom, indicated by the red-ish arrows (it can't rotate around the base's up-direction). The unfilled rectangle represents the data I'd like to infer from the other measurements. edit: I cannot post more accurate schematics here, but if you want to visualise this apparatus a bit better, think of a Geomagic Phantom, except that the lower arm joint is not rotational, but is built more like a classic analog joystick. Note: The positioning of the sensors, especially of the IMUs at the base and on the second arm joint are fixed, so please do not suggest changing these (I can't). I'm now wondering how to compute the orientation of the middle link from the data I have: both IMUs return quaternions $q_0$ and $q_2$, respectively (relative to the magnetic north, measured from their relative forward-direction). My representation for the relative rotation between the two arm links is a quaternion ($q_a$) as well (even though it could just as well be directly represented as an angle, but since I'm performing quaternion math anyways, I might as well have it in this form, too). I'm pretty sure that there must be some way to basically compute what an IMU on the middle link (let's call it $q_1$) would measure from the data I have, but I'm not quite sure about my maths here... My intuition was to compute $q_1 = q_0^{-1} * q_2 * q_a^{-1}$, following from the assumed identity $q_2 = q_0 * q_1 * q_a$, but that doesn't seem to hold. As I feared that the rotation of $q_2$ relative to the joint it resides on influences the computation result, I also computed $q_1 = q_0^{-1} * q_2 * q_{z\pi} * q_a^{-1}$, where $q_{z\pi}$ represents a 90° rotation around the up(=z)-axis. However, my measurements still seem off when I visualise the movements (the $q_1$ movement seems exaggerated compared to the actually induced movement). What else might I miss here? Is my math faulty, or is it possibly only an implementation mistake I made? EDIT2: I found that one major flaw of my maths was the lack of calibration. Adding a calibration pose, I was able to compute the relative orientation between the two IMUs in both the visual model and the actual device and go from there. However, to compute the lower link's orientation, I still rely on an equation like $q_1 = q^*_2 * q_a^{-1}$, with $q^*_2$ being the quaternion that rotates from the relative orientation between $q_0$ and $q_2$ in the calibration pose towards their current relative orientation. I'm still not quite sure if that equation is fully appropriate, but it appears to work okay so far.
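In code, the candidate computation I described looks like this (a sketch using transforms3d, which assumes quaternions in w-x-y-z order; q0 and q2 are the IMU quaternions and qa is the joint quaternion, and the calibration step from EDIT2 is not shown):

import numpy as np
from transforms3d.quaternions import qmult, qinverse

def estimate_middle_link(q0, q2, qa):
    # my assumed identity: q2 = q0 * q1 * qa  =>  q1 = q0^-1 * q2 * qa^-1
    return qmult(qmult(qinverse(q0), q2), qinverse(qa))

# trivial check with identity rotations: the middle link should also be identity
q_id = np.array([1.0, 0.0, 0.0, 0.0])
print(estimate_middle_link(q_id, q_id, q_id))   # -> [1, 0, 0, 0]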
I am trying to setup a two way communication with the Baxter Robot using the Robotics System Toolbox from MATLAB. However, I am unable to move the arms or record a trajectory using the rosactionclient command. Is there a method to solve this problem? The examples in the documentation provided by MATLAB uses rosactionclient for the PR2 arm, but is it the same for Baxter as well ?
I'm reading Mechanics of Robotic Manipulation by Matthew T. Mason and I stumbled upon the concept of kinematic constraints. The book mentions two types of constraints: holonomic and nonholonomic. The following is one example of a holonomic constraint mentioned in the book: a rectangular block sliding in a channel, with free variations in the coordinates x, y and theta. The channel imposes a constraint so that the rectangle's y value is fixed. The holonomic constraint is given below. The following is an example of a nonholonomic constraint: suppose that we add a wheel to the block, so it behaves like a unicycle or an ice skate. At any given point in time, the block can move forward and backward, and it can rotate about the wheel center, but it cannot move sideways. The nonholonomic constraint equation is also given below. The author also writes that: ".. it is evident that each independent holonomic constraint reduces the degrees of freedom of the system by one, but a nonholonomic constraint does not." All of the explanations so far seem to contradict my understanding of the physical meaning of holonomic robot movement: I've always thought that a holonomic robot means that it can move in all directions, and hence the total number of controllable degrees of freedom is equal to the total degrees of freedom. But how is that possible when every holonomic constraint reduces the degrees of freedom of the system? And how come a nonholonomic constraint doesn't reduce the degrees of freedom of the system? Isn't the unicycle constraining the block from moving sideways? Why are the degrees of freedom of the system not considered to be reduced then? I hope someone can help to clarify my understanding.
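For reference, the two constraint equations I am referring to, as I understand them (these are the standard forms; the book's figures may use slightly different notation):

$$y = c \qquad \text{(holonomic: the channel fixes the block's } y \text{ value)}$$

$$\dot{x}\sin\theta - \dot{y}\cos\theta = 0 \qquad \text{(nonholonomic: the wheel forbids sideways velocity)}$$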
Hi, I want to control the speed of a conveyor belt. I can measure the linear velocity of the belt using an encoder mounted on a roller of the belt. My confusion is not with the PID itself but only with the mapping of the PID output to PWM for the motor. The PID error is in terms of the belt speed in metres per second (m/s). The P, I and D parts are summed to give a control signal. The motor interprets 0 as 0% PWM and 16393 as 100% PWM. A value in the range 0 to 16393 is serially communicated to the motor, and I want this value to come from the PID. My question is: how do I map the PID output, based on an error in m/s, to PWM for the motor? Example: I am using just PI and ignoring D for now to avoid noise amplification issues. The maximum speed of the belt, when 16393 is sent to it, is 0.5 m/s. I set the desired speed to 0.2 m/s. At the start, error = 0.2 m/s. Let's say Kp = 4, Ki = 2. output = Kp * error + Ki * sum_of_errors * sampling_time. I would get a PID output, but it would be very small compared to the 16393 needed for 100% PWM on the motor. I need 100% PWM to power the motor at the start and then gradually decrease the PWM as the belt's speed goes from 0 m/s to 0.2 m/s. How can I map this small PID output to the 16393 needed for the motor's 100% PWM?
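For concreteness, this is the kind of mapping I have been considering (a sketch, not a working tuning; it simply interprets the controller output as a requested belt speed and scales it linearly onto the motor's command range):

MAX_SPEED = 0.5        # m/s at 100% PWM
MAX_COMMAND = 16393    # value the motor interprets as 100% PWM

def pid_to_command(pid_output_mps):
    # scale the requested speed in m/s onto the 0..16393 command range, then clamp
    command = pid_output_mps / MAX_SPEED * MAX_COMMAND
    return int(min(max(command, 0), MAX_COMMAND))

print(pid_to_command(0.2))   # ~6557, i.e. about 40% PWM for 0.2 m/s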
I am trying to implement a scan matcher using a scan-based sensor model, but I can't figure out how to estimate the likelihood of a particular scan. Is there any implementation available? I would be thankful for any help.
I understand that choosing a microcontroller is all based on one's needs and that there is no perfect MCU. At present I have to select an MCU for a robot my team is building for the 2018 ABU robotics competition, and I want to know how the specs matter when selecting one: for example, what clock speeds are ideal for which applications, and so on.
I made an autonomous robot with 3 ultrasonic sensors. I want to reduce the noise between the 3 sensors and make the robot gradually slow down when it approaches an obstacle. My code is below; please help me with this. Distance 1 is the front sensor, Distance 2 is the left sensor, and Distance 3 is the right sensor. If you can suggest any improvements to this code, I would be grateful.

// CODE FOR AUTONOMOUS ROVER WITH 3 ULTRASONIC SENSORS
/* Written by Faraz Hassan
   Autonomous rover obstacle avoidance */

// include the NewPing and Adafruit Motor Shield libraries
#include <NewPing.h>
#include <AFMotor.h>

AF_DCMotor motor1(1);
AF_DCMotor motor2(2);
AF_DCMotor motor3(3);
AF_DCMotor motor4(4);

int trigger_c = A0; // Controls the pulse sent from the right sensor
int echo_c = A1;    // Controls the pulse received from the right sensor
int trigger_a = A2; // Controls the pulse sent from the left sensor
int echo_a = A3;    // Controls the pulse received from the left sensor
int trigger_b = A4; // Controls the pulse sent from the front sensor
int echo_b = A5;    // Controls the pulse received from the front sensor
int tp = 250;       // delay

long duration_b, duration_a, duration_c, distance1, distance2, distance3;

void setup() {
  // set all the motor control pins to outputs
  Serial.begin(9600);
  pinMode(trigger_b, OUTPUT); // Arduino signal output to trigger_front
  pinMode(echo_b, INPUT);     // Arduino signal input from echo_front
  pinMode(trigger_a, OUTPUT); // Arduino signal output to trigger_left
  pinMode(echo_a, INPUT);     // Arduino signal input from echo_left
  pinMode(trigger_c, OUTPUT); // Arduino signal output to trigger_right
  pinMode(echo_c, INPUT);     // Arduino signal input from echo_right
}

void loop() {
  // Find distance, front sonar
  digitalWrite(trigger_b, LOW); // 2 µs LOW to make sure the trigger is off at the start of the loop
  delayMicroseconds(2);
  digitalWrite(trigger_b, HIGH); // 10 µs HIGH starts the burst of eight 40 kHz ultrasonic pulses
  delayMicroseconds(10);
  digitalWrite(trigger_b, LOW);
  duration_b = pulseIn(echo_b, HIGH); // echo HIGH time = round-trip travel time in µs
  distance1 = duration_b * 0.034 / 2; // distance (cm) = duration * speed of sound (0.034 cm/µs) / 2 for the round trip

  // Print the distances on the Serial Monitor
  Serial.print("front: ");
  Serial.print(distance1);
  Serial.print(" ");
  Serial.print("left: ");
  Serial.print(distance2);
  Serial.print(" ");
  Serial.print("right: ");
  Serial.print(distance3);
  Serial.println(" ");

  // Find distance, left sonar
  digitalWrite(trigger_a, LOW);
  delayMicroseconds(2);
  digitalWrite(trigger_a, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigger_a, LOW);
  duration_a = pulseIn(echo_a, HIGH);
  distance2 = duration_a * 0.034 / 2;

  // Find distance, right sonar
  digitalWrite(trigger_c, LOW);
  delayMicroseconds(2);
  digitalWrite(trigger_c, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigger_c, LOW);
  duration_c = pulseIn(echo_c, HIGH);
  distance3 = duration_c * 0.034 / 2;

  motor1.setSpeed(255);
  motor2.setSpeed(255);
  motor3.setSpeed(255);
  motor4.setSpeed(255);
  motor1.run(FORWARD);
  motor2.run(FORWARD);
  motor3.run(FORWARD);
  motor4.run(FORWARD);

  if (Serial.available() > 0) // Check if serial data is available
  {
    int input = Serial.read(); // Store the value of the read data
    // switch (input)
    // {
    //   case 'o': forward();     // Call the respective function if matched with the case label
    //             break;
    //   case 'x': stopMotors();
    //             break;
    //   default:  break;
    // }
  }

  if (distance1 < 28 || distance2 < 28 || distance3 < 28) // DISTANCE1 FRONT, DISTANCE2 LEFT, DISTANCE3 RIGHT
  {
    stopMotors();
    delay(1000);
    if (distance2 < distance3) // LEFT SENSOR DISTANCE < RIGHT SENSOR DISTANCE
    {
      stopMotors();
      delay(1000);
      turnRight();
      delay(700);
      revers();
      delay(700);
    }
    if (distance2 > distance3) // LEFT SENSOR DISTANCE > RIGHT SENSOR DISTANCE
    {
      stopMotors();
      delay(1000);
      turnLeft();
      delay(700);
      revers();
      delay(700);
    }
    if (distance2 == distance3) // LEFT SENSOR DISTANCE == RIGHT SENSOR DISTANCE
    {
      revers();
      delay(1000);
    }
  }
  if (distance1 > 28) forward(); // FRONT SENSOR DISTANCE > 28
}

void revers() {
  Serial.println("<Mars Rover> Backward");
  motor1.run(BACKWARD);
  motor2.run(BACKWARD);
  motor3.run(BACKWARD);
  motor4.run(BACKWARD);
  motor1.setSpeed(190);
  motor2.setSpeed(190);
  motor3.setSpeed(190);
  motor4.setSpeed(190);
  delay(tp);
}

void forward() {
  Serial.println("<Mars Rover> Forward");
  motor1.setSpeed(255);
  motor2.setSpeed(255);
  motor3.setSpeed(255);
  motor4.setSpeed(255);
  motor1.run(FORWARD);
  motor2.run(FORWARD);
  motor3.run(FORWARD);
  motor4.run(FORWARD);
  delay(tp);
}

void turnRight() {
  Serial.println("<Mars Rover> Right");
  motor1.setSpeed(255);
  motor2.setSpeed(255);
  motor3.setSpeed(255);
  motor4.setSpeed(255);
  motor1.run(FORWARD);
  motor2.run(FORWARD);
  motor3.run(BACKWARD);
  motor4.run(BACKWARD);
  // Turn period
  delay(tp);
}

void turnLeft() {
  Serial.println("<Mars Rover> Left");
  motor1.setSpeed(255);
  motor2.setSpeed(255);
  motor3.setSpeed(255);
  motor4.setSpeed(255);
  motor1.run(BACKWARD);
  motor2.run(BACKWARD);
  motor3.run(FORWARD);
  motor4.run(FORWARD);
  // Turn period
  delay(tp);
}

void stopMotors() {
  // Stop motors
  Serial.println("<Mars Rover> Stop");
  motor1.setSpeed(0);
  motor2.setSpeed(0);
  motor3.setSpeed(0);
  motor4.setSpeed(0);
  delay(tp);
}
Please note: Although this question tangentially involves a Raspberry Pi, this is really more just a pure robotics question at heart. I'm trying to connect my Raspberry Pi 1 Model A to this buzzer. I've seen wiring diagrams of this in action, and believe I need to put a resistor in between the buzzer and the (3.3V) output GPIO pin. If this is incorrect, please begin by correcting me! My question is: how do I calculate the required ohms of this resistor? This particular buzzer is rated at 6VDC.