I'm trying to implement the complementary filter to get Euler angles using accelerometer and gyroscope data. Attached is the MATLAB code that I have, along with a data set. The data corresponds to moving the sensor from 0-90 degrees while attached to a goniometer. The sensor has an inbuilt algorithm that outputs Euler angles too, and I'm trying to test the accuracy of this algorithm as it tends to overshoot the angle estimates. The problem with the complementary filter is that the angles move between (around) negative 40 and positive 40 degrees instead of changing between (around) 0-90 degrees. Can anyone please point out what is wrong and why the complementary filter isn't working well?

    clc; clear all; close all;
    M=importdata('Multiple_Sensors_747000.csv');
    A=M.data;
    [m n]=size(A);
    a=1;
    t=m/60;
    angle=0;
    for i=1:3723
        Acc(a,:)=M.data(i,6:8);                           % Reading accelerometer data
        R_norm(a)=sqrt(Acc(a,1)^2+Acc(a,2)^2+Acc(a,3)^2); % Norm of the accelerometer vector
        Racc_norm1(a,1)=acos(Acc(a,1)/R_norm(a));         % Angle from accelerometer in X
        Racc_norm1(a,2)=acos(Acc(a,2)/R_norm(a));         % Angle from accelerometer in Y
        Racc_norm1(a,3)=acos(Acc(a,3)/R_norm(a));         % Angle from accelerometer in Z
        Gyroscope(a,:)=M.data(i,3:5);                     % Reading gyroscope data
        gyro(a,:)=Gyroscope(a,:)*(1/t);                   % Integrating gyroscope data
        r(a,1)=sum(gyro(:,1));                            % Angle from gyroscope data
        r(a,2)=sum(gyro(:,2));
        r(a,3)=sum(gyro(:,3));
        angle=0.99*(angle+(gyro(a,:)))+(0.01*Racc_norm1(a,:)); % Complementary filter equation
        ang(a,:)=angle;
        a=a+1;
    end
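For reference, this is the standard form of the filter I am trying to reproduce; a minimal Python sketch (the sample period, the gyro rate in deg/s and the accelerometer tilt angle in degrees are assumed inputs, not values taken from my data file):

    def complementary_update(angle, gyro_rate, acc_angle, dt, alpha=0.98):
        # angle     : previous fused angle estimate (degrees)
        # gyro_rate : angular rate from the gyro (degrees/second)
        # acc_angle : tilt angle computed from the accelerometer (degrees)
        # dt        : sample period (seconds)
        # alpha     : weight on the integrated gyro estimate
        return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * acc_angle

    # Example: 100 Hz data, sensor held near 30 degrees
    angle = 0.0
    for _ in range(200):
        angle = complementary_update(angle, gyro_rate=0.5, acc_angle=30.0, dt=0.01)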
Multi-robot task decomposition implies that there is a mission that needs multiple mobile robots, like guarding the president's car with a team of drones. This high-level layman interpretation of the mission has to be broken down into algorithmic language. Basically, the missions have to be decomposed into atomic tasks which robots can be instructed to do in terms of their existing abilities and intelligence. Here are some papers I found on this problem: [1] [2] Can someone point me to some seminal papers that tackle the algorithmic aspects of this problem? Pointers to open-source code in any language would also be appreciated. Note that there is a closely related problem of multi-robot task allocation, which I am not referring to.
I would like to follow the instructions here to allow myself to control the Create 2 with a Raspberry Pi 3. However, this source says that I need a level shifter to protect my Pi's circuitry from the Create's serial logic signal. I'm fairly certain that I don't need the level shifter, since it looks like it's used mainly to integrate the camera with the Create 2. But I am not in a rush to burn out either of these devices, so I'd like some verification.
To make a two-degree-of-freedom arm capable of the speeds needed to play air hockey, how should the arm be controlled? I'm wondering about speed vs accuracy. The following image is from a project involving a 2DOF drawing arm: For this air hockey project, the general mechanics of that arm seem appropriate. It can move the "hand" around a 2D plane. I also like how both motors are at the base of the arm, instead of having the 2nd motor at the elbow (which adds weight to the arm and slows it). However, the drawing robot is slow. Steppers: I understand that in general, steppers are 'slow'. But steppers have accurate positioning, which is very important in a game of air hockey. I also know that steppers can be geared up, but then they lose torque. The steppers would need to handle the inertia of quick, back-and-forth arm movement. Hobby servos: I've used small servos for other projects, but the accuracy was poor. Most don't have a way to read the angle externally, and those that do have noisy signals. I'm not sure how strong hobby servos are, and whether they can be accurate enough for this project. I understand that using digital servos improves dead-band issues. DC motors with external feedback: The only other method I can think of for controlling the arm would be to use DC motors with sensors such as rotary encoders. I think I would need an absolute rotary encoder, but they seem to be around $50-$1000, which is a bit much for a solution I'm not sure will even work out. Perhaps there are cheaper solutions to motor angle measurement. I could just use a potentiometer, but then I'm worried about noise again. It's worth noting that I don't know of any easy or affordable way to design my own drivetrain. All I have is a drill, and I don't know how I would mount shafts/bearings and such, even if the gears themselves were affordable. This means that if I need to gear up or down, I don't think I can unless it's cheap and involves simple tools. So for the arm: DC motors with external feedback, servos, steppers, something else? Which method would be the best in terms of speed, and which for accuracy? I'm wondering which would cost less as well, but I understand that is a grey area. I'm leaning towards servos out of simplicity. I may try digital servos with ball bearings, in the hope that they will move quickly enough, but be strong enough to handle the inertia of the arm. (Note that a 2DOF arm is desired, and not something else like a 3D-printer x-y belt system.)
Doing some research on robotic arms and thinking about getting one, but I was wondering where I can get info on the pros/cons of each one and how they compare to each other. Another important factor is which one has the largest community following.
I am working on robot localization and navigation in urban environments. I want to use a camera, but I am a little bit confused about LRF (laser range finder) data or other laser data. Why do people want to use a camera? Why not an LRF or other laser data? Can anyone please explain the case in favor of the camera?
I'm in the final phase of my BeerBot* project, and I'm looking for the best way to power it. BeerBot will spend most of his life sitting in a docking station, keeping his battery charged and running off of wall power. When called upon, he needs to be able to disconnect from the docking station and switch to battery power without interrupting power to his brain. Upon returning to his docking station, he needs to switch back to wall power again. I have a NiMH charge circuit, and I have a circuit for powering the robot from a wall outlet; I'm just not sure how to switch from one to the other without interrupting power. For my testing thus far, I've just been shutting the robot down and manually switching between wall power and battery power, and connecting the battery to an external charger as needed. This is obviously not a permanent solution, since BeerBot needs to be always at the ready; shutting him down to switch over to battery power every time I need a beer would defeat the whole purpose of BeerBot*. So my question is: what is the best way to keep a robot powered from the wall, simultaneously charging the battery, and switching between wall power and battery power automatically when disconnected? I guess this is analogous to the way a laptop works - switching to battery when unplugged and keeping the battery charged while plugged in. I don't care if it's a ready-made solution or a circuit I can build, I'm just looking for a good way to do this without frying my robot or setting my house on fire. I'm mostly a software guy, so I know just enough about circuit design to be dangerous (and I've fried enough electronics to know that power supplies can be dangerous...). I've been looking at power management ICs, but I'm thinking that might be overkill since I already have a charging circuit and a wall-power circuit; I just need a way to switch between them. Can I just connect both and use a couple of diodes to keep current from flowing in directions it shouldn't, like this? Or do I need something more complex? *If you're curious, BeerBot is a semi-autonomous beer-seeking robot. He sits quietly in the corner listening for commands. Currently his only command comes from a modified Easy Button on my desk, which sends a wireless signal to BeerBot. When he hears the signal, he wakes up, drives autonomously to the refrigerator, opens the door with his robotic arm, uses computer vision to detect a beer bottle (placed strategically on the bottom shelf where he can reach), grabs a cold one, closes the refrigerator door, brings the beer to my desk, and returns to his corner to await my next request. Phase 2 will incorporate a bottle opener.
LIDARs use a pulse of light to measure distance, usually from the time of flight to reflection and back. With a collection of these measurements they can determine their surroundings in two or three dimensions. LIDARs are used as one of the sensor systems for self-driving cars, in addition to cameras and other radar systems. Robotic cars are still in the testing phase, but at some point in the future we can expect a busy intersection filled with them, trying to navigate their way through it. With multiple scanners per car, and possibly multiple beams per scanner, interfering signal sources could number over a hundred even in smaller settings. When time of flight is used to measure the distance to the reflection, the interfering signals would produce multiple "distances", and it would most likely require multiple scans of the same point to average some kind of reliable value out of all the noise. How do LIDAR systems differentiate their own signals from those of other sources? With the example of robotic cars, could this interference lead to an error state where traffic could gridlock from lack of valid data? Or is this even a problem?
I would like to detect the road surface in a park. In the park, the road is bordered on both sides by short grass. Is it possible to detect the road surface (not the grass) using LRF data or other laser sensors? If not, why? If yes, which is better: a camera or a laser sensor?
I would like to implement pure pursuit waypoint navigation. We know that look-ahead distance = look-ahead gain * vehicle forward velocity. How can I calculate the look-ahead velocity gain / look-ahead gain? How can I calculate the velocity profile for each waypoint?
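For context, this is the basic geometric step whose gain I am trying to tune; a minimal Python sketch (the waypoint list, wheelbase and gain value are assumed example numbers, not from a real vehicle):

    import math

    def pure_pursuit_step(x, y, yaw, v, waypoints, k_lookahead, wheelbase, min_ld=0.5):
        # Look-ahead distance grows with forward speed
        ld = max(k_lookahead * v, min_ld)
        # Pick the first waypoint at least ld away from the vehicle
        target = waypoints[-1]
        for wx, wy in waypoints:
            if math.hypot(wx - x, wy - y) >= ld:
                target = (wx, wy)
                break
        # Angle of the target in the vehicle frame
        alpha = math.atan2(target[1] - y, target[0] - x) - yaw
        # Pure-pursuit steering law
        return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

    steer = pure_pursuit_step(0.0, 0.0, 0.0, 1.0,
                              [(1, 0), (2, 0.5), (3, 1.5)],
                              k_lookahead=0.8, wheelbase=0.3)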
I am working on MEMS accelerometers, and I want to understand the difference between cross-axis sensitivity, axis misalignment, and non-orthogonality. In the literature, people use these terms interchangeably. These physical parameters cause deterministic errors, which I want to correct during calibration.
I want to know if the Create 2 can interface with a laser radar, for example an RPLidar.
I am curious about the travelling distance / turning angle accuracy of the iRobot Create 2. E.g., if the program lets the Create 2 go forward for 10 m, will the Create 2 go forward exactly 10 m? What is the possible error of the Create 2 for both travelling distance and turning angle? I did not find related information in the iRobot Create 2 manual. Could anyone help me with the above questions?
Plus, how should I use a multimeter to test it? P.S. Really sorry for a noob question.
As you know, in the Bug algorithm there is a simplifying assumption that says the robot has no size and can fit through any arbitrarily small gap in the map. How can we overcome the challenge of robot size? As we don't have any pre-calculated map (in sensor-based navigation), can we still tackle the problem in configuration space? For example, let's assume that the sensor is a Hokuyo URG-04LX laser rangefinder. Hence we can visualize the sensor measurements by the visualization matrix $V$: \begin{equation} V_i =\begin{pmatrix}\cos(\theta_i )* d_i, \quad \sin(\theta_i )* d_i\end{pmatrix} \end{equation} Where $D= [d_1,d_2,\dotsc, d_n]$ is the set of distances, and $\theta_i$ can be calculated as: \begin{equation} \theta_i = \theta_{i-1} + {0.36}^\circ,\qquad \theta_1 = 0 \end{equation} All the information we have about the robot's surroundings at each moment is $V$. I suspect there is no well-formed formula that can account for the robot size in this representation; and since we don't have a map, only this instantaneous visualization, growing the obstacles by the radius of the robot in configuration space doesn't seem to make sense.
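To make the configuration-space idea concrete, this is the kind of per-scan operation I mean by "growing the obstacles"; a minimal Python sketch (the 0.36° angular step and the robot radius are assumed values):

    import math

    def scan_to_points(distances, angle_step_deg=0.36):
        # Convert ranges into Cartesian points in the sensor frame
        pts = []
        for i, d in enumerate(distances):
            theta = math.radians(i * angle_step_deg)
            pts.append((d * math.cos(theta), d * math.sin(theta)))
        return pts

    def direction_is_free(points, heading, step, robot_radius):
        # The robot (modelled as a disc) can take a straight step of length
        # `step` along `heading` only if no obstacle point comes within
        # robot_radius of the swept segment - equivalent to inflating every
        # obstacle point by the robot radius.
        hx, hy = math.cos(heading), math.sin(heading)
        for px, py in points:
            t = max(0.0, min(step, px * hx + py * hy))   # projection onto the segment
            if math.hypot(px - t * hx, py - t * hy) < robot_radius:
                return False
        return True

    scan = [1.2, 1.1, 0.9, 0.9, 2.5]   # metres, assumed example ranges
    pts = scan_to_points(scan)
    print(direction_is_free(pts, heading=0.0, step=0.5, robot_radius=0.2))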
So I had this old laptop on which I used to play with ROS a lot. Then it broke, so I bought a new one and installed the same distro as the old one, i.e. Arch Linux. Now when I install ROS via the AUR I get the build error below; please help me fix it.

    Scanning dependencies of target libqt_gui_cpp_sip
    [ 85%] Running SIP generator for qt_gui_cpp_sip Python bindings...
    Traceback (most recent call last):
      File "/opt/ros/jade/share/python_qt_binding/cmake/sip_configure.py", line 50, in <module>
        config = Configuration()
      File "/opt/ros/jade/share/python_qt_binding/cmake/sip_configure.py", line 19, in __init__
        ['qmake', '-query'], env=env, universal_newlines=True)
      File "/usr/lib/python2.7/subprocess.py", line 567, in check_output
        process = Popen(stdout=PIPE, *popenargs, **kwargs)
      File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
        errread, errwrite)
      File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
        raise child_exception
    OSError: [Errno 2] No such file or directory
    make[2]: *** [src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/build.make:90: sip/qt_gui_cpp_sip/Makefile] Error 1
    make[1]: *** [CMakeFiles/Makefile2:375: src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/all] Error 2
    make: *** [Makefile:128: all] Error 2
    ==> ERROR: A failure occurred in build().
        Aborting...
I have an Arduino connected to ROS through a serial port. I wrote an Arduino sketch to drive the motors; the code is below.

    //Library to communicate with I2C devices
    #include "Wire.h"
    #include <Messenger.h>
    //Contains definitions of maximum limits of various data types
    #include <limits.h>

    //Messenger object
    Messenger Messenger_Handler = Messenger();

    //Motor pin definitions
    //Left motor
    #define USE_USBCOM
    #define INA_1 7
    #define INB_1 12
    //PWM 1 pin
    #define PWM_1 5
    //Right motor
    #define INA_2 11
    #define INB_2 10
    //PWM 2 pin
    #define PWM_2 6
    #define RESET_PIN 4

    //Motor speed from PC
    //Motor left and right speed
    float motor_left_speed = 0;
    float motor_right_speed = 0;

    //Setup serial, motors and reset functions
    void setup()
    {
      //Init serial port with 57600 baud rate
      Serial.begin(57600);
      //Setup motors
      SetupMotors();
      SetupReset();
      //Set up Messenger
      Messenger_Handler.attach(OnMssageCompleted);
    }

    //SetupMotors() function
    void SetupMotors()
    {
      //Left motor
      pinMode(INA_1,OUTPUT);
      pinMode(INB_1,OUTPUT);
      //Right motor
      pinMode(INA_2,OUTPUT);
      pinMode(INB_2,OUTPUT);
    }

    //SetupReset() function
    void SetupReset()
    {
      pinMode(RESET_PIN,OUTPUT);
      //Connect RESET pin to the RESET pin of the launchpad, it's the 16th pin
      digitalWrite(RESET_PIN,HIGH);
    }

    //MAIN LOOP
    void loop()
    {
      //Read from serial port
      Read_From_Serial();
      //Update motor values with corresponding speed and send speed values through serial port
      Update_Motors();
      delay(1000);
    }

    //Read from serial function
    void Read_From_Serial()
    {
      while(Serial.available() > 0)
      {
        int data = Serial.read();
        Messenger_Handler.process(data);
      }
    }

    //OnMssageCompleted function definition
    void OnMssageCompleted()
    {
      char reset[] = "r";
      char set_speed[] = "s";
      if(Messenger_Handler.checkString(reset))
      {
        Reset();
      }
      if(Messenger_Handler.checkString(set_speed))
      {
        //This will set the speed
        Set_Speed();
        return;
      }
    }

    //Set speed
    void Set_Speed()
    {
      motor_left_speed = Messenger_Handler.readLong();
      motor_right_speed = Messenger_Handler.readLong();
    }

    //Reset function
    void Reset()
    {
      delay(1000);
      digitalWrite(RESET_PIN,LOW);
    }

    //Will update both motors
    void Update_Motors()
    {
      moveRightMotor(motor_right_speed);
      moveLeftMotor(motor_left_speed);
    }

    //Motor running function
    void moveRightMotor(float rightServoValue)
    {
      if(rightServoValue > 0)
      {
        digitalWrite(INA_1,HIGH);
        digitalWrite(INB_1,LOW);
        analogWrite(PWM_1,rightServoValue);
      }
      else if(rightServoValue < 0)
      {
        digitalWrite(INA_1,LOW);
        digitalWrite(INB_1,HIGH);
        analogWrite(PWM_1,abs(rightServoValue));
      }
      else if(rightServoValue == 0)
      {
        digitalWrite(INA_1,HIGH);
        digitalWrite(INB_1,HIGH);
      }
    }

    void moveLeftMotor(float leftServoValue)
    {
      if(leftServoValue > 0)
      {
        digitalWrite(INA_2,LOW);
        digitalWrite(INB_2,HIGH);
        analogWrite(PWM_2,leftServoValue);
      }
      else if(leftServoValue < 0)
      {
        digitalWrite(INA_2,HIGH);
        digitalWrite(INB_2,LOW);
        analogWrite(PWM_2,abs(leftServoValue));
      }
      else if(leftServoValue == 0)
      {
        digitalWrite(INA_2,HIGH);
        digitalWrite(INB_2,HIGH);
      }
    }

When I load the code to the Arduino and start the rosserial node using the command

    rosrun rosserial_python serial_node.py /dev/ttyACM0

ROS throws the error below:

    [ERROR] [WallTime: 1475949610.718804] Unable to sync with device; possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino

I have tried changing the baud rate and making it the same in ROS and the Arduino, but nothing helped.
I'm making my own quad controller from scratch. I have fixed one axis on a "pendulum" so that only one axis can move freely (+ mode). Let's say that I have 0 throttle. As you know, the PID output is added to or subtracted from the throttle value on each motor. Say, for instance, my PID output is 5 (the quadcopter is lying at a 5-degree angle). Even in that case, one motor will spin at speed 5, while the other will be at -5 (it won't move). I'm asking because when I add throttle to my quad and tilt it to 10 degrees, it bounces to the other side and then back again, until I add at least half throttle, which is kinda aggressive, and I can't imagine this scenario with 4 motors. So is there any way I can compensate for these values? Thanks for the help!
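To make the mixing step concrete, here is a minimal Python sketch of what I am doing now (the PWM range and the idle value are assumed numbers, not taken from my actual firmware):

    def mix_plus_mode(throttle, pid_pitch, pwm_min=1000, pwm_idle=1100, pwm_max=2000):
        # + mode, one axis only: front/back motors take the pitch correction
        front = throttle + pid_pitch
        back  = throttle - pid_pitch
        # Clamp so a motor never stops completely (and never exceeds full scale)
        front = min(max(front, pwm_idle), pwm_max)
        back  = min(max(back,  pwm_idle), pwm_max)
        return front, back

    print(mix_plus_mode(1400, 50))   # (1450, 1350): both corrections applied
    print(mix_plus_mode(1000, 50))   # (1100, 1100): at zero throttle the correction saturates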
Take a look at this monster lifting a car. Now I don't think one can get that power by plugging into a wall outlet. So how are these huge robots (their servos) powered? Gasoline? How is power stored, if it needs to be?
I'm on a Linux Chromebook, am using a VEX robot, and am wondering if there is an online IDE for RobotC that will run on my computer?
There are a number of similar questions such as Monocular vs. stereo computer vision robustness for object detection, but none that address my question specifically. My weekend project is to build a little robot that can detect and track players and shots made in a basketball game. Detection and tracking need not be real-time (though it would be ideal). The goal is to understand if a stereo camera would help improve speed and accuracy of tracking players and shots made, or if a solo camera is sufficient. Would the depth information of a stereo camera simplify the task? Or would a solo camera, because of assumptions you can make about the basketball scene, be equally as accurate (and therefore preferable since less hardware is required)? Assume the camera must track activity at both baskets, and is 100 feet from the furthest basket (i.e., at the other end of the court). Specific Questions: Could a stereo system let you more quickly detect human bodies and basketballs (i.e., spheres with ~9" diameter) because you could detect volumetric shapes whereas you can't with one camera? Could shot detection be more accurate and faster because you can measure depth of the ball (i.e., only trigger analysis when ball is around same depth as hoop)? Would hoop detection be easier because of depth information? Obviously, stereo cameras require higher computational load at a nominal level, but could algorithm simplifications (e.g., ignore non-spheres for ball detection) allowed by depth information actually reduce overall computational load? Argument for solo camera: since the robot only operates against basketball scenes, you can make assumptions like there will be at least one 10-foot basketball hoop. Since you know the height of the hoop, would that allow you to perform depth measurements as if you had a stereo camera? The paper "Real-Time Tracking of Multiple People Using Continuous Detection" by David Beymer and Kurt Konolige suggests a stereo camera would offer advantages over a solo camera, confirming some of the hypotheses here, but the paper is also very old (1999). Is player & shot tracking better with a stereo camera, or are solo cameras equally as effective?
For some experiments, I need the source code of a RoboCup 2D simulation team. Where can I find that? Is there any online test bed for this league for testing algorithms?
There are a number of similar questions such as Monocular vs. stereo computer vision robustness for object detection, but none that address my question specifically. I am working on a robotics project for fun and can't really wrap my head around a solution to my perception problem. Setup: My robot will have a stereo vision setup and will have to detect certain objects and align itself to those objects in a certain pose. The robot will know the width and height of those objects. The robot will be using a TX1 for computation, so the implementation needs to be pretty fast. Also, the environment's lighting will change a lot, so using color for detection isn't a great option. My plan: to use convolutional neural networks to detect those objects of interest. I have been able to program a network to detect those objects in 2D; however, I am stuck on how to detect the pose of the objects in 3D. My idea has been to detect the object with the neural network and, once I have that region of interest, get the point cloud, then fit a 3D model using ICP. Once the 3D model is fitted I can get the pose of the object. I have also seen people using 3D correspondence grouping for this, but would that work on the non-dense point cloud that stereo vision generates? I am pretty novice in this area and would love to get advice from some more advanced robotics practitioners. Thanks for your time!
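To clarify the model-fitting step, this is roughly what I have in mind; a minimal sketch assuming a recent Open3D, with assumed file names, voxel size and correspondence threshold (I do not yet know whether this works well on sparse stereo clouds):

    import numpy as np
    import open3d as o3d

    # Load a template model of the object and the cropped ROI cloud (assumed files)
    model = o3d.io.read_point_cloud("object_model.ply")
    roi   = o3d.io.read_point_cloud("stereo_roi.ply")

    # Downsample both clouds so ICP is faster and less noise-sensitive
    model_ds = model.voxel_down_sample(voxel_size=0.005)
    roi_ds   = roi.voxel_down_sample(voxel_size=0.005)

    # Point-to-point ICP from an initial guess (identity here)
    result = o3d.pipelines.registration.registration_icp(
        model_ds, roi_ds, max_correspondence_distance=0.02,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # The resulting 4x4 transform is the object pose relative to the camera
    print(result.transformation)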
I want to make a robot arm with a gripper that can go to any x, y, z coordinates near it. The robot itself will be simple, just servo motors and an Arduino, but I don't know how to make it reach the desired coordinates. Any ideas?
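To show the kind of math I think is needed, here is a minimal inverse-kinematics sketch for a simplified planar 2-link arm in Python (the link lengths are assumed values; a real 3D arm would add a base rotation joint, as noted at the end):

    import math

    def two_link_ik(x, y, l1, l2):
        # Target must be reachable
        d2 = x * x + y * y
        if d2 > (l1 + l2) ** 2 or d2 < (l1 - l2) ** 2:
            raise ValueError("target out of reach")
        # Elbow angle from the law of cosines (elbow-down solution)
        elbow = math.acos((d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2))
        # Shoulder angle: direction to target minus the offset caused by the elbow
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    # Example: 10 cm and 8 cm links, target at (12, 5) cm
    print(two_link_ik(12, 5, 10, 8))

    # For full x, y, z: rotate the base by atan2(y, x), then solve the 2-link
    # problem in the vertical plane using r = sqrt(x^2 + y^2) and z.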
I'm trying to control a system that has 3 poles (2 in the right half plane). I sketched the root locus, but the two positive poles move to the right and never enter the left half plane for any value of k. The state space of the system is the following:

    A = [  0                      1    0
          -3.20205979037663e-08   0   -31.1556786564355
           83333.3331597258       0   -173598.323255178 ]

    B = [  0
          -3.11556786564355e-08
           83333.3331597350 ]

    C = [ -0.999999999999890  0  0 ]

    D = 0

How can I control a system like this? Note: I'm using MATLAB.
I'm currently working on an autonomous underwater cleaning robot and would like your input on some navigation algorithms. Problem: Wash the inside of a big, open, filled tank, e.g. a water storage tank (Figure 1), with an autonomous robot. The inside of the tank is divided into sections, so the robot has a limited washing area. The washing area can be square, rectangular, a parallelogram or another shape. The inside of the tank may have unknown obstacles, e.g. inlets and outlets. Since the robot will function under water, there are a few limitations when it comes to sensors. Available sensors: wheel encoders; distance measurements, e.g. ultrasonic, sonar, for obstacle avoidance; boundary detection; a bumper with a mechanical switch for collisions; a pressure sensor; a camera (the water is not necessarily clean, so low visibility is an issue, i.e. visual odometry is difficult/impossible). Non-available sensors: INS, GPS. Current solutions: Random walk within boundaries (Figure 2), either driving in straight lines (Figure 2), spirals (Figure 7) or a combination of lines and spirals (Figure 6). Parallel swaths with 30% overlap (Figure 3); this requires cm precision on the position estimate to guarantee coverage. Figure 4 shows an identical simulation but with one wheel radius 0.5 mm larger than the other; this shows that it is not sufficient to rely only on wheel encoders for positioning, as it will drift. With perfect positioning, the parallel swath algorithm is 2x more efficient than random walk. One possibility is to add an acoustic navigation system, but it would be too expensive. This problem is similar to that of lawn-mower and vacuuming robots; however, it seems like most of those products use random walk or a similar approach. Does anyone know a more efficient algorithm to cover the area, based on the information provided? For all the simulations, the red line represents the robot's movement and the black is the washed area. Looking at other similar questions I couldn't find the answers I've been looking for: How to localise a underwater robot? Robot wire follower + how to position on wire Working of Autonomous Lawn mower(ALM) in an unbounded area without a perimeter wire What algorithm should I implement to program a room cleaning robot? (Figures 1-7 are attached as images.) Reference: OmniClimbers: Omni-directional magnetic wheeled climbing robots for inspection of ferromagnetic structures.
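For completeness, the parallel-swath pattern in Figure 3 is generated roughly like this; a minimal Python sketch for a rectangular section (the dimensions, tool width and overlap are assumed example values):

    def boustrophedon_waypoints(width, height, tool_width, overlap=0.3):
        # Spacing between swaths, with the requested overlap between passes
        spacing = tool_width * (1.0 - overlap)
        waypoints = []
        y = tool_width / 2.0
        left_to_right = True
        while y <= height - tool_width / 2.0 + 1e-9:
            xs = (0.0, width) if left_to_right else (width, 0.0)
            waypoints.append((xs[0], y))
            waypoints.append((xs[1], y))
            left_to_right = not left_to_right
            y += spacing
        return waypoints

    # 4 m x 3 m section, 0.5 m wide brush, 30% overlap
    path = boustrophedon_waypoints(4.0, 3.0, 0.5, overlap=0.3)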
Let's assume that at time t a moving robot (e.g. a Pioneer 3-DX) changes its steering angle by $25^{\circ}$. Obviously, in order to maintain stability and avoid overturning, the robot must reduce its velocity in the time interval between $t$ and $t+1$. But the question is: by how much?
So I was given a course assignment to assign frames and write the D-H parameters for this robot using only 5+1 frames (with frame $\{5\}$ at $P$ and frame $\{0\}$ at $O$), and I assigned them like this: My question is: from frame $\{1\}$ to frame $\{2\}$, what are the joint distances $a$ and $d$? The best answer I could get was 0, but obviously it should be zero for one axis and $a_1$ for the other. What's wrong? I have read a similar question here, but the answer points me to another method which is impossible for me. Edit: No matter whether I put $a_1$ in $a$ $$(\alpha,a,d,\theta)=(-90^\circ,a_1,0,\theta_2-90^\circ)$$ or in $d$ $$(\alpha,a,d,\theta)=(-90^\circ,0,a_1,\theta_2-90^\circ)$$ the joint distance $a_1$ does not appear in $z$. What it gives is $$\left( \begin{array}{cccc} \sin{\theta_2} & \cos{\theta_2} & 0 & a_1 \\ 0 & 0 & 1 & 0 \\ \cos{\theta_2} & -\sin{\theta_2} & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) \text{ or } \left( \begin{array}{cccc} \sin{\theta_2} & \cos{\theta_2} & 0 & 0 \\ 0 & 0 & 1 & a_1\\ \cos{\theta_2} & -\sin{\theta_2} & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)$$ Obviously, $a_1$ should appear in the $Z$-translation instead!
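To double-check both choices I compute the single-link transform symbolically; a minimal Python (SymPy) sketch using the modified (Craig) D-H convention, which is what the matrices above appear to follow (an assumption on my part):

    import sympy as sp

    def dh_modified(alpha, a, d, theta):
        # Modified (Craig) D-H: Rot_x(alpha) * Trans_x(a) * Rot_z(theta) * Trans_z(d)
        ct, st = sp.cos(theta), sp.sin(theta)
        ca, sa = sp.cos(alpha), sp.sin(alpha)
        return sp.Matrix([
            [ct,      -st,       0,       a],
            [st * ca,  ct * ca, -sa, -d * sa],
            [st * sa,  ct * sa,  ca,  d * ca],
            [0,        0,        0,       1]])

    a1, theta2 = sp.symbols('a1 theta2')
    T_a = dh_modified(-sp.pi / 2, a1, 0, theta2 - sp.pi / 2)   # a1 placed in a
    T_d = dh_modified(-sp.pi / 2, 0, a1, theta2 - sp.pi / 2)   # a1 placed in d
    sp.pprint(sp.simplify(T_a))
    sp.pprint(sp.simplify(T_d))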
I would like to use and test this type of servo motor, significantly stronger than the ones I've been using previously. However, the datasheets seem to be limited and I can't exactly tell how to use the driver. Would it be fair to assume that to use this system, I would get, say, a 500 W (or higher) power supply and be able to plug it into my (North American) wall outlet? And if so, would that give me enough power? I believe NA wall outlets give something like 12 A @ 110 V max, or 1.32 kW. How could I tell if this is enough for the above servo, or might I have to upgrade to an industrial power line? What if I want to power multiple of these motors at the same time? Surely one power outlet wouldn't work. [Semi-related bonus question] What is the difference between "2 phase", "3 phase", etc. in this stepper/servo motor?
Not exactly a robotics-based question, but mechanics is involved. I have a wearable device that gives output in quaternions, which I can read serially via LabVIEW. My task is to develop a threshold-based fall detection system based on these values, which I am not familiar with. Here is some sample data I read from the device: id: 4, distance: 1048, q0: 646, q1: -232, q2: -119, q3: 717. I was able to find the Euler angles from the quaternions: I obtain a rotation matrix from the quaternion, and from the rotation matrix I derive the roll, pitch and yaw. The coordinate system is North-East-Down. But my pitch angle remains at positive or negative 90 degrees. The fact is I didn't write the conversion code. I am attaching the code; please have a look at it and help me if you could.
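For reference, this is the standard quaternion-to-Euler conversion I believe the code should implement; a minimal Python sketch (I am assuming the raw q0..q3 integers are scaled fixed-point values that need normalization before use):

    import math

    def quat_to_euler(q0, q1, q2, q3):
        # Normalize first - raw sensor output is often a scaled integer quaternion
        n = math.sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3)
        q0, q1, q2, q3 = q0/n, q1/n, q2/n, q3/n
        # ZYX (yaw-pitch-roll) convention, with q0 as the scalar part
        roll  = math.atan2(2*(q0*q1 + q2*q3), 1 - 2*(q1*q1 + q2*q2))
        sinp  = 2*(q0*q2 - q3*q1)
        pitch = math.copysign(math.pi/2, sinp) if abs(sinp) >= 1 else math.asin(sinp)
        yaw   = math.atan2(2*(q0*q3 + q1*q2), 1 - 2*(q2*q2 + q3*q3))
        return roll, pitch, yaw

    print([math.degrees(a) for a in quat_to_euler(646, -232, -119, 717)])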
Take a look at the following simple example robot arm: I want to know the torque required at that bottom motor to rotate the arm. Since it's not rotating directly against gravity like the other joints, I'm not sure how to analyze it. Assume we know the mass of every part of the robot arm and the distances to the base. For reference, I am planning on securing my base rotation motor's shaft through a ball-bearing rotary table to the rest of the arm. I'm working out the torque required at that motor to pick the right component properly and make sure I get enough speed as well. So understanding how to analyze the forces would really help!
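For scale, this is the kind of estimate I am after, assuming the base axis is vertical and treating each arm segment as a point mass at a known radius (a rough approximation that also ignores bearing friction); a minimal Python sketch:

    import math

    def base_yaw_torque(masses, radii, target_speed_rpm, spinup_time_s):
        # With a vertical rotation axis, gravity produces no moment about that axis;
        # the motor mainly has to accelerate the arm's rotational inertia.
        inertia = sum(m * r * r for m, r in zip(masses, radii))   # kg m^2
        omega = target_speed_rpm * 2 * math.pi / 60.0             # rad/s
        alpha = omega / spinup_time_s                             # rad/s^2
        return inertia * alpha                                    # N m

    # Example: 0.5 kg at 0.15 m and 0.3 kg at 0.35 m, reaching 30 rpm in 0.5 s
    print(base_yaw_torque([0.5, 0.3], [0.15, 0.35], 30, 0.5))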
In Neal Stephenson's novel "Snow Crash", a robotic/cybernetic creature known as a "Rat Thing" breaks loose from its bonds and runs to rescue another character. The Rat Thing is portrayed as moving at over 700 mph and causing sonic shock waves. Can such a speed be reached by purely mechanical means?
I'm attempting to do a quick and dirty autonomous path with the create2. I'm using the tethered driving program seen here. I set my own buttons just to make it rotate 90° and go forward one "pulse". I'd like to know, does anyone have any ideas on how to trick the attached Create2_TetheredDrive.py into thinking it's seeing a series of keyboard entries?
ToF (time-of-flight) cameras seem susceptible to bright outdoor conditions. Are there any sensors made for bright outdoor conditions and could be embeddable in a robot that could detect and/or track small objects (5cm - 25cm) with a range of 10m - 100m? Would radar work?
I have started to work on a little project of mine, which consists of implementing the stabilization of a single-axis gimbal using an Arduino. The gimbal is driven by a sensorless three-phase brushless DC (BLDC) motor, while on its shaft there is a generic payload fitted with an IMU board (3-axis gyro + 3-axis accelerometer), which can give feedback to the Arduino about angular rates and accelerations. I have googled a bit about this topic and there are so many solutions out there; the only thing I really do not understand is the control of the BLDC motor. Can I use sensorless control of the motor, by sensing the back-EMF, even if the motor is spinning very slowly? How can I energize the phases of the BLDC motor properly if it is sensorless? Can I use the IMU to find out how to spin the BLDC motor properly without counter-rotations? Could you give me any help, please?
I'm making a hexapod robot (this is my first robotics project ever) and I'm having a hard time deciding the orientation of the servo motors on the COXA section. The hexapod I'm making is based on this, with MG996R servo motors. The problem is that all the hexapod assembly guides I've read (including the one above, which I'm using) say that the servos on the coxa link have to be installed in a way that the zero angles (the angle the servos move to automatically when they're plugged in) of the servos point as shown here from above: (the red circles are the servos in question) I understand that the reason for setting up the zero angles of the coxa servos this way is so that the hexapod can stand up directly when it is powered up (since the motors go to their zero angles on powering up). Now here's the problem: the servos can only move forward from the zero angles and then back to the zero angles in the COUNTER-CLOCKWISE direction, and cannot go backwards from the zero angles, i.e. move in the opposite direction. Knowing this, I don't understand how I'm supposed to install the servos as in the picture, since doing that would mean the motors on the LEFT side would not be able to turn in the clockwise direction for the tripod gait. Can anyone please explain to me how I'm supposed to get the motors moving in the clockwise direction for the tripod gait while keeping the zero angles set in the directions shown in the picture?
I want to program an Arduino using MATLAB via the Arduino support package. I want to use Simulink for the normal input/output operations, but also use the MATLAB language in another part. So is it possible to make a program consisting of a Simulink model and MATLAB code and deploy it to the Arduino? And if it is possible, how do I do it?
In a CNC machine, programs convert G-code into commands for the stepper motors using the parallel port. I want to know what G-code is and how it can be converted into stepper motor commands. The programs doing this are not open source, so can I find an open-source project doing the same?
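To illustrate what I mean by "converted to stepper motor commands", here is a minimal Python sketch of the idea as I understand it (the steps-per-mm value is an assumed machine constant, and real controllers also plan acceleration, which this ignores):

    STEPS_PER_MM = 80.0   # assumed: depends on motor, microstepping and leadscrew pitch

    def parse_gcode_line(line):
        # e.g. "G1 X10.5 Y-3 F300" -> {'G': 1.0, 'X': 10.5, 'Y': -3.0, 'F': 300.0}
        words = {}
        for word in line.split(';')[0].split():
            words[word[0].upper()] = float(word[1:])
        return words

    def move_to_steps(words, current_pos):
        # Convert a G0/G1 linear move into per-axis step counts
        steps = {}
        for axis in ('X', 'Y', 'Z'):
            if axis in words:
                delta_mm = words[axis] - current_pos.get(axis, 0.0)
                steps[axis] = round(delta_mm * STEPS_PER_MM)
                current_pos[axis] = words[axis]
        return steps

    pos = {'X': 0.0, 'Y': 0.0, 'Z': 0.0}
    print(move_to_steps(parse_gcode_line("G1 X10.5 Y-3 F300"), pos))
    # {'X': 840, 'Y': -240}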
Is there an official documentation from Kuka that explains at a beginner level how to start programming Kuka Robots equipped with KR-C4 controllers?
For some important reasons, plugged cable 1 must not be on at the same time as plugged cable 2. It must be idiot-proof... I'm talking about regular North American household power. What is the specific name of such a device? I've done some research and I can't find anything. I'm thinking about crafting the thing with two power bars and a custom button, but I'm looking for something less homemade. Thanks.
I am using my RF module to send a HIGH signal to a relay, but only momentarily via a push button. As soon as I release it, the relay goes back to NC. How can I fix it to stay at NO? I thought of using a flip-flop.
Is there a sensor that can measure through an object, for example a mattress? (This is not exactly what I want to do, but it is a good illustration.) I want to mount a sensor on the ceiling above the top bunk of a bunk bed and measure the presence of a person on the bottom bunk (when no one is in the top bunk). I have read about thermal and ultrasonic sensors; however, it does not appear that they would be able to measure through to the other side of a mattress.
I'm learning the KUKA KRC robot language, so far so good, and am wondering if there is an IDE for writing the code I want to program into the robot: something that gives me suggestions if I mistype a variable, suggests methods available in the KUKA language, maybe lets me debug the code, etc. Any options?
How can the maximum length and mass of the linkages of an RRR-type robot arm be calculated, if the motors' mechanical characteristics are given?
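As an illustration of the kind of static, gravity-only bound I have in mind for the most heavily loaded joint (assuming a uniform link of known linear density and a known payload, and ignoring dynamics and safety factors), here is a minimal Python sketch:

    g = 9.81  # m/s^2

    def shoulder_torque(link_length, linear_density, payload_mass):
        # Worst case: link fully horizontal. A uniform link's weight acts at L/2,
        # the payload at the tip.
        link_mass = linear_density * link_length
        return g * (link_mass * link_length / 2.0 + payload_mass * link_length)

    def max_length(motor_stall_torque, linear_density, payload_mass):
        # Solve stall_torque = g*(rho*L^2/2 + m_p*L) for L (positive root)
        rho, mp, tau = linear_density, payload_mass, motor_stall_torque
        disc = (mp * g) ** 2 + 2.0 * rho * g * tau
        return (-mp * g + disc ** 0.5) / (rho * g)

    # Example: 2.0 N m stall torque, 0.3 kg/m aluminium tube, 0.1 kg payload
    print(max_length(2.0, 0.3, 0.1))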
I have a Sharp GP2D12 and I've been using it with my Arduino just fine. I have, however, had some experience in the past where this sensor (which is analog) was fed into an ADC (like the ADC0831) to go to a Basic Stamp. I was wondering what the purpose of this is, given that both the Arduino and the Basic Stamp support analog inputs if I am not mistaken; it thus seems weird having an extra and unnecessary link in the chain. Does the ADC provide more resolution? What is the point of it in this circumstance?
I'm using ROS Indigo in my project, and all the nodes and visualisation (RViz) seem to be functional when I launch the program using roslaunch. Here are the sensors used on the scooter: Hokuyo lidar, Phidgets encoder. However, when I start moving the scooter (the robot) using the joystick, the scooter moves physically, but in RViz the objects detected by the lidar move towards the scooter, instead of the scooter moving in the map. Furthermore, the costmap obstacles get smeared as I continue to move the scooter forward. Where could I have missed out? Here are the node connections: A screenshot of the tf tree: EDIT: When I did roswtf, it shows that the robot footprint is disconnected, though the rqt graph shows otherwise. Here is the log of it:

    Beginning test of your ROS graph. These may take a while...
    analyzing graph...
    ... done analyzing graph
    running graph rules...
    ... done running graph rules
    running tf checks, this will take a second...
    ... tf checks complete

    Online checks summary:

    Found 1 warning(s).
    Warnings are things that may be just fine, but are sometimes at fault

    WARNING The following node subscriptions are unconnected:
     * /rqt_gui_py_node_646:
       * /statistics
       * /tf_static
     * /amcl:
       * /tf_static
     * /rviz_1476945491120458509:
       * /tf_static
       * /map_updates
     * /move_base:
       * /tf_static
       * /move_base/TebLocalPlannerROS/obstacles
       * /move_base/cancel
     * /robot_pose_publisher:
       * /tf_static

    Found 1 error(s).

    ERROR The following nodes should be connected but aren't:
     * /move_base->/move_base (/move_base/global_costmap/footprint)
     * /move_base->/move_base (/move_base/local_costmap/footprint)

Please advise on possible ways I can undertake to debug this issue.
Background: This question is about simulating one ErleCopter and one ErleRover simultaneously for non-commercial research. I would like to have the quadcopter follow a rover, which in turn is to be tasked with following a line. I am trying to spawn the vehicles in Gazebo and control them using MAVProxy. The Problem: Any time I try this, I run into one of two problems: spawning the second vehicle terminates the first MAVProxy instance, or the second vehicle spawned cannot be linked to the second instance of MAVProxy. I'm not sure what to do about this, because I am not sure whether this is one problem or two sub-problems. The first problem is in spawning the robots and the second is in controlling both independently (and obtaining the state parameters of both vehicles to use as feedback). I believe a contributing factor to this problem is that I'm trying to run the simulation and both MAVProxy instances on one computer; a Lenovo Y50-70 is being used with Ubuntu 14.04. Two computers are not easy to obtain immediately and there are network stability issues where I am. The Question: The entire question probably reduces to "How do I link the second robot spawned by rosrun to the second MAVProxy instance?". Desired Outcome: I would like help either getting the simulation to run as desired (two vehicles co-simulated in one virtual world with two MAVProxy instances, one linked to each vehicle), OR official documentation somewhere that this is not possible. What I've Attempted: Initial attempts can be seen in earlier edits of this question. For clarity, that information has been removed, but again, if interested, see an earlier version of this question. Fast-forwarding to the relevant part: the second robot has been spawned successfully, as mentioned in this video, by using the commands below.

    cd path_to_urdf_model_files
    rosrun gazebo_ros spawn_model -file rover.urdf -urdf -model rover_object

Note: ROS Indigo is different; gazebo_ros must be used instead of gazebo_worlds. The weird behaviour of the robot rotating about itself is probably because of MAVProxy; I have experienced this before. The attempt to establish separate network connections to the copter and rover has been successful so far. The original structure is as shown below:

    rover_circuit.launch -> apm_sitl.launch -> node.launch (node name: "mavros")

The current architecture is as shown below:

    copter_circuit.launch -> apm_sitl_copter.launch -> node_copter.launch ("mavros_copter")
    rover_circuit1.launch -> apm_sitl_rover.launch -> node_rover.launch ("mavros_rover")

rover_circuit1.launch is as shown below:

    <launch>
      <include file="$(find mavros)/launch/apm_sitl_rover.launch"></include>
      <arg name="enable_logging" default="true"/>
      <arg name="enable_ground_truth" default="true"/>
      <arg name="log_file" default="rover"/>
      <arg name="tf_prefix" default="$(optenv ROS_NAMESPACE)"/>
      <arg name="model" default="$(find ardupilot_sitl_gazebo_plugin)/urdf/rover.urdf"/>
      <param name="robot_description" command="
        $(find xacro)/xacro.py '$(arg model)'
        enable_logging:=$(arg enable_logging)
        enable_ground_truth:=$(arg enable_ground_truth)
        log_file:=$(arg log_file)" />
      <param name="tf_prefix" type="string" value="$(arg tf_prefix)" />
      <node name="spawn_rover" pkg="gazebo_ros" type="spawn_model"
            args="-param robot_description -urdf -model 'rover' "
            respawn="false" output="screen"></node>
    </launch>

This is the minimal launch file, and it works. I had thought of rosrun on apm_sitl_rover.launch followed by rosrun on rover.urdf, but I have been unable to find a suitable package which launches apm_sitl_rover.launch directly. It is easy to roslaunch a launch file that has a method appended. Naming issues and other network errors have been resolved. Outstanding Problem Remaining: I'm still having issues launching and linking the second vehicle, but now it seems I've narrowed it down such that the only problem is with the UDP bind port, which is 14555 by default; this is crashing Gazebo for the second instance because the second instance is using the same bind port. It looks like libmavconn is getting called somehow, particularly interface.cpp, which has

    url_parse_host(bind_pair, bind_host, bind_port, "0.0.0.0", 14555);

and the udp.h included in interface.cpp has a function MAVConnUDP() which has bind_port=14555; this has resulted in "udp1: Bind address: 14555" and "GCS: DeviceError:udp:bind: Address already in use". Trying to assess the connection between sim_vehicle.sh, libmavconn, and Gazebo, I was able to figure out that sim_vehicle.sh calls mavproxy.py in one of its final lines, which in turn uses pymavlink. I have not been able to find any further relationship so far. Leading Questions / An Approach to the Solution: As I have a strong intuition that this is the final stage, I currently plan to fix this by using interface_copter.cpp and interface_rover.cpp. I think that if I could get answers to the following questions, I could work out where the failure is in successfully launching and linking the second (or subsequent) vehicles: How does sim_vehicle.sh trigger the libmavconn package and ultimately Gazebo? Is there a software architecture diagram which describes the complete structure from sim_vehicle.sh to the joints and controllers?
I have built a fighting bot, but improvements could be made. I currently use one channel to switch the spinners on and off and another for self-righting. The spinners cannot turn during self-righting as they are against the floor, so I have to switch them off with one switch and activate self-righting with another to avoid the spinner motors burning out, and then once self-righted reverse both switch positions again. Currently each circuit has a normally-off relay (5 V from the receiver controlling 24 V to the load). In an ideal world I would have got a relay that allows one circuit 'on' and the other 'off', and then the opposite when given the signal by the receiver, e.g. when a single switch on the remote control is 'off' then circuit A is 'off' and circuit B is 'on'; when the remote switch is 'on', circuit A is 'on' and circuit B is 'off'. This would free up a remote channel and also ensure the two circuits can never be closed at the same time - with me so far? Anyway, it turns out that this type of relay does not exist for control by a remote receiver, so what I am trying to achieve is the following: So it's not actually a relay, as that suggests one voltage controlling another, so what is this elusive gizmo I seek actually called? Can I buy one on Amazon? I'm trying to avoid going down the route of IF/THEN gates on a separate PCB. The daft thing is that all I want is an electrical version of exactly what a pneumatic actuator does: when powered, air goes down one hose, and when off, down another. Thanks in advance. After looking at some of the answers (thanks everyone), I was able to find this: from http://www.superdroidrobots.com/shop/item.aspx/dpdt-8a-relay-rc-switch/766/
I'm building a 2-wheel balancing robot from old parts: DC motors from an old printer, wheels from a BBQ, bodywork from an old optical drive, etc. The brain of the robot is a Raspberry Pi 3. I'm using an L298N motor driver to control 2x 12-35 V DC motors. Balance and movement will be 'sensed' using a 10DOF L3GD20 LSM303D gyro, accelerometer & compass board. I'm currently using an Arduino with PWM for multi-speed and direction control of each motor. A PID loop will keep the robot balanced. It looks like this: Raspberry Pi > Arduino > L298N > Motors. So far everything is good (don't worry, I'm not going to ask "Where do I start??" :D). My question is this: should I continue to control the sensors and the motor driver using the Arduino, having the Pi issue higher-level commands to the Arduino, OR should I just let the Pi do all the leg work? I'm trying to keep power consumption to a minimum, but I want to keep the processing overhead of the balancing and movement away from the Pi. Additionally, I've read that an Arduino is better at this sort of work due to the fact that it has a built-in clock, whereas a Pi doesn't. I assume the PID loop will be slowed if the device (the Pi) is working on other processing tasks like navigation and face recognition etc. Are my assumptions correct, and which direction would you guys steer towards? Your knowledge and wise words would be very much appreciated!!
I just finished assembling my new 250 quad, which is equipped with an SP F3 flight controller. I think I kinda bricked my FC in the first 10 minutes of configuring it: I first plugged it in, installed a few drivers and opened Cleanflight. Cleanflight recognized it, no problem so far. I then tried to test the RX channels and discovered that apart from the 4 AUX channels, none of them worked. I swapped the plugs on the transmitter and saw that the RX was OK; it's the FC that didn't show anything on the first 4 channels. After reading some stuff about upgrading the firmware I did something completely wrong: I followed the first 10 seconds of this guide, using Baseflight... though my board isn't Baseflight, it's Cleanflight, and I made the mistake of clicking the "Flash firmware" button, which then made it impossible to use it and flash it. After some trial and error I now have a problem where Cleanflight tells me that it cannot flash the firmware because "Failed to open serial port". I used Process Explorer to check if any program was using and holding the COM3 port, but no program was using it. Thank you for reading this very long and probably stupid question; do you have anything that I can try?
My task: I have a task where I am asked to track parcels (carton boxes) of different dimensions moving on a conveyor. I am using an Asus Xtion Pro camera mounted on top of the conveyor at an arbitrary inclined angle. I am looking for a model-free object tracker that will detect boxes in the scene, track them and give their 6-DOF pose. My target object is just a box, and I want to eliminate all other things in the scene. My approach: 1. I do point cloud pre-processing like downsampling, pass-through filtering and segmentation. All of these should give me a final point cloud containing only the objects on the conveyor. 2. I plan to make the "z" value (the depth value) in each point zero, thereby making the point cloud of the box flat on the ground. 3. I plan to transfer the view of the camera from any inclined position to a top-down view, so that I can view any number of carton boxes moving on the conveyor from a top-down view. I feel the top-down view will prevent perspective-viewing problems. The process flow of steps 2 and 3 is shown below. 4. After the top-down view of the point cloud is achieved, I need to convert the 3D point cloud into a 2D image, so that I can perform object tracking with the many OpenCV-based tracking algorithms available. A sample point cloud is shown below in different views. Original view from camera: Point cloud view 1: Point cloud view 2: Point cloud target/desired view for converting to 2D: (The box is the target. All the ground-plane and unnecessary points would be eliminated.) Is my approach correct? How will I achieve steps 2, 3 and 4?
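For step 4, this is the kind of projection I have in mind; a minimal numpy sketch (the grid resolution and work-area bounds are assumed values):

    import numpy as np

    def cloud_to_topdown_image(points_xyz, x_range=(0.0, 1.2), y_range=(0.0, 0.8),
                               resolution=0.005):
        # Rasterize the (already top-down) cloud into a binary occupancy image:
        # one pixel per `resolution` metres, ignoring the z coordinate.
        width  = int((x_range[1] - x_range[0]) / resolution)
        height = int((y_range[1] - y_range[0]) / resolution)
        image = np.zeros((height, width), dtype=np.uint8)
        xs = ((points_xyz[:, 0] - x_range[0]) / resolution).astype(int)
        ys = ((points_xyz[:, 1] - y_range[0]) / resolution).astype(int)
        valid = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
        image[ys[valid], xs[valid]] = 255
        return image

    # Fake cloud: a 20 cm x 30 cm box top with its corner at (0.4, 0.2)
    box = np.random.uniform([0.4, 0.2, 0.0], [0.6, 0.5, 0.0], size=(5000, 3))
    img = cloud_to_topdown_image(box)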
I've recently started working on some localization algorithms like probabilistic road maps and SLAM. What I'm looking for is software that would help me simulate such algorithms. I started with Python's graphics.py and have now started working with Gazebo and ROS, but I've found them complex to use. Is there any similar freeware simulation software that is easy to set up and work with, thus allowing me to spend less time stressing over the simulation part and more time working on the algorithms?
There are high-quality software robotics simulators like Gazebo available today. What is the difference between a pure software simulation and a real-world (say RC) scale-model simulation? Is it possible to skip the scale model, do only SW simulation, and then build a full-scale final product right away? Does scale-model simulation have any advantages over, say, Gazebo? I don't have any direct experience with developing a robotics product, but my guess is that SW simulation is primarily used to develop the very basics of a product, and that scale-model simulation may then take over or complement the SW simulation. My personal view is that any (even small-scale) real-world simulation/testing is beneficial, because the features of the real world (noise, dust etc.) can hardly be simulated in SW. Is this true? Also, a small-scale model will be many times cheaper than a full-scale one. I am considering an autonomous (self-driving) car as one of the possible products resulting from such simulation. I read that vision is one of the weak parts of SW simulators. I can imagine that sensing in general may be a weak part of SW simulators, since any real sensor is imperfect and noisy, which Gazebo may not take into account...
I'm trying to use the motor-sizing tool developed by Oriental Motor to choose a good servo motor for my CNC. The tool requires the breakaway torque of my screw as input. I searched online, but I only found people measuring it with a wrench. I'm working on simulating the machine before I buy anything, so I don't have the screw to test anything on. This is the screw I want to use, and all the specs are available in the link. I need to calculate the breakaway torque mathematically, so is there any way?
I'm working on a Kalman filter for estimating the position of a point in 3D space. I know that I can measure its 3D position directly with a variance of about 2 mm (in other words: the variance of the norm of the measured x, y, z vector is about 2 mm). I'd like to fill my measurement noise covariance matrix based on this, so my question is: How does this relate to the variance of the individual x, y, z measurements? I'm looking for three equal variances, assuming independency.
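For concreteness, this is the relation I think applies, assuming the per-axis errors $e_x, e_y, e_z$ are independent, zero-mean Gaussian with a common variance $\sigma^2$ (so that $R = \sigma^2 I_3$): \begin{equation} \mathrm{E}\big[\lVert e\rVert^2\big] = 3\sigma^2, \qquad \lVert e\rVert \sim \sigma\,\chi_3 \;\Rightarrow\; \mathrm{Var}\big(\lVert e\rVert\big) = \Big(3 - \tfrac{8}{\pi}\Big)\sigma^2. \end{equation} So if the quoted 2 mm is the RMS of the error norm, the first expression gives $\sigma$; if it is really the standard deviation of the norm, the second one does. Part of my question is which interpretation is the right one to use for $R$.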
I'm trying to develop a platoon leader-follower formation for two robots in MATLAB. The paper I'm trying to follow is this. Below is my code, where I want the follower robot to follow the leader robot's path, just in a very simple way, no kinematics. But I cannot get it to work. Does anybody know what my error is?

    x1=linspace(0,10,100);   % x1 and y1 represent the leader's path
    y1=sin(x1);
    plot(x1(1:100),y1(1:100));
    hold on;
    x2=zeros(1,100);
    y2=zeros(1,100);
    x2(1)=-5;                % x2, y2 represent the follower's position
    x2(2)=-4;
    landa=0.1;               % represents the euclidean distance between robots
    theta_leader(2)=atan((y1(2)-y1(1))/(x1(2)-x1(1)));
    theta_follower(2)=atan((y2(2)-y2(1))/(x2(2)-x2(1)));
    alfa(2)=atan((y1(2)-y2(2))/(x1(2)-x2(2)))-theta_follower(2);
    phi(2)=pi-(theta_leader(2) - alfa(2) - theta_follower(2));
    for i=3:100
        landa(i)=0.1;
        x2(i)=x1(i)*cos(theta_leader(i-1))-landa(i)*cos(alfa(i-1)+theta_follower(i-1));
        y2(i)=y1(i)*sin(theta_leader(i-1))-landa(i)*sin(alfa(i-1)+theta_follower(i-1));
        theta_leader(i)=atan((y1(i)-y1(i-1))/(x1(i)-x1(i-1)));
        alfa(i)=atan((y1(i)-y2(i))/(x1(i)-x2(i)))-theta_follower(i-1);
        phi(i)=pi-(theta_leader(i) - alfa(i) - theta_follower(i-1));
        theta_follower(i)=phi(i)-alfa(i)+theta_leader(i)-3.1415;
    end
    plot(x2,y2,'or');
So, I am working on building a simple (small) self-driving tank that needs to navigate a large hall. I plan to use ultrasonics, LIDAR and a Kinect. I am pretty happy with how I will build all of this. My main question is: would this be easier to do in ROS, or should I write it in Python? I have very basic knowledge of ROS but have been programming for many years (Java, Objective-C etc.). I assume I will need to load in a basic map of static objects / floor plan and use SLAM etc. (which I see is possible in Python). Sorry if this is a vague question. My hope is that someone on here who has used ROS a lot will turn round and say it's the way to go.
I want to code the dynamics of a 2D planar quadrotor and then control it to drive it from one state to another. The dynamics I use are taken from the online course given by Vijay Kumar on Coursera, as follows: $ \begin{bmatrix} \ddot{y}\\ \ddot{z}\\ \ddot{\phi} \end{bmatrix} = \begin{bmatrix} 0\\ -g\\ 0 \end{bmatrix} + \begin{bmatrix} -\frac{1}{m}\sin\phi & 0\\ \frac{1}{m}\cos\phi & 0\\ 0 & -\frac{1}{I_{xx}} \end{bmatrix}\begin{bmatrix} u_1\\ u_2 \end{bmatrix} $ It also uses some linearizations, namely $\sin\phi \rightarrow \phi$ and $\cos\phi \rightarrow \text{const.}$ And $u_1$, $u_2$ are defined by: $u_1=m\{g+\ddot{z}_T(t)+k_{v,z}*(\dot{z}_T(t)-\dot{z})+k_{p,z}*(z_{T}(t)-z)\}$ $u_2=I_{xx}(\ddot{\phi}+k_{v,\phi}*(\dot{\phi}_T(t)-\dot{\phi})+k_{p,\phi}*(\phi_{T}(t)-\phi))$ $\phi_c=-\frac{1}{g}(\ddot{y}_T(t)+k_{v,y}*(\dot{y}_T(t)-\dot{y})+k_{p,y}*(y_{T}(t)-y))$ It is assumed that the vehicle is near the hover condition, and the commanded roll angle $\phi_c$ is calculated based on the desired y-component and is used to calculate $u_2$, which is the net moment acting on the CoG. The thing that I don't understand is: don't I need any saturation on the actuators? Do I need to implement some limiting part in my code to bound the control signals? The other thing is, I don't have any desired acceleration, yet those terms appear in the control signal equations; can I remove them? The last thing is, my control signals drive the vehicle to roll angles on the order of $10^5$, by integrating the high angular rates caused by the large $u_2$ moment signal, I guess. Since the linearization relies on the small-angle approximation, those large angles and rates are problematic. So how can I handle this?
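Regarding the saturation question, this is the sort of limiting I am considering adding in the simulation loop; a minimal Python sketch (the thrust and moment limits are made-up numbers for a small vehicle, and the signs follow the matrix above):

    import numpy as np

    U1_MAX = 2.0 * 0.18 * 9.81   # assumed: max total thrust in N (about twice the weight of a 0.18 kg vehicle)
    U2_MAX = 0.05                # assumed: max net moment in N m

    def saturate_controls(u1, u2):
        # Thrust can only push (u1 >= 0) and is bounded; the moment is symmetric
        u1 = float(np.clip(u1, 0.0, U1_MAX))
        u2 = float(np.clip(u2, -U2_MAX, U2_MAX))
        return u1, u2

    # Example use inside the integration step (m, g, Ixx, phi defined elsewhere):
    # u1, u2 = saturate_controls(u1_pd, u2_pd)
    # ydd   = -(u1 / m) * np.sin(phi)
    # zdd   = -g + (u1 / m) * np.cos(phi)
    # phidd = -u2 / Ixx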
I've implemented FastSLAM using landmark detection, and the map stored is a feature map made of landmark positions. I would like to create a grid map, and my questions are about how the robot creates a grid map in SLAM: Is another landmark class used, or is the occupancy grid itself the landmark? In other words, is the grid map generated separately from the feature map? About aligning the map with the previous map's measurements, is this similar to FastSLAM 2.0, due to the fact that it considers the robot pose and the measurement at t-1? Thanks in advance.
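For reference, the grid-map update I have in mind is the standard log-odds occupancy update, done per particle (or per pose estimate); a minimal Python sketch of the cell update only, with assumed inverse-sensor-model probabilities (the ray-casting from the pose to each traversed cell is omitted):

    import math

    L_OCC  = math.log(0.7 / 0.3)   # assumed: evidence for a cell hit by a beam endpoint
    L_FREE = math.log(0.3 / 0.7)   # assumed: evidence for a cell traversed by a beam

    def update_cell(log_odds, hit):
        # Accumulate evidence; a prior log-odds of 0 corresponds to p = 0.5
        return log_odds + (L_OCC if hit else L_FREE)

    def occupancy_probability(log_odds):
        return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

    l = 0.0
    for observation in (True, True, False, True):
        l = update_cell(l, observation)
    print(occupancy_probability(l))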
I'm entirely new to robotics and have the following design requirement for a project I'm working on: given a robot capable of positioning an object in three-dimensional space (viz. a plastic extruder, syringe, etc.), allow the robot's end user to manually move that object through a path so that the robot's motor/controller will output the path taken. I'm sure there must be motor-controller packages with such a feature, but I have no idea what keywords to search for so as to learn more about this type of hardware. The basic idea is to allow the end user to give the robot instructions by manually moving the object through a procedure, then recording that procedure, manipulating it and playing it back. Can anyone give me some keywords to get me started in my research?
I want to use a gripper end-effector (3 fingers, single-actuator tendon-driven, with force sensors in the fingers) to grip and hold fragile objects such as an egg. I cannot seem to figure out which control scheme would be most appropriate for the situation. Would using impedance control with negligible dynamic interaction be effective, or hybrid force/position control? My idea was to simply tension the tendon (slowly) around the object until the force sensor feedback indicates that contact has been made (plus a little grip force); what category of control scheme would that fall into?
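To make my idea concrete, this is roughly the loop I have in mind; a minimal Python sketch in which read_finger_forces() and set_tendon_velocity() are hypothetical placeholders for my hardware interface (not real library calls), and the thresholds are assumed values:

    import time

    CONTACT_THRESHOLD = 0.3   # N, assumed: force that counts as "touching"
    GRIP_FORCE        = 0.8   # N, assumed: small extra squeeze after contact
    CLOSE_SPEED       = 0.05  # assumed units for slow tendon winding

    def read_finger_forces():
        raise NotImplementedError("hypothetical sensor interface")

    def set_tendon_velocity(v):
        raise NotImplementedError("hypothetical actuator interface")

    def grasp():
        # Phase 1: velocity/position mode - close slowly until contact is sensed
        while max(read_finger_forces()) < CONTACT_THRESHOLD:
            set_tendon_velocity(CLOSE_SPEED)
            time.sleep(0.01)
        # Phase 2: force mode - regulate to a small grip force
        while True:
            error = GRIP_FORCE - max(read_finger_forces())
            set_tendon_velocity(0.5 * error)   # simple proportional force loop
            time.sleep(0.01)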
I am currently working on a project that involves structure from motion using multiple cameras on multiple aerial vehicles (each vehicle has a monocular camera: think of it as a distributed stereo rig), and I am trying to extend this to include localization as well. My pipeline currently goes: robots at known locations -> take pictures -> reconstruct. When it comes to localizing the vehicles using this incrementally built map, the standard approach that comes to mind is to apply the PnP algorithm to each camera (assuming the reconstructed scene is visible to all cameras), which yields each camera's 3D pose; but this doesn't really take advantage of the fact that multiple cameras exist, apart from their use in reconstructing the environment. Is there anything I can exploit using multiple cameras/vehicles that would result in better localization accuracy for all of the vehicles, compared to a "single vehicle performing PnP on a known map" scenario?
Over the summer, I configured the ROS navigation stack on a mobile robot (with radar and a Kinect) so that it can autonomously navigate in an unknown environment. I also wrote a Python program that allows the robot to track human motion, again using an open-source library. Currently, I am applying for software jobs. I think this experience is very relevant to software development, but I am stuck on how to explain what ROS is. And when I use its packages (e.g. the navigation stack) on a robot, am I actually doing coding? Can I say ROS is just an API?
For a research project at university I have been tasked with simulating a "robot arm". The simulation is to be compared with the real-life version for accuracy. The arm will be lifting objects and building simple structures in the demonstration. My supervisor has asked that we build this simulator from scratch, so I am currently selecting a physics engine. In this link, it seems Bullet was not accurate enough for the original poster's robotic-simulation needs, albeit that was in 2010. This comparison considers Bullet better in general but says that its documentation is lacking, which is important to me as this is my first time using a physics engine of this kind. So: any thoughts on these, or on any other physics engine that may be more suitable?
Between the shoulder and elbow pitch joints, I see two types of connecting structures on larger robots. I've attached a picture. My questions: 1) What are each of the respective mechanical parts called? 2) What is their purpose for the robot arm? Thanks!
I recently joined a university robotics project (based on ROS). My main processing controller is a Raspberry Pi and the system stability controller is an ATmega32 microcontroller (it is responsible for driving the motors and checking that the communication protocols, e.g. I2C and RS-232, are working correctly). The motor controller of this robot is an I2C type, and it drives the motors according to the I2C signals coming from the I2C port of the ATmega32. The main controller communicates with the ATmega32 over RS-232. I found an Arduino example as below: // This function is called once when the program starts. void setup() { // Choose a baud rate and configuration. 115200 // Default is 8-bit with No parity and 1 stop bit Serial.begin(115200, SERIAL_8N1); } // This function will loop as quickly as possible, forever. void loop() { byte charIn; if(Serial.available()){ // A byte has been received charIn = Serial.read(); // Read the character in from the master microcontroller Serial.write(charIn); // Send the character back to the master microcontroller } } The communication between the processing board and the microcontroller is shown in the diagram below. There is an already available Arduino library for ROS called rosserial, but I want this in AVR GCC. What I want is to convert this code to traditional AVR GCC code that works in Atmel Studio 6.
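Here is a rough AVR GCC sketch of the same echo behaviour, written for an ATmega32; the 16 MHz clock is an assumption (adjust F_CPU to your board), and note that 115200 baud carries a few percent of timing error at 16 MHz, which may or may not be acceptable on your link.

#include <avr/io.h>
#include <stdint.h>

#define F_CPU 16000000UL   /* assumed crystal frequency -- change to match your board */
#define BAUD  115200UL

static void usart_init(void)
{
    /* Round to the nearest UBRR value; 8N1 frame, like the Arduino sketch */
    const uint16_t ubrr = (uint16_t)((F_CPU + 8UL * BAUD) / (16UL * BAUD) - 1);
    UBRRH = (uint8_t)(ubrr >> 8);
    UBRRL = (uint8_t)ubrr;
    UCSRB = (1 << RXEN) | (1 << TXEN);                   /* enable receiver and transmitter */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);  /* 8 data bits, no parity, 1 stop bit */
}

static uint8_t usart_receive(void)
{
    while (!(UCSRA & (1 << RXC)))
        ;                            /* wait for a byte from the master controller */
    return UDR;
}

static void usart_send(uint8_t data)
{
    while (!(UCSRA & (1 << UDRE)))
        ;                            /* wait until the transmit buffer is empty */
    UDR = data;
}

int main(void)
{
    usart_init();
    for (;;)
        usart_send(usart_receive()); /* echo every received byte back, like loop() above */
}

If the echo comes back garbled, dropping the baud rate (e.g. to 38400) or using a baud-friendly crystal such as 14.7456 MHz usually helps.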
I am currently doing my first Arduino project and I am having trouble finishing it. I have a 3S LiPo battery connected to an ESC (120 A) that is connected to a motor (270 KV). From the ESC I am connecting two jumper cables that go to GND and pin 9. I do not have a jumper cable on the red wire from the ESC. This is how it looks: Below you can see a link to a sketch that I found online. The only difference compared to my schematic is that I have an Arduino Uno. When I insert the battery and switch the ESC on, the ESC starts up correctly and the fan starts spinning, but the Arduino does not get any power; it stays off. I also noticed that my motor has 4 cables: 3 "bigger" cables that go to the ESC's 3 big cables, and a 4th one hanging loose right now because I do not quite know what to do with it. I also noticed there is a socket on the ESC where I could insert this, but the socket has 6 positions whereas the loose wire from the motor has 5, so I am a bit unsure whether it should be connected there or not. So, to summarize, the problem is that the Arduino does not turn on with my current schematic. Any help or tips are very much appreciated!
I have a hexapod that a friend and I built this summer, but there is a big problem whenever we try to move multiple servos. When using the Adafruit_Python_PCA9685 library, I am able to move the servos perfectly fine for a short period, but then they will breakdown and start erratically twitching. To illustrate the problem, I just modified a few lines from Adafruit's simpletest.py program. Here is the code: # Simple demo of of the PCA9685 PWM servo/LED controller library. # This will move channel 0 from min to max position repeatedly. # Author: Tony DiCola # License: Public Domain from __future__ import division import time # Import the PCA9685 module. import Adafruit_PCA9685 # Uncomment to enable debug output. import logging logging.basicConfig(level=logging.DEBUG) # Initialise the PCA9685 using the default address (0x40). # pwm = Adafruit_PCA9685.PCA9685() # Alternatively specify a different address and/or bus: pwm = Adafruit_PCA9685.PCA9685(address=0x40, busnum=2) # Configure min and max servo pulse lengths servo_min = 300 # Min pulse length out of 4096 servo_max = 400 # Max pulse length out of 4096 # Helper function to make setting a servo pulse width simpler. def set_servo_pulse(channel, pulse): pulse_length = 1000000 # 1,000,000 us per second pulse_length //= 60 # 60 Hz print('{0}us per period'.format(pulse_length)) pulse_length //= 4096 # 12 bits of resolution print('{0}us per bit'.format(pulse_length)) pulse *= 1000 pulse //= pulse_length pwm.set_pwm(channel, 0, pulse) # Set frequency to 60hz, good for servos. pwm.set_pwm_freq(60) print('Moving servo on channel 0, press Ctrl-C to quit...') while True: # Move servo on channel O between extremes. for i in range(0, 3): for j in range(0,3): k = (4*i)+j pwm.set_pwm(k, 0, servo_min) time.sleep(1) pwm.set_pwm(k, 0, servo_max) time.sleep(1) And here is a video of the "erratic movement" (the first 8 seconds are normal movement) I am running the code on a Beagle Bone Green Wireless with ubuntu on it and I am using turnigy TGY-S091D servos. Here is a photo of the wiring I don't have enough reputation to post more detailed pictures, but hopefully this is enough. Please help.
I currently have an error-state Kalman filter with the state vector $(p, v, q, \omega, a, g)$ where $q$ is the quaternion orientation. I would like to add the information coming from a magnetometer to this sensor fusion. I have calibrated the magnetometer, and we can assume that the data arriving at the filter input is already processed. How do I extend my state vector to account for the new input, or, since I do not directly care about estimating it, should I not include it at all? I think I can initialize my state vector correctly by performing TRIAD using the magnetic field vector; is this the right approach? How does the magnetic field vector help to stabilize my quaternion attitude? I tried to search around but didn't find many resources on how the math works when the magnetometer is included. Any links would be very helpful as well.
I am working with a Kinetix 5500 servo drive with a rotational actuator to locate a keyway on a shaft. The shaft will rotate, and a laser sensor will detect the height difference when the keyway passes in front of the sensor. I need to record the position of the keyway so I can orient it to a known position. The laser sensor will be wired to Registration Input 1 (see pages 62 and 65 of user manual above), to capture the position of the servo more accurately than using a sensor wired in to the PLC's I/O. I have been able to find little information about how to actually use a registration input in my PLC program. I know there is some sequence that is required to arm the registration input to look for a transition, and then grab the saved servo position from the drive. How does this process work? Is there documentation from Rockwell that I'm not finding? (I found a programming manual for PowerFlex, but not Kinetix) Additional information: the PLC is an AB ControlLogix, and the drive is connected via Ethernet. I am familiar with setting up, tuning, and issuing Motion Direct commands for Kinetix drives. Related nonexistent tags: Allen-Bradley, Kinetix, ControlLogix
I am working on a setup from 2008 with a 6 axis USB motion controller and 4 vsd-e drives very similar to the setup shown here. The motion controller board is exactly the same as in the first post of these two threads, it also says "USB-SPI 6 axis Rev 0" on the PCB: Motion controller board example 1. Motion controller board example 2. I have installed GDtool and SimpleMotion library. When trying to connect to the device with GDtool, the first step produces the following output in the event log: USB mode configured. Enabling configure mode. Shell command: OPEN NORMAL Running OPEN... Normal mode enabled > Got input from SPI shell Saatiin viesti: 0 Parametreja: 2 0 USB mode configured. isBootloaderMode The second step fails with the message "Connection failed. Please check connections." and the following output in the event log: Configuring connection Shell command: OPEN Running OPEN... cDevice::sendCommandOnly( 2, 0 ) 0x200008a -> 0x0 cDevice::sendCommandOnly( 2, 0 ) 0x200008a -> 0x0 cDevice::sendCommandOnly( 2, 0 ) 0x200008a -> 0x0 Connection failed. > Got input from SPI shell Saatiin viesti: 0 0 Parametreja: 3 0 setConnectionStatus connected Connection failed 0 setConnectionStatus connected Connection failed I played around with the example program "SimpleMotionTest" and "FT_Prog" by FTDI to manipulate the USB-controllers Product Description string. The best I could do with SimpleMotionTest was "Communication error. Possibly drive not in SPI mode." It seems that the axis names are given by the USB Product Description string, because if axis name in SimpleMotionTest and the descriptor string do not match, it says "USB device with given axis not found." This makes sense with the Granite Devices tuning cables but not with a 6 axis controller which can only have one Product Descriptor string. Is it possible to configure the USB6AX with GDtool? Is it possible to control it with the SimpleMotion library? If yes what am I doing wrong and if no what is the suitable configuration utility and how can I interface the controller with my LabView/C++/... application? If anyone is out there who still uses the same controller thanks you for sharing you experiences!
I am working on implementing a Kalman filter for position and velocity estimation of a quadcopter using IMU and vision. First I am trying to use the IMU to get position and velocity. In a tutorial [1] the process model for velocity estimation using IMU sensor data is based on Newton's equation of motion $$ v = u + at \\ \\ \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix}_{k+1} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix}_{k} + \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix}_k \Delta T $$ while in the paper [2] the process model uses angular rates along with acceleration to propagate the linear velocity based on the below set of equations. $$ \begin{bmatrix} u \\ v \\ w \\ \end{bmatrix}_{k+1} = \begin{bmatrix} u \\ v \\ w \\ \end{bmatrix}_{k} + \begin{bmatrix} 0& r& -q \\ -r& 0& p \\ -p& q& 0 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \\ \end{bmatrix}_{k} \Delta T+ \begin{bmatrix} a_x \\ a_y \\ a_z \\ \end{bmatrix}_{k} \Delta T + \begin{bmatrix} g_x \\ g_y \\ g_z \\ \end{bmatrix}_{k} \Delta T $$ where u, v, w are the linear velocities | p, q, r are the gyro rates while a_x,a_y,a_z are the acceleration | g_x,g_y,g_z are the gravity vector Why do we have two different ways of calculating linear velocities? Which one of these methods should I use when modeling a quadcopter UAV motion? [1] http://campar.in.tum.de/Chair/KalmanFilter [2] Shiau, et al. Unscented Kalman Filtering for Attitude Determination Using Mems Sensors Tamkang Journal of Science and Engineering, Tamkang University, 2013, 16, 165-176
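One way to read the difference (an interpretation, not something stated explicitly in either reference): the first model propagates the velocity expressed in a non-rotating world frame, while the second propagates the velocity expressed in the rotating body frame, where the transport theorem contributes the extra $-\boldsymbol{\omega}\times\mathbf{v}$ term and gravity has to be rotated into the body frame:
$$ \dot{\mathbf v}_{\text{body}} \;=\; \mathbf a_{\text{IMU}} \;+\; R^{\top}\mathbf g \;-\; \boldsymbol\omega \times \mathbf v_{\text{body}} $$
Either form can be used for a quadcopter, as long as the frame in which the velocity state lives is consistent with the measurement models and with the rest of the filter.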
Must the operation of a PLC (programmable logic controller) be halted while its logic is being changed or can new logic be downloaded to it during runtime, without memory loss, giving a seamless transition from one operation cycle to other?
I have a short question about the AlexMos gimbal controller. The controller receives the gyroscope and accelerometer data from the IMU that is mounted on the camera. In the ideal case the camera stays perfectly in position, which means there would be no gyroscope signal, since there is no movement. So the only data for positioning would come from the accelerometer. Is there a second IMU on board that provides gyro data? None of the gimbal controllers from AliExpress seem to have an on-board IMU, but in that case the controller can only use the accelerometer, right?
I have an RRR planar robot: Its forward kinematics transform is: $$ {}^{0}T_3 = \\ \left[\begin{array}{cccc} \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & - \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & 0 & \mathrm{l_2}\, \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2}\right) + \mathrm{l_1}\, \cos\!\left(\mathrm{\theta_1}\right) + \mathrm{l_3}\, \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right)\\ \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & 0 & \mathrm{l_2}\, \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2}\right) + \mathrm{l_1}\, \sin\!\left(\mathrm{\theta_1}\right) + \mathrm{l_3}\, \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right)\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array}\right] $$ With the joint parameters $q = \left[\begin{array}{ccc} \mathrm{\theta_1} & \mathrm{\theta_2} & \mathrm{\theta_3} \end{array}\right]^T$ and the end-effector position $X = \left[\begin{array}{ccc} \mathrm{x} & \mathrm{y} & \mathrm{\theta} \end{array}\right]^T$. $\theta$ is constrained to 0. (Also the image is misleading, $\theta$ is actually $\theta_1 + \theta_2 + \theta_3$.) The jacobian matrix is $$ J = \\ \left[\begin{array}{ccc} - \mathrm{l_2}\, \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2}\right) - \mathrm{l_1}\, \sin\!\left(\mathrm{\theta_1}\right) - \mathrm{l_3}\, \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & - \mathrm{l_2}\, \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2}\right) - \mathrm{l_3}\, \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & - \mathrm{l_3}\, \sin\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right)\\ \mathrm{l_2}\, \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2}\right) + \mathrm{l_1}\, \cos\!\left(\mathrm{\theta_1}\right) + \mathrm{l_3}\, \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & \mathrm{l_2}\, \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2}\right) + \mathrm{l_3}\, \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right) & \mathrm{l_3}\, \cos\!\left(\mathrm{\theta_1} + \mathrm{\theta_2} + \mathrm{\theta_3}\right)\\ 1 & 1 & 1 \end{array}\right] $$ I'm trying to find $\ddot{q}$: $$ [\ddot{q}] = J^{-1}(q) \cdot \left([\ddot{X}] - \dot{J}(q) \cdot J^{-1}(q) \cdot [\dot{X}] \right) $$ My question is: how can I find $\dot{J}$? What is it?
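For what it's worth, $\dot{J}$ is just the element-wise time derivative of the Jacobian along the trajectory; by the chain rule,
$$ \dot{J}(q,\dot q) \;=\; \frac{d}{dt}\,J\big(q(t)\big) \;=\; \sum_{i=1}^{3} \frac{\partial J}{\partial \theta_i}\,\dot{\theta}_i . $$
As a worked entry, the $(1,3)$ element $-l_3\sin(\theta_1+\theta_2+\theta_3)$ differentiates to $-l_3\cos(\theta_1+\theta_2+\theta_3)\,(\dot\theta_1+\dot\theta_2+\dot\theta_3)$; doing this for every element gives the $\dot J$ needed in the acceleration-level relation above.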
I'm attempting to build a robot that uses a particle filter to identify where it is relative to a known map. The robot only has IR sensors, so while it is able to determine its distance from landmarks, it does not know which landmark it is "looking" at. I'm following this very helpful book to build my particle filter. When incorporating the sensor measurements, it is assumed that you know both the distance to a landmark and which specific landmark you are observing. What would I need to do if I know the map and the distance measurements, but not which landmark is being observed? Would this require SLAM? Or could I simply increase the weight of particles that are about that distance from some landmark?
From my understanding, whole-body control is built upon computed-torque control. What is the difference between the two? And how can whole-body control be done if a dynamic model is not available?
I am currently working on the hydraulic control system of a vehicle, but I do not have any mathematical model of the system, so I am not able to derive a state-space model for it. I have to control the piston position in a hydraulic cylinder, with the desired piston position as the input. The system is modeled in MATLAB/Simulink using the physical-modeling tools of the Simscape and SimFluids toolboxes. As a first approach I have already applied feedback with a P controller, as shown in the figure (Matlab System); the P gain was found by trial and error. Now, if I want to apply a state observer or optimal control, how can I apply it to this system? And is it necessary to derive a mathematical model of the system in order to design such a controller? Thank you.
I have a SICK LMS 100 laser scanner attached to my robot, which scans the environment. Data acquisition for the laser scanner is already done, and an encoder attached to the wheel gives the distance travelled by the robot. A map of the indoor room is already given in our case, and it contains 3 landmarks. I am working in the LabVIEW environment. My question is: how can I localize the robot based on the measured distances and angles?
Following my last question which confirmed that you can change the logic on a PLC while it is running, I'm now trying to understand the timings with which this happens. Say that a PLC is sent a command to update its logic (I'm assuming that this can be done without using the PLC programming software, but could be wrong), and that the new, pending program code is stored in an area of memory which program execution then switches to when all of the new logic has been downloaded onto it. My questions are this: 1) Does a command need to be sent to switch to the new logic, or does this happen automatically once it has been downloaded? 2) Will the PLC switch to the new logic at the start of the next scan cycle (i.e. before that scan cycle's input scan), or at the start of the logic scan. 3) Would it always be that the new logic takes effect the scan cycle after it has finished being downloaded, or could there be a delay? I am trying to look for timing relationships between the networking data and the updated PLC logic, so need to be strict. If anyone knows of any documentation for commands to update a PLC's logic while it is still running could they please point me to it? Many thanks.
I am trying to write some simple code to perform IK for a 6 DoF redundant robot using the Jacobian pseudo-inverse method. I can solve IK for a desired pose using the iterative method, and I want to now focus on applying constraints to the solution. Specifically I'm interested in Keep the end effector orientation constant as the robot moves from initial to final pose Avoid obstacles in the workspace I've read about how the redundancy/null space of the Jacobian can be exploited to cause internal motions that satisfy desired constraints, while still executing the trajectory, but I am having trouble implementing this as an algorithm. For instance, my simple iterative algorithm looks like error = pose_desired - pose_current; q_vel = pinv(J)*error(:,k); q = q + q_vel; where $q$ is 'pushed' towards the right solution, updated until the error is minimized. But for additional constraints, the equation (Siciliano, Bruno, et al. Robotics: modelling, planning and control) specifies $$ \dot{q} = J^\dagger*v_e - (I-J^\dagger J)\dot{q_0} \\ \dot{q_0} = k_0*(\frac{\partial w(q)}{\partial q})^T $$ where $w$ is supposed to be a term that minimizes/maximizes a chosen constraint. I don't understand the real world algorithmic implementation of this 'term' in the context of my desired constraints: so if I want to keep the orientation of my end effector constant, how can I define the parameters $w$, $q_0$ etc.? I can sense that the partial derivative signifies that it is representing the difference between how the current configuration and a future configuration affect my constraint, and can encourage 'good' choices, but not more than that.
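Here is a rough numerical sketch (Python/NumPy) of how that null-space term is often implemented in practice. The secondary objective used here, distance from the joint limits, and all gains and limits are illustrative assumptions, and jacobian() and fk_pose() stand in for your own kinematics code.

import numpy as np

Q_MIN = np.full(7, -2.9)   # assumed joint limits [rad]
Q_MAX = np.full(7,  2.9)

def jacobian(q): ...       # your geometric Jacobian (assumed available)
def fk_pose(q): ...        # your forward kinematics returning the task-space pose (assumed available)

def w(q):
    """Secondary objective to maximize: stay away from the joint limits."""
    q_mid = 0.5 * (Q_MAX + Q_MIN)
    return -np.sum(((q - q_mid) / (Q_MAX - Q_MIN)) ** 2) / (2.0 * len(q))

def grad_w(q, eps=1e-6):
    """Numerical gradient of w(q) -- fine for a first implementation."""
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (w(q + dq) - w(q - dq)) / (2.0 * eps)
    return g

def ik_step(q, pose_des, k0=1.0, dt=0.01):
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    v_e = pose_des - fk_pose(q)          # task-space error used as a velocity command
    q0_dot = k0 * grad_w(q)              # secondary task: climb the gradient of w(q)
    N = np.eye(len(q)) - J_pinv @ J      # null-space projector
    q_dot = J_pinv @ v_e + N @ q0_dot    # primary task plus internal (self-)motion
    return q + q_dot * dt

For keeping the end-effector orientation constant, though, the simpler route is usually to put the orientation error directly into the primary task vector $v_e$ (so $J$ is the full 6-row Jacobian), and reserve the null-space term for soft objectives such as joint-limit or obstacle avoidance, where $w(q)$ could be, for example, the distance between the closest arm point and the obstacle.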
Can anyone share their DH parameters for a simple PRP manipulator. I have some confusion in setting up the axes and obtaining a solution.
Is it possible to fetch the current encoder position of a drive with the old SimpleMotion library (not V2) and a Granite Devices VSD-E through the smRawCommand(...) function?
I'm developing a 2-DOF SCARA arm with inverse kinematics. It works fine when moving to any single desired point, but how can I draw a line? Is there an efficient algorithm for doing this?
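One common approach, sketched below in Python: sample the line densely in Cartesian space and solve the closed-form 2-link IK at each waypoint, so the tool traces a straight path even though the joints move nonlinearly. The link lengths are assumed values and send_to_joints() is a placeholder for your own motor interface.

import numpy as np

L1, L2 = 0.20, 0.15          # link lengths in metres (assumed values)

def send_to_joints(th1, th2):
    """Placeholder: command your joint actuators here."""
    pass

def ik_2link(x, y, elbow=+1):
    """Closed-form IK for a planar 2-link arm; elbow picks the up/down solution."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    c2 = np.clip(c2, -1.0, 1.0)          # guard against points marginally out of reach
    s2 = elbow * np.sqrt(1.0 - c2 * c2)
    th2 = np.arctan2(s2, c2)
    th1 = np.arctan2(y, x) - np.arctan2(L2 * s2, L1 + L2 * c2)
    return th1, th2

def draw_line(p_start, p_end, step=0.001):
    """Interpolate the line in Cartesian space and run IK on every waypoint."""
    p_start, p_end = np.asarray(p_start, float), np.asarray(p_end, float)
    n = max(2, int(np.linalg.norm(p_end - p_start) / step))
    for t in np.linspace(0.0, 1.0, n):
        x, y = (1.0 - t) * p_start + t * p_end
        send_to_joints(*ik_2link(x, y))

# Example: a 10 cm horizontal line
draw_line((0.10, 0.15), (0.20, 0.15))

How straight the drawn line looks is then governed by the waypoint spacing relative to your actuators' resolution and speed.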
I don't know if this is the right place to ask this question, but I'll give it a try. A friend of mine is using Unity 3D to simulate a robot arm; however, he's having some trouble when he needs to rotate the arm. The arm can already grab objects with its hand, but it sometimes crashes, seemingly at random, while rotating the wrist. Here's an image of the robot arm: That's the arm he's working on for the simulation, and you can see that every part of the arm is connected through Unity's parent/child hierarchy, so rotating one piece also rotates all the nested parts below it. However, something seems to fail when the wrist is rotated. Three main questions: How should the arm's rotation be implemented properly? How should picking things up with the hand be implemented properly? How could you make it move by itself to a given reachable point? I'm not asking for code or anything like that, just how these three things should be done properly in order to have a fully functional robot arm simulation, i.e. the main concepts behind it.
The problem: line-follower robots are non-linear systems, yet they use a linear PID regulation algorithm to bring the error to zero. However, using a linear regulator is not the best way to drive a non-linear system. There is a technique called global linearization of non-linear systems, an algorithm that can bring the regulation error to zero. In order to use it, one has to know the dynamics of the robot: the Coriolis, inertia, gravity and friction matrices. These were once measured for the EDDA manipulator and are now used in research, and that is how I learned about global linearization. The question: I'd like to identify the dynamic parameters of my line-follower robot. I already have the kinematic model, since it is a simple (2,0) platform. Does anyone have pointers to good sources on identifying the physical parameters of a mobile robot like this?
I am thinking about a project at my university for doing on-site waste sorting. The problem with having one waste bin for recyclables, compost and landfill and doing the sorting at a facility, is that the organic materials can destroy paper and other recyclable materials. I have searched quite a bit but all of the robotic solutions available that I have found are for facilities. I am looking for a robotic bin to be deployed in replacement of the traditional waste and recycling bin. The budget is approximately $1000. $1000 -- is that materials cost only, or does it include assembly and maintenance costs?: materials and assembly not maintenance for all 4 bins plus robotic sorter it must be bin-sized (whatever that means): let's say 3 ft (height) x 2 ft (length) x 2 ft (width) per bin and there are 4 bins -- recycle paper, recycle plastic, compost, landfill Do you have weight requirements so the consumer can move the bin to the curb, or are you planning to have the robot separate the materials into other, mobile, bins?: There are no weight requirements. We should be able to use a forklift to move it. What would be nice is to have a single waste entry hole which customers use. The device should internally sort the waste into the 4 bins listed above. The entry hole and sorter should be 1 ft (height) x 8 ft (length) x 2 ft (width) to fit directly on top of the set of 4 adjacent bins. What power is available?: It can be plugged into a wall outlet (in case there is one nearby) but should also be able to use a rechargeable battery (in case there isn't). What about environmental concerns, especially if this is to be located outside? Don't forget noise constraints and safety concerns.: The whole point of this is to reduce waste and help the environment. Assume the noise it can make can be as loud as a heater, AC unit, or fan. The entire system should be one box with one waste entry hole -- the rest should be blackbox-ed, so it should be safe. And, most importantly, what characteristics of the materials are you planning to use for doing the actual sorting?: the shorter should be able to detect pure recyclable plastic vs recyclable paper vs organic/food material vs pure waste using either computer vision or chemical techniques or both. What size requirements are there for the products themselves?: The waste whole should be .75 ft x .75 ft so assume the waste is less than .75 ft^3 For example, how to detect organic garbage from non-organic (and do it many times a day without human intervention to "resupply chemicals") can be a topic of research that could take a couple of years itself: Yes this is a good point. However, my question is more focused on whether it's possible to use the robots already commercially available today to solve this problem. I read through how to ask but so here is my specific question: Is there a commercially available robot today that does or can be retro-fitted to do this on-site waste sorting?
I am programming an iRobot Create to follow serial commands using Arduino Uno. I have written the library, and found the serial commands to move the robot forward in the iRobot manual, but I couldn't find the bytes for other movements (backward, right and left). Could you please help me with this. How can I move the robot backward, right and left. I will upload my code library. #include "iRobot.h" #if defined(ARDUINO) && ARDUINO >= 100 //to check if the arduino is plugged and the its number is above 100 #include "Arduino.h" #include "SoftwareSerial.h" // so we can use all pins SoftwareSerial softSerial = SoftwareSerial(10, 11); #endif iRobot::iRobot() //constructor to set the pins { _rxPin = 10; _txPin = 11; } void iRobot::begin() //needs to be called inside setup function { delay(2000); // Needed to initialize the iRobot, the delay is to ensure that each command before this is excucted or there will be overlap // define pin modes for software tx, rx pins for iRobot pinMode(_rxPin, INPUT); pinMode(_txPin, OUTPUT); softSerial.begin(19200); //we set the data rate received by the irobot Serial.begin(19200); // set the data rate sent from the arduino //these two line are necessary from the irobot manual softSerial.write(128); // This command starts the communication. softSerial.write(131); // set mode to safe, it will stop of there is a cliff or a wheel drops or Serial.write("Enter Command: "); // here, if we start serial monitor, we can enter the command } void iRobot::runIt() //needs to be called inside loop function { if (Serial.available()) { String data = String(Serial.read()); //this will read the command, each word will call a function if(data == "forward") goForward(); if(data == "backward") goBackward(); if(data == "left") goLeft(); if(data == "right") goRight(); } } void iRobot::goForward() { softSerial.write(137); // Opcode number for DRIVE, it's understood by the irobot that 137 means drive // Velocity (-500 – 500 mm/s) softSerial.write((byte)0); softSerial.write((byte)200); //Radius (-2000 - 2000 mm) softSerial.write((byte)128); // we should adjust this to make the robot go straight or slightly right or left softSerial.write((byte)0); // we should adjust this to make the robot go straight or slightly right or left } void iRobot::goBackward() { softSerial.write(137); //we should change the bytes to make the robot drive backward //negative vaule of velocity drive the robot forward // Velocity (-500 – 500 mm/s) softSerial.write((byte)0); softSerial.write((byte)200); //Radius (-2000 - 2000 mm) softSerial.write((byte)128); // we should adjust this to make the robot go straight or slightly right or left softSerial.write((byte)0); // we should adjust this to make the robot go straight or slightly right or left } void iRobot::goLeft() { softSerial.write(137); //we should change the bytes to make the robot drive left //radius value should be positive // Velocity (-500 – 500 mm/s) softSerial.write((byte)0); softSerial.write((byte)200); //Radius (-2000 - 2000 mm) softSerial.write((byte)128); // we should adjust this to make the robot go straight or slightly right or left softSerial.write((byte)0); // we should adjust this to make the robot go straight or slightly right or left } void iRobot::goRight() { softSerial.write(137); //we should change the bytes to make the robot drive right //radius value should be negative // Velocity (-500 – 500 mm/s) softSerial.write((byte)0); softSerial.write((byte)200); //Radius (-2000 - 2000 mm) softSerial.write((byte)128); // we should adjust this to make 
the robot go straight or slightly right or left softSerial.write((byte)0); // we should adjust this to make the robot go straight or slightly right or left } Update: I have connected the robot to the Arduino and tried the code. Unfortunately, the robot didn't move. This is my Arduino code: #include <Arduino.h> #include <iRobot.h> iRobot irobot; void setup() { irobot.begin(); } void loop() { irobot.runIt(); } I have connected pins 10, 11 and GND on the Arduino to pins 3, 4 and 7 on the robot.
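In case it helps, my understanding of the Create Open Interface documentation is that the DRIVE command (opcode 137) takes velocity and radius as 16-bit signed two's-complement values, sent high byte first. A worked example under that assumption: driving backward at 200 mm/s means velocity = -200 = 0x10000 - 200 = 0xFF38, so the velocity high byte would be 0xFF (255) and the low byte 0x38 (56); straight-line driving uses the special radius 0x8000 (which matches the 128, 0 already written in goForward above), and turning in place uses radius 0x0001 (counter-clockwise) or 0xFFFF (clockwise). Please double-check these byte values against your copy of the Open Interface manual before running them on the robot.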
I am trying to understand the EKF theory. Can the state transition function depend on variables that are not part of the state space? For example, the state propagation below depends on the quaternions, which keep changing. If I get the quaternions from a very dependable source and I don't want to filter them, can I take them out of the state space? In that case, when calculating the Jacobian I would treat the quaternions as constants, even though they are dynamic values coming from an external sensor. What are the implications of this approach? $$ \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix}_{k+1} = \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix}_{k} + \begin{bmatrix} \text{Rotation matrix}\\ \text{from}\\ \text{quaternions} \end{bmatrix} * \begin{bmatrix} u \\ v \\ w \\ \end{bmatrix}_{k} $$ The state space I am using is $$ \begin{bmatrix} x& y& z& u& v& w \end{bmatrix}^T $$ where x, y, z are the position coordinates and u, v, w are the linear velocities.
I have a confusion between the wrist frame, the tool frame and the end effector as used in the unmodified DH conventions. Can you differentiate with diagrams?
I have designed a 2-degree-of-freedom robot using DC motors and gearboxes. However, I want the robot to be compliant, as it will be used in an unstructured environment with humans. I would like to know whether going with an SEA (series elastic actuator) is a better option than using an impedance or admittance controller. For the moment I have been able to model the environment as a spring and damper, but this model is not very robust. Is it a good idea to combine passive and active compliance? I read in G. A. Pratt's report that SEAs have drawbacks when small motions are required, but isn't this a problem with all compliance control? What are the advantages of SEAs over conventional active compliance control methods?
Is it possible to build a CNC machine whose linear motion system does not contain any timing belt (pulley) or lead screw (threaded rod)? I was wondering whether I could drive the linear motion directly by securing the wheels of a slider onto aluminium rails and connecting the wheels directly to a stepper motor. The main objective of this question is to find the cheapest method of obtaining controlled linear motion.
I'm currently looking for an industrial robot for a depalletizing application. I had a look at some datasheets, but I'm not quite sure how the maximum payload is defined. For example, the KUKA Agilus weighs 52 kg and looks rather strong, but is listed with only 6 kg. Is this really the heaviest weight the robot can lift, or is it the heaviest object the robot can move at its maximum velocity, meaning I could move heavier items at lower speeds?
I am trying to solve inverse kinematics (using the Jacobian pseudoinverse method) for a 7 DoF arm, but because of the way the robot is mounted, the base frame does not coincide with the frame of the first joint, so there is a transformation between base and frame 0. As the Jacobian expresses the joint-end effector velocity relationship w.r.t. the base frame, and also because my target poses etc. are expressed w.r.t the base frame, I encoded the transformation as an extra row in my DH parameters, but these angles are always fixed. Hence, I ended up with 8 rows in my DH although I have only 7 joints. Because of this, my inverse kinematics algorithm, when trying to minimize the end effector pose error, continuously attempts to change the angle of the "first" joint which really isn't a joint at all. Hence, although the algorithm thinks the end effector has reached the target position, in real world it would not, because that base-robot transformation would be invalid for my setup. If I force this angle to be constant after every update of the iteration, the algorithm fails to converge and gets stuck at some pose. So I am guessing my approach for encoding the fixed base-first joint transformation is wrong? How are these transformations usually dealt with?
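A common way to handle a fixed base offset (a sketch of the idea, not specific to any solver) is to keep it out of the joint chain entirely:
$$ {}^{\text{base}}T_{\text{ee}}(q) \;=\; {}^{\text{base}}T_{0}\;{}^{0}T_{\text{ee}}(q), $$
so the DH table keeps its 7 joint rows and the Jacobian keeps 7 columns. If the Jacobian is computed in frame 0, it can then be expressed in the base frame with the fixed rotation ${}^{\text{base}}R_{0}$:
$$ J_{\text{base}} \;=\; \begin{bmatrix} {}^{\text{base}}R_{0} & 0 \\ 0 & {}^{\text{base}}R_{0} \end{bmatrix} J_{0}. $$
With that, the iteration never sees a spurious eighth joint to optimize over.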
Is it possible to do pick-and-place tasks using only forward kinematics and object-detection information from a camera? I want to avoid having to do inverse kinematics calculations for my robot. Is there a way to avoid them for pick-and-place object sorting?
Can anyone tell me if it is possible to send data from the analog pins of the ArduPilot to the analog pins of the Arduino? For example, I would like to trigger a button on a channel of my radio control, and the ArduPilot should then send a specific number to the Arduino. Would anyone have any idea how I can do this? Thank you in advance.
I studied the forward and inverse kinematics of the robot and have a clear understanding of them. I am now developing my MATLAB simulation of a two-wheeled differential-drive robot. The robot moves in a straight line and has been integrated with PID control. I want to show the movement of the robot with an animation. The equation is: the vector in the initial (world) frame = inverse of the rotation matrix x the vector in the robot frame. My rotation matrix is [0 -1 0; 1 0 0; 0 0 1], since the angle is 90 degrees. The robot-frame vector is [a; b; c], where a = total translational speed = (r x w1)/2 + (r x w2)/2, b = speed in the y direction = 0, and c = total rotational speed = (r x w1)/2l + (r x w2)/2l, where l = 0.12, r = 0.033, and w1 and w2 are the angular velocities of wheels 1 and 2. I have w1 and w2 data in a file as w1: 1 2 3 4 5 6 8 9 and w2: 1 3 4 5 6 7 8 9. I want to run an algorithm such that MATLAB evaluates the equation, calculates the total translational and rotational speeds in the world frame, and plots the graph. I also want to make an animation in which a box moves according to this equation. How can I do that? I can run it once if I input a single pair of values for w1 and w2, but not continuously. Help, thanks very much.
I am building a line-following robot with a Raspberry Pi Zero, using the Explorer pHAT. The robot is supposed to follow black, red, green and blue lines and react to the colour, so it should drive faster on a red line and slower on a blue line. I do not have much experience with line followers, so I am not sure what kind of hardware I need. My questions are: Is it possible to follow a red, blue or green line with IR LEDs? Most line followers obviously use IR LEDs (like the TCRT5000), but they are designed to follow black lines only. I have an RGB sensor which works quite well with the Explorer pHAT, and I am able to recognize colours very accurately; is it possible to use this single sensor as a line follower? And since the robot should be able to drive on a curvy course, is a single sensor probably not enough?
There are many exploration methods in a reinforcement-learning setting, but two of the most widely used are Ornstein-Uhlenbeck (OU) processes and epsilon-greedy approaches. Could anyone explain the major advantages and disadvantages of using one over the other? One issue with OU processes is that you need two additional parameters to shape the exploration noise, which might mean additional tuning. I'd be glad if someone could help!
I'm trying to perform stereo camera calibration, rectification and disparity map generation. It's working fine with normal sample data. However, I'm trying to use the dual cameras on an iPhone 7+, which have different zoom. The telephoto lens has 2X zoom compared to the wide angle camera. I ran the images through the algorithm, and it is succeeding, although with a high error rate. However, when I open up the rectified images, they have a weird spherical look to the edges. The center looks fine. I'm assuming this is due to the cameras having different zoom levels. Is there anything special I need to do to deal with this? Or do I just need to crop any output to the usable undistorted area? Here is what I'm seeing: EDIT: I tried using the calibration result from these checkerboard images to rectify an image of some objects, and the rectification was way off, not even close. If I rectify one of my checkerboard images, they are spot on. Any ideas why that happens? These are what my input images look like that result in the spherical looking output image. They are both taken from the exact same position, the iPhone was mounted to a tripod and I used a bluetooth device to trigger the shutter so the image wouldn't get shaken, my code automatically takes one image with each lens. I took 19 such images from different angles, all images show the full checkerboard. The more zoomed in image is the one that rectified to the top spherical looking image. This is the code I am running. I compiled it and ran it using the sample images, that worked fine. Ran it with my own images and here I am. https://github.com/sourishg/stereo-calibration I might just need to crop the result to a certain area. Regardless, it doesn't seem to work cropping or not when I use a picture I took of normal objects. Here is the output of an image of normal objects I ran through the filter:
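If the stretched borders come from rectification keeping every source pixel (alpha = 1), one thing worth trying is letting OpenCV crop to the valid region instead. A small sketch using OpenCV's Python bindings rather than the C++ code in that repository; the variable names are placeholders for your calibration outputs and images:

import cv2

# K1, D1, K2, D2, R, T are assumed to come from stereoCalibrate; image_size = (width, height)
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, image_size, R, T,
    alpha=0)                      # alpha=0 keeps only valid (undistorted) pixels

map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

rect_left  = cv2.remap(img_left,  map1x, map1y, cv2.INTER_LINEAR)
rect_right = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)

# roi1/roi2 give the valid rectangles if you prefer to crop manually instead
x, y, w, h = roi1
rect_left_cropped = rect_left[y:y+h, x:x+w]

If results on non-checkerboard scenes are still far off, two common culprits with phone dual cameras are autofocus changing the intrinsics between shots and slight motion between the two exposures, so it may be worth locking focus and exposure while capturing both the calibration and test images.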