I have a dataset where measurements were taken at 1 Hz, and I am trying to use a Kalman filter to add predicted samples in between the measurements, so that my output is at 10 Hz. I have it working OK when the velocity is linear, but when the direction changes, the filter takes a while to catch up. I am new to Kalman models, so I am very likely making some mistakes in my settings. What can I do to improve this? See the attached image for an example: the red is measured data, with stepping in between measurements; the blue is the Kalman-corrected output. std::vector<double> measurements is a dummy data array I am testing with. The main Kalman code is based on GitHub: hmartiro/kalman-cpp (kalman.cpp). My code is:

int main(int argc, char* argv[]) {

  int n = 3; // Number of states
  int m = 1; // Number of measurements

  double dt = 1.0/30; // Time step

  Eigen::MatrixXd matA(n, n); // System dynamics matrix
  Eigen::MatrixXd matC(m, n); // Output matrix
  Eigen::MatrixXd matQ(n, n); // Process noise covariance
  Eigen::MatrixXd matR(m, m); // Measurement noise covariance
  Eigen::MatrixXd matP(n, n); // Estimate error covariance

  // Discrete motion, measuring position only
  matA << 1, dt, 0, 0, 1, dt, 0, 0, 1;
  matC << 1, 0, 0;

  // Reasonable covariance matrices
  matQ << 0.001, 0.001, .0, 0.001, 0.001, .0, .0, .0, .0;
  matR << 0.03;
  matP << .1, .1, .1, .1, 10000, 10, .1, 10, 100;

  // Construct the filter
  KalmanFilter kf(dt, matA, matC, matQ, matR, matP);

  // List of noisy position measurements (yPos)
  std::vector<double> measurements = {
    10,11,13,13.5,14,15.2,15.6,16,18,22,20,21,19,18,17,16,17.5,19,21,22,23,25,26,25,24,21,20,18,16
  };

  // Best guess of initial states
  Eigen::VectorXd x0(n);
  x0 << measurements[0], 0, 0;
  kf.init(dt, x0);

  // Feed measurements into filter, output estimated states
  double t = 0;
  Eigen::VectorXd yPos(m);
  for(int i = 0; i < measurements.size(); i++) {

    // ACTUAL MEASURED SAMPLE
    yPos << measurements[i];
    kf.update(yPos);

    for (int ji = 0; ji < 10; ji++) // TEN PREDICTED SAMPLES
    {
      t += dt;
      kf.update(yPos);
      yPos << kf.state().transpose(); // USE PREDICTION AS NEW SAMPLE
    }
  }

  return 0;
}
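For reference, a minimal sketch (Python/NumPy, not the kalman-cpp class above) of the upsampling pattern that is usually intended here: run a predict step for every 10 Hz output sample and call the measurement update only when a real 1 Hz sample arrives, instead of feeding the filter's own prediction back in as a measurement. All matrix values below are placeholders; increasing the process noise on the velocity state is what lets the filter react faster when the direction changes.

import numpy as np

dt = 0.1                                  # 10 Hz prediction step
A = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity model: [position, velocity]
C = np.array([[1.0, 0.0]])                # we measure position only
Q = np.diag([1e-4, 1e-2])                 # process noise; raise the velocity term to track turns faster
R = np.array([[0.03]])                    # measurement noise

x = np.array([[10.0], [0.0]])             # initial state guess
P = np.eye(2)

def predict():
    global x, P
    x = A @ x
    P = A @ P @ A.T + Q

def update(z):
    global x, P
    innovation = np.array([[z]]) - C @ x
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ innovation
    P = (np.eye(2) - K @ C) @ P

measurements = [10, 11, 13, 13.5, 14, 15.2, 15.6, 16, 18, 22, 20, 21]
output = []
for z in measurements:                    # one real sample per second
    predict()
    update(z)                             # correct with the real measurement
    output.append(x[0, 0])
    for _ in range(9):                    # nine in-between samples: predict only
        predict()
        output.append(x[0, 0])
print(output[:12])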
I am interested in mission planning for multi-robot systems. Given multiple robots, multiple tasks and an environment, I need to specify missions, and software should plan for the robot team to accomplish the mission. To be more precise, tasks are just a bunch of waypoints, with or without time-stamps or deadlines. More elaborately, a task in the abstract sense is something like "patrol location A using two robots", which essentially can be coded up as two sets of waypoints or trajectories for the robots; hence the assertion above that tasks can just be viewed as a bunch of waypoints. So there are multiple tasks, and which tasks have to be executed in which order has to be planned by the user or by software so as to fulfil a mission. I am looking for GitHub repositories where people have tackled such problems, to take inspiration. I am open to any software framework. As a prime example of the kind of work or software I am looking for, FLYAQ - An open source platform for mission planning of autonomous quadrotors is an example. Please share any code or PDF links, if possible.
I have a servo motor (http://robokits.download/documentation/RMCS220x_DCServo_Driver.pdf). I wrote code in Arduino to rotate it to some defined angles and positions. The code is given below:

#include <SoftwareSerial.h>

// software serial #1: RX = digital pin 10, TX = digital pin 11
// I have an Arduino Uno, so I created an extra RX and TX to send and receive data,
// because using the built-in RX0 and TX0 I was unable to transfer data to the motor
// and get feedback on the PC.
SoftwareSerial serial1(10, 11);

void setup() {
  Serial.begin(9600);
  serial1.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    delay(5);
    serial1.println(Serial.readString());
  }
  if (serial1.available() > 0) {
    delay(5);
    Serial.println(serial1.readString());
    delay(5);
  }
}

Using this code, what I am able to do is: in the terminal I enter some value, say "G400" or "R821", and the motor rotates accordingly. But this is not my aim. I don't want to enter values manually; instead I use a MATLAB script which gives me some angle after calculations. I have to send this value to the motor. Say, after calculations, MATLAB gives 26.4 degrees; then I have to send the motor 26.4/0.2 = 132 counts, i.e. "G132". The value changes every time; for the next calculation it may be 40 degrees. What should the code for this look like in Arduino as well as in MATLAB? Thanks.
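As a hedged sketch of the host-side logic being described (shown here in Python with pyserial; a MATLAB script using its serial interface, or the forwarding Arduino sketch above, would follow the same pattern), the idea is simply: convert the computed angle to counts at 0.2 deg/count, format it as a "G<counts>" string, and write it to the port. The port name and baud rate below are assumptions.

import serial

DEG_PER_COUNT = 0.2

def angle_to_command(angle_deg):
    counts = int(round(angle_deg / DEG_PER_COUNT))   # e.g. 26.4 deg -> 132
    return "G{}\r\n".format(counts)

ser = serial.Serial("COM3", 9600, timeout=1)         # adjust port for your setup
for angle in [26.4, 40.0]:                           # values coming from the calculation
    ser.write(angle_to_command(angle).encode("ascii"))
    print(ser.readline().decode("ascii", errors="ignore"))  # driver feedback, if any
ser.close()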
I'm a beginner in control systems, so if there is a nice tutorial on this, please let me know. I have a Simulink model like the following: It takes roll and pitch commands and outputs velocities and positions. I want to create a PD controller so that the quadcopter in the simulation can move in a 10x10 meter square (starting from the bottom left corner) in a clockwise direction. I'm not sure how to go about this, so I've been watching Vijay Kumar's Aerial Robotics lectures on Coursera, and it seems like I need to create a desired position, velocity, and acceleration at every time step, which I think I can do (make a trapezoidal velocity profile and compute the rest). Then Kumar talks about a nested control structure with two controllers: a position controller that takes in the desired position, velocity, and acceleration; and an attitude controller, which I believe is the one I have. My question is: how do I create a position controller? And is the position controller the one that converts my desired position, velocity, and acceleration into a roll/pitch command?
Anyone know of sample Python code, or a tutorial, on the polar coordinate math for recognizing that three ultrasonic distance readings form a straight line?

Deg    Distance
-10°   20 cm
  0°   18 cm
+10°   16 cm

Once I understand the math, I'll have to deal with the lack of precision. I want my bot to recognize a wall, and eventually recognize a corner.
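A small sketch of one way to do the math, assuming the readings are (angle, distance) pairs in the robot frame: convert each polar reading to Cartesian, then check whether the cross product of the two difference vectors (twice the area of the triangle they span) is close to zero. The tolerance is a guess that has to be tuned against the sensor's noise.

import math

readings = [(-10, 20), (0, 18), (10, 16)]   # (degrees, cm)

pts = [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
       for a, d in readings]

(x1, y1), (x2, y2), (x3, y3) = pts
cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)   # 2 * triangle area

TOL = 5.0   # cm^2, tune for sensor noise
print("roughly a straight line (wall)" if abs(cross) < TOL else "not collinear")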
I am trying to read the quadrature encoder on a ServoCity 624 RPM Premium Planetary Gear Motor w/Encoder with a SparkFun ESP32 Thing. I am supposed to see 228 counts per revolution, but I see 230-232 instead. Here is my Arduino code:

// Global variables modified by ISR
int state;
unsigned int count;
int error;

// ISR called on both edges of the quadrature signals
void handleInterrupt()
{
  // Shift old state into higher bits
  state = ( state << 2 ) & 15;

  // Get current state
  if( digitalRead(34) ) state |= 2;
  if( digitalRead(35) ) state |= 1;

  // Check state change for forward or backward quadrature
  // Flag any state change errors
  switch( state )
  {
    case 1: case 7: case 14: case 8:
      count--;
      break;
    case 11: case 13: case 4: case 2:
      count++;
      break;
    default:
      error++;
  }
}

void setup()
{
  pinMode(33, OUTPUT);        // PWM
  pinMode(32, OUTPUT);        // DIR
  pinMode(34, INPUT_PULLUP);  // QB
  pinMode(35, INPUT_PULLUP);  // QA

  attachInterrupt( digitalPinToInterrupt(34), handleInterrupt, CHANGE );
  attachInterrupt( digitalPinToInterrupt(35), handleInterrupt, CHANGE );

  Serial.begin(74880);
}

void loop()
{
  int new_state;
  int old_state;

  // Start motor
  digitalWrite(32, HIGH); // clockwise
  digitalWrite(33, HIGH);

  // 57*4-17 found by trial and error to make a complete revolution
  for( int i = 0; i < (57*4-17); i++ )
  {
    // Busy wait for change
    do
    {
      // Get current state
      new_state = 0;
      if( digitalRead(34) ) new_state |= 2;
      if( digitalRead(35) ) new_state |= 1;
    } while( old_state == new_state );
    old_state = new_state;
  }

  // Stop motor
  digitalWrite(33, LOW);

  delay( 1000 );

  Serial.print( " state=" );
  Serial.print( state );
  Serial.print( " count=" );
  Serial.print( count % 228 );
  Serial.print( " error=" );
  Serial.println( error );
}

On my scope, the quadrature signals are very clean. I instrumented the code to look at the interrupts and they appear to be in the right places and not too close together. I don't see mechanical slipping, and it would be obvious, because over 100 revs the count slips an entire revolution. But the bottom line is: how can this be failing? If the CPU were missing transitions, I would get illegal transitions and errors. Noise would also cause errors. But I am not getting any errors at all (except a single error at startup).
I designed a mobile robot with my colleagues for our graduation project; its purpose is to detect mines (metals) in a specific area. We are programming the robot in Python on a Raspberry Pi 3. I want it to avoid obstacles using the Pi camera. Is that possible through computer vision? I have searched a lot but I can't find a full reference that guides me through it. If it is too hard, we'll use an ultrasonic sensor instead, but can the robot, at least, build a graphical, continuously updated map that marks the positions of mines and obstacles?
I am currently working on a project involving a heavy disc rotating around its center. The disc weighs around 2 kg and has a radius of 0.25 m. At every angle π/6 there are smaller discs, with masses ranging from 0.1 kg to 1 kg, placed 0.15 m from the center. The radius of these is 0.05 m. A picture to illustrate: I have roughly calculated the moment of inertia when all smaller discs weigh 1 kg. Using the formula for a circular plate and Steiner's theorem, the result is: $$ I = \frac{0.25^2}{2} + 12\left(\frac{0.05^2}{4} + 0.15^2\right) = 0.34\ \mathrm{kg\,m^2}$$ Now I want the disc to be able to spin and stop at these specific angles. Say for instance I want the disc to rotate from 0 to π. This means I need a precise way to control my disc. My plan is to use a servo and some gears to drive this. I need the disc to turn 180° within 3 seconds (preferably less). With this angular velocity and inertia, I have realized it might not be the easiest thing to stop this spinning wheel, let alone accelerate it. Here is another image: The motor does not need to be positioned like that; it would also be possible to drive the disc by positioning the motor at the edge of the disc, for example. What kind of motor should I be looking for, and how would I handle stopping the disc? I am looking for general tips on how to accomplish this.
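A rough sizing sketch, assuming a triangular velocity profile (accelerate for half the move, decelerate for half) and taking the inertia estimate above at face value; gearing, friction and safety margin are not included:
$$\alpha = \frac{4\theta}{t^2} = \frac{4\pi}{3^2} \approx 1.4\ \mathrm{rad/s^2}, \qquad \tau \approx I\alpha \approx 0.34 \times 1.4 \approx 0.47\ \mathrm{N\,m}, \qquad \omega_{max} = \alpha\,\frac{t}{2} \approx 2.1\ \mathrm{rad/s} \approx 20\ \mathrm{rpm}.$$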
I need to find a linear actuator that can extend to multiple times its length. I am going to be fixing the actuator horizontally, and it will carry a light vertical load. So far I have thought of using something similar to the mechanism in a scissor lift. However, this is intended for a CNC application that needs high precision, and I'm not sure a scissor-lift design would be rigid enough vertically when placed horizontally. The system will eventually be feedback controlled, so some give can be tolerated. Is there something other than a scissor-lift design that would be better suited to this application?
I'm studying the technical report of VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator, available at: https://github.com/HKUST-Aerial-Robotics/VINS-Mono/blob/master/support_files/paper/tro_technical_report.pdf In section V, where loop closure is discussed, it is written: "The IMU measurements render roll and pitch angle fully observable, so the accumulated drift only occurs in four degrees-of-freedom (x, y, z and yaw angle). To avoid importing spurious information, we directly optimize pose graph on these four degrees-of-freedom." I'm having trouble understanding this: in my conception, IMU noise should propagate to all degrees of freedom. Might this have something to do with the gravity vector? Thank you all.
I am learning how to make rigs for high-speed motion control. Basically, whatever action needs to be executed must be done by a rig that is triggered at an exact moment. In this particular case, I need something that pulls out extremely fast, so that the element on top falls without any alteration. In these two pictures I've highlighted the rig that is being used: Zoomed in on the relevant sections: This action happens in about half a second, give or take. What kind of motor is capable of pulling out that fast without disturbing the object on top? In this case the burger bun falls intact; it lands perfectly.
I'm trying to obtain the dynamics of a 6DOF robot. First, I calculated the combined centres of mass (between each link and the respective actuator) in order to calculate the gravity term, since it only depends on the combined centres of mass and the angles of the joints. (Yes, I know there's another way which takes into account the centres of mass of the links and the actuators separately, but I just didn't follow that path.) My calculations are correct, since the robot effectively compensates gravity, but now I want to calculate the remaining terms (mass and Coriolis). Since I have the combined centre of mass of each "system" (link + actuator), and the datasheet of this robot only gives me the inertia tensors at the centre of mass of each part, I need to know the equivalent inertia tensor at the "new" centre of mass (the combined one). Now, I've done my research and found little information on the web about this. I did find out that I could probably use the parallel axis theorem, but I've seen people saying that it is based on the premise that the object is planar (which would mean it only applies to 2D objects). My question is: can I apply this theorem in 3D? If so, please explain to me what exactly I have to do, and if not, what other options do I have? Let me know if you understood what I'm after, and thanks in advance.
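For reference, the generalized (tensor) form of the parallel axis theorem is valid for full 3D rigid bodies, not only planar ones. In the notation used here (my own), with $m$ the part's mass, $\mathbf{d}$ the vector from the combined centre of mass to the part's own centre of mass, and $\mathbf{E}_3$ the $3\times 3$ identity:
$$\mathbf{I}_{new} = \mathbf{I}_{cm} + m\left(\lVert\mathbf{d}\rVert^2\,\mathbf{E}_3 - \mathbf{d}\,\mathbf{d}^T\right)$$
If the datasheet tensor is expressed in a frame that is rotated by $R$ relative to the frame you are working in, it first has to be rotated, $\mathbf{I}_{cm}' = R\,\mathbf{I}_{cm}\,R^T$, before the translation term is added.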
I am a programmer with a lot of experience in IoT but somewhat new to actuators and robotics. I would like to create a "cable car" which consists of two stepper motors and a pulley. The motors would be located one next to the other, and the pulley on the opposite wall. Each stepper motor would have a spool and be connected with fishing wire. In the middle would be a styrofoam boat. The idea is to move the boat along the wire from one wall to the other based on different metrics and calculations. Ideally it will travel from one end to the other in a month's time, in slow increments. Diagram using ASCII:

|___________________|
|x                 o|
|x_________________o|
|                   |
|                   |
|   <-- BOAT -->    |
|                   |
|                   |

x = stepper motor
o = pulley
_ = fishing wire (starts at top motor, goes to pulley(s), continues and ends at bottom motor)
| = wall

My question is: what type of stepper motors and controller should I use? I estimate that the wire plus the styrofoam boat will not weigh more than 3 pounds total. It will hang at a height of 10 feet from the floor. I am hoping to achieve this with a Raspberry Pi I have lying around. I am confused about the various torque requirements, voltage and amperage ratings, etc. If someone has a recommendation for hardware I would be extremely grateful.
I'm going to create an LQR to control a system. The problem is choosing the Q and R weighting matrices for the cost function. The Q and R matrices define the cost function that is minimized, so that the system is optimal. I'm using Scilab, and Scilab has a good library for optimal control. Scilab has a built-in function named lqr() to compute the gain matrix K, which is the LQ regulator. But the problem is still choosing those weighting matrices. I don't know where to start. I might just start with the identity matrix as Q and a constant as R. But that gain matrix K does not make my model behave smoothly. No one can say which are the "real" Q and R weighting matrices for the system; as the developer, I choose the weighting matrices. But why should I do that when I can choose the gain matrix K directly? So I just made up my own numbers for the gain matrix K and now my model is very smooth. All I did was guess some numbers for the gain matrix K, simulate, and look at the result. If it was still bad, I might change the first element of the gain matrix to improve the position, or change the second element of the gain matrix K to speed up the velocity of the position. This works great for me: guessing, simulating, and looking at the results. I chose the LQ technique for two main reasons: it gives multivariable action and it can reduce noise by using a Kalman filter. A PID cannot do that. But here is my question: will this method give me optimal control just by guessing the gain matrix K and changing the values depending on how the simulation results look? If I'm happy with the results, I might stop there and accept the gain matrix K as the optimal LQR for the system.
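One common starting point (not the only one) is Bryson's rule: set each diagonal entry of Q and R to the inverse square of the largest acceptable deviation of that state or input. A minimal Python/SciPy sketch of that idea, with a placeholder A, B and made-up limits:

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.5]])   # placeholder plant
B = np.array([[0.0], [1.0]])

x_max = np.array([0.1, 1.0])    # largest acceptable state deviations
u_max = np.array([5.0])         # largest acceptable actuator effort

Q = np.diag(1.0 / x_max**2)     # Bryson's rule
R = np.diag(1.0 / u_max**2)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # K = R^-1 B^T P
print(K)

Tuning through Q and R keeps the "optimal" interpretation and the usual LQR robustness guarantees by construction; picking K directly by hand can certainly work, but it is really pole placement by trial and error rather than LQ design.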
I'm trying to get a deeper understanding of the theory behind wheeled self-balancing robots. Can anyone point me to a text on dynamic modelling and control theory that might help?
I am currently working on a project which involves surround view. I have 4 fisheye cameras, fixed on the 4 sides of a car. The fisheye cameras are corrected for radial distortion. After radial correction, each camera sees a pattern in the common FOV of the adjacent camera, used to get points on the ground plane. Pattern in the common FOV of the adjacent camera: Now, for each camera I need to warp those points to a plane which is the bird's-eye view. Right now I choose those 8 red points, map the first 4 points onto a square and the other 4 points onto another square on the same line (since I know that both squares lie parallel to each other at some distance) for the front and back images, and use the same points for left and right appropriately, so that the left image ends up on the left and the right image on the right of the result. Then I calculate a homography matrix for each image (front, left, right and back) using the points in the image and in the bird's-eye plane. I map the points such that the warped front image sits at the top, left sits at the left side, right at the right side, and back sits at the bottom of the image. For example: Front sits at the top of the result image. Left sits at the left of the result image. I do this so that I can stitch properly, forming a composite view. My final stitched image looks like below. As you can see, the geometrical alignment is not proper. Questions:
1. What is the right way of registering each image with the ground plane?
2. From the object points shown as red dots, what is the proper way to get the corresponding points in the bird's-eye plane?
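A minimal OpenCV sketch (Python) of the per-camera registration step that usually fixes this kind of misalignment: the destination points for all four cameras are expressed in one shared, metric ground frame (e.g. centimetres measured from the car centre, taken from the known pattern layout), rather than in squares placed by eye. The pixel and ground coordinates below are placeholders.

import cv2
import numpy as np

# Pixel coordinates of the pattern corners in the undistorted front image
img_pts = np.array([[512, 400], [700, 405], [515, 560], [705, 565]], dtype=np.float32)
# The SAME physical points in the shared bird's-eye frame (1 px = 1 cm, origin at car centre)
ground_pts = np.array([[300, 100], [400, 100], [300, 200], [400, 200]], dtype=np.float32)

H, mask = cv2.findHomography(img_pts, ground_pts)

front = cv2.imread("front_undistorted.png")          # placeholder file name
birdseye = cv2.warpPerspective(front, H, (800, 800)) # canvas size of the composite
cv2.imwrite("front_birdseye.png", birdseye)

Because every camera is mapped into the same metric ground frame, the four warped images should line up geometrically before any blending is done.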
At every timestep my robot gets sensor measurements from a scanner that finds three beacons with known poses $B_1 = (B_{1x}, B_{1y})^T$, $B_2 = (B_{2x}, B_{2y})^T$, $B_3 = (B_{3x}, B_{3y})^T$. These measurements include the distance and angle to the beacon; the measurement for $B_1$ would be $m_{1t} = (d_{1t}, \omega_{1t})^T$, and equivalently for the other beacons. From these measurements I want to calculate the robot's pose, containing its position and orientation, $x_t = (p_{xt}, p_{yt}, \Theta_{t})^T$. Calculating the position can be done by trilateration, but I can't seem to find a way to get the orientation of the robot from these measurements. Is there possibly a model to calculate both in a single calculation? If not, a solution for finding the orientation of the robot would be great.
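Assuming each bearing $\omega_{it}$ is measured counter-clockwise from the robot's forward axis, every beacon gives the heading directly once the position is known:
$$\Theta_t = \operatorname{atan2}\!\left(B_{iy} - p_{yt},\; B_{ix} - p_{xt}\right) - \omega_{it}, \qquad i = 1, 2, 3,$$
and the three estimates can be averaged (with proper angle wrapping) to reduce noise. If a single joint calculation is preferred, the three range-bearing pairs can also be stacked into a small nonlinear least-squares problem over $(p_{xt}, p_{yt}, \Theta_t)$, solved e.g. with Gauss-Newton.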
We have a high-resolution Riegl laser scanner and, mounted atop it, a Resonon Pika L, a hyperspectral camera which records one spatial column at a time, using the second dimension of the sensor for the wavelength spectrum from 400 to 1000 nanometers. Now we would like to label each scan point with hyperspectral data. For this, we:
1. record a fixed angular window with the laser scanner,
2. stitch together the individual spatial columns from the hyperspectral camera rotating with the scanner,
3. identify chessboard corners in the point cloud,
4. also find them in the stitched hyperspectral panorama (reduced to RGB).
Using OpenCV's calibrateCamera we try to get a camera matrix, which I assume will only be valid for this particular panorama size (if at all), and then we could in theory obtain the rotation and translation between scanner and panorama coordinates using solvePnP. Is this a valid way to go about solving this problem? There doesn't seem to be much prior art in this regard.
I have a 1-DOF electromechanical application in which I am controlling the contact force at the tip of the end-effector. The force sensor on the end-effector unfortunately gives me force data at an incredibly low sampling rate of 20 Hz, and I cannot do anything about it. I generated a linear model for my plant and determined I will probably need a sampling rate of at least 320 Hz to follow the rule of thumb (Ts is 1/10th of your time constant). With the current setup I of course have poor disturbance rejection (i.e., when the surface moves into the end-effector) with a PI controller. Fortunately, the wall motion is periodic and fairly predictable. I implemented a modified Smith predictor that can reject periodic disturbances. This works fairly well, but I am interested in exploring other options. Is there anything else I can look into that may help with this situation? For instance, I was considering looking into implementing a Kalman filter, since I could use it to predict the next sample in the loop, which may improve performance - I am not sure if that is right. Regards,
I am considering competing in the robogames, but the rules require that R/C bots have digitally-mated pairs. I am not sure what these pairs are. For reference here are the rules for robot sumo and combots. Robot Sumo Rules Combot Rules
I have an 1100 kv brushless motor from DYS with a propeller, as shown. I have attached this motor to two PVC pipes joined together as shown, and packed the electronics inside polyform to protect them from water. When I did the test run of this model, the boat didn't move; the motor was just creating turbulence in the water. So I need your help figuring out where I am going wrong:
1. Is it wrong to place a brushless motor in water (although I have seen videos of people placing their motors in water)?
2. Is the propeller design/size wrong (maybe it's a simple fan propeller not made for water - really no idea)?
3. Or is the motor rating not enough (I thought a lower kv rating would generate more torque and therefore more thrust)?
I used an 11.2 V 2200 mAh 25C LiPo battery.
I am trying to use a multilayer perceptron to make a flight controller for an ROV. I have an MPU-9250 IMU and I need to remove the noise from the sensor before I can train my MLP. The IMU has an accelerometer, a gyro and a magnetometer. I know my state vector is supposed to be [acc_x, acc_y, acc_z, gyro_x, gyro_y, gyro_z, mag_x, mag_y, mag_z]; I am not sure about the rest. Since I want the clean acc, gyro and mag x, y and z values, can I just use an identity matrix for everything in the EKF? I want to pass the IMU data to the MLP after cleaning it and try to predict the control signal for all my thrusters. For my dataset I recorded the IMU information from a ROV2, and I also recorded the PWM signals for each thruster. If it does work, I won't be able to control things like the PID values of the thrusters, but for now I don't really care about that.
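A sketch of the "identity everything" idea in plain Python/NumPy: with an identity state-transition and measurement model this is an ordinary linear Kalman filter (no need for the extended version), modelling each channel as a slowly drifting value. It does smooth the data, but Q and R cannot literally be identity as well, since their ratio sets how much smoothing you get; the values below are placeholders that should come from the MPU-9250's noise spec.

import numpy as np

n = 9                        # [acc_xyz, gyro_xyz, mag_xyz]
F = np.eye(n)                # state transition: value assumed roughly constant
H = np.eye(n)                # we measure the state directly
Q = np.eye(n) * 1e-3         # how fast the true signal is allowed to drift
R = np.eye(n) * 1e-1         # measurement noise of the IMU channels

x = np.zeros((n, 1))
P = np.eye(n)

def step(z):
    """One predict+update with a raw 9-vector IMU sample z."""
    global x, P
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z.reshape(n, 1) - H @ x_pred)
    P = (np.eye(n) - K @ H) @ P_pred
    return x.ravel()         # smoothed sample to feed the MLP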
Do we have to take Electronic Speed Controllers (ESC) into account when calculating the total current draw of the system? As far as I can understand, the motor and ESC are connected in series, so we take the component that draws the most current, which is the motor in this case. Then, all combinations of a motor and ESC are connected in parallel, thus the currents add up. Am I thinking right? Or am I missing something here?
The question from the title is a bit hard to explain, so here is a better explanation. I want to get into the FPV quad flying hobby, so I decided to pick up a very starter-friendly quad, the Eachine Wizard X220. From the specs, its lift-off weight is 535 grams and the quad itself (without the battery but with all electronics) weighs in at 364 grams. So I just need to pick up a battery; everything else is accounted for. I want to get a 4S 1500 mAh battery, which is 160 grams, but that brings the total weight to 524 grams. So, my question is: would the quad not be as efficient at this weight? Or should I just stick with a 3S 1300 mAh and have a total weight of 484 grams?
I don't know if someone can help me with this but I'm calculating the dynamics of a 6DOF robot using the Newton-Euler iterative dynamics algorithm. I'm following the recursive method (inwards and outwards) explained in the book Introduction to Robotics Mechanics and Control (Pages 175-176). After putting down on MATLAB the calculations, I started to check if the gravity compensation, g term, made sense. I had calculated the gravity term from the Lagrange approach before so I knew the set of torques had to be the same for a specific pose. Although the values are almost similar (one actuator has some considerable deviation, still unknown to me as to why). Now, here's the thing: the robot is the Kinova JACO v2 arm, and if one assumes that only the gravity effect is taking place, no torque is assumed for the first actuator (its associated link is at the base). Indeed this is visually clear, and the Lagrangian approach based on the potential energy corroborates that, giving me a torque vector with no torque being sent to the first actuator. My problem is just that... The Newton-Euler iterative algorithm is based on the balance of the forces between the links. And since the contributions of the forces are summed up (when performing the outwards calculations) the torque sent for the actuator 1 is not zero and has actually the value that would be sent to the actuator 2. Basically a "shift" was made, and the torque for actuator 1 should've been for actuator 2, and so on. I don't know if you can get any insight from this... But I've tried to recheck my calculations and I can't seem to find any problem with them... Please if you have any suggestions I'll be grateful. Thanks.
I've written a grid-based DFS algorithm with a PID-based steering system to maneuver a square-grid maze with 30 cm cells, all in Python. The robot is a 4-wheel drive with an approximate size of 20 cm. The robot has a BeagleBone Green Wireless controller which is connected by USB to an RPLIDAR A1. At this moment the robot is underutilizing the LIDAR, and I want to begin to learn SLAM. However, the environment is highly predictable, which I think makes a full SLAM overkill. I would also like the code to put little strain on the CPU. I've seen people converting a conventional SLAM into a grid-based one, but only after the calculations are complete. I was wondering if there is a way to do a grid-based SLAM right from the start (assume its position and map with a grid). Accuracy isn't hugely important here, as long as it understands a tile and the robot is able to avoid walls. Any advice, tips or suggestions are appreciated. How would you store the map? How would you locate the position of the robot? How would you map the LIDAR's values?
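A hedged sketch of the "mapping with known poses" half of the problem, matched to the 30 cm cells: given the pose the robot already believes (from the DFS/odometry state), each LIDAR return votes its endpoint cell as occupied and the cells along the beam as free. This is cheap enough for the BeagleBone and gives a grid map directly; full SLAM would additionally correct the assumed pose against that map. Grid size and thresholds below are placeholders.

import math
import numpy as np

CELL = 0.30                      # metres per grid cell (one maze tile)
GRID = np.zeros((16, 16))        # simple evidence counter, 0 = unknown

def to_cell(px, py):
    """Return (row, col) for a world point, or None if it falls off the grid."""
    i, j = int(py // CELL), int(px // CELL)
    if 0 <= i < GRID.shape[0] and 0 <= j < GRID.shape[1]:
        return i, j
    return None

def integrate_scan(x, y, theta, scan):
    """x, y, theta: assumed robot pose; scan: list of (angle_rad, range_m) pairs."""
    for ang, rng in scan:
        if rng <= 0.0:
            continue                             # skip invalid returns
        beam = theta + ang
        step = CELL / 2.0
        for k in range(int(rng / step)):         # cells along the beam: small "free" vote
            c = to_cell(x + k * step * math.cos(beam), y + k * step * math.sin(beam))
            if c:
                GRID[c] -= 0.5
        c = to_cell(x + rng * math.cos(beam), y + rng * math.sin(beam))
        if c:
            GRID[c] += 1.0                       # endpoint cell: "occupied" vote

def is_wall(i, j, threshold=2.0):
    return GRID[i, j] > threshold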
I am using SLAM for autonomous vehicle navigation. I want to specify the working zone for the vehicle before the start of navigation - can we restrict the working zone of the vehicle when using a SLAM approach for navigation?
As a part of my research work I'm supposed to build or use an existing autonomous vehicle simulator, in which I can also control the behaviour of the traffic in the system, I was searching on Google and posted on reddit to check if there is any such simulator (open source), After all the exhaustive attempts, I haven't found anything yet. I'm trying to build this in V-rep or any other ROS compatible simulators. The end product has to look something like the one shown here, YouTube - Webots: The autonomous vehicle simulator, and would also give control over the traffic behaviour. Any help/suggestion is greatly appreciated. PS: I do know that there's this one written in Unity, Github:A self-driving car simulator built with Unity (An Open Source Self-Driving Car), but we are looking to import OSM data and also Unity doesn't have a good ROS integration mechanism.
We're using a stepper motor to control a knob on a piece of audio equipment. The stepper is coupled to the potentiometer shaft with a simple coupler. Right now, in order to get to "zero" on the potentiometer, we "crank" the stepper motor -3600 steps (about one full rotation). This creates unnecessary torque on the pot. I'm looking for a hardware solution to avoid this. Here are a few ideas I had: Some sort of zero detection - Know when the knob gets to 0 and stop the stepper from turning when it reaches that point. A coupler that can detect excessive force on the shaft. Basically some sort of spring loaded coupler that will "click" when the rotation gets to "zero" and close a circuit. Also open to other ideas... We're currently using Arduinos with Firmata firmware connected to Node.JS, but this is mostly a hardware issue.
I am interested in a software platform which deals with multi-robot navigation and:
1. has implemented local collision avoidance algorithms for each robot, to prevent collisions between robots;
2. has independent controllers, such as PID, for each robot.
Most likely such a framework will be in ROS or Python. I am interested in open source code, such as what is found on GitHub. The goal is to have a platform where a set of waypoints can be assigned to each robot, and the robots can avoid collisions and reach their waypoints. At least this much should already be implemented.
I'm trying to do simple communication between an Intel Edison and an OpenMV 7 camera through UART (TX/RX). I assumed this would be a simple task, but both sides receive "�". Both are using Python: the Intel Edison uses PySerial to communicate, while the OpenMV 7 uses the pyb library.

OpenMV 7:

import time
from pyb import UART

uart = UART(3, 9600)

while(True):
    if(uart.any() > 0):
        print(uart.read())

Intel Edison:

import serial
import time

ser = serial.Serial(port = "/dev/ttyO1", baudrate=9600)
ser.close()
ser.open()
print("Online")
ser.write("Hello World!")
I am curious to know why we can't apply a control algorithm like PID to a weighted sum of roll and roll rate in a quadcopter, instead of using two loops to control them independently. Fundamentally, PID will make its input signal approach $0$. In the case of (roll + roll rate), which would be $w_1\theta + w_2\dot\theta$, the sum becomes $0$ when both individually tend to $0$, since we get an exponentially decaying curve. So why do people generally use cascaded loops to control each of them separately? (Roll is just an example.)
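The intuition above can be made precise: driving the weighted sum to zero enforces a first-order error dynamic,
$$w_1\theta + w_2\dot\theta = 0 \;\;\Rightarrow\;\; \dot\theta = -\tfrac{w_1}{w_2}\,\theta \;\;\Rightarrow\;\; \theta(t) = \theta(0)\,e^{-(w_1/w_2)t},$$
i.e. a single controller on the weighted signal behaves like a PD controller on roll with a fixed stiffness-to-damping ratio. The usual argument for the cascaded structure (offered here as context, not as the only answer) is that the fast inner rate loop can be tuned at a much higher bandwidth, rejecting motor/ESC disturbances before they reach the slower attitude loop.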
I am working on a 7 DOF serial manipulator and was trying to use ikine to get the joint coordinates for a simple 2 DOF robot. Even though I am using the masking vector [1 1 0 0 0 0], I am getting an error stating:

Number of robot DOF must be >= the same number of 1s in the mask matrix

This is my 2 DOF robot:

L1 = Link('d', 0, 'a', 1, 'alpha', 0);
L1.m = 50;
L1.r = [0.5, 0, 0];
L1.I = [0, 0, 0; 0, 0, 0; 0, 0, 10];
L1.G = 1;
L1.Jm = 0;
L2 = Link(L1);
r2 = SerialLink([L1, L2]);
r2.name = 'POLIrobot';
r2.gravity = [0; 9.81; 0];
q0 = r2.ikine([eye(3), [0.2; 0; 0]; [0, 0, 0, 1]], [0, 0], [1 1 0 0 0 0]);

Can anyone please help and explain why this is happening?
Which is the best visual fiducial marker (2D barcode) for detection and robust, accurate pose estimation? I'm not looking for a fiducial marker which can store a lot of information. The main goal is just to get the pose of the marker with respect to the camera as accurately as possible.
Look at this picture. This is the separation principle diagram. It is an LQG controller which is going to control the real-life process. What I want to do is create a state space model for this separation principle system, including the real-life process. An LQG controller is an LQR controller together with the Kalman filter. The Kalman filter is also called an observer. The LQR controller is a feedback gain matrix L, and the Kalman filter is just a mathematical description of the real-life system with a gain matrix K. r(t) is the reference signal vector, which describes the values the system's states should hold, e.g. temperature or pressure. y(t) is the output from the real-life process. $\hat{y}$ is the estimated output from the Kalman filter. d(t) is the disturbance vector at the input. That's a bad thing, but the Kalman filter is going to reduce the disturbance and noise. u(t) is the input signal vector to the real-life system and the Kalman filter. n(t) is the noise vector from the measurement tools. x(t) is the state vector for the system. $\dot{x}$ is the state vector derivative for the system. $\hat{x}$ is the estimated state vector for the system. $\dot{\hat{x}}$ is the estimated state vector derivative for the system. A is the system matrix. B is the input matrix. C is the output matrix. L is the LQR controller gain matrix. K is the Kalman filter gain matrix. So... a lot of people create the state space system like this: For the real-life system: $$ \dot{x} = Ax + Bu + d$$ For the Kalman filter: $$\dot{\hat{x}} = A\hat{x} + Bu + Ke$$ But $u(t)$ is: $$u = r - L\hat{x}$$ And $e(t)$ is: $$e = y + n - \hat{y} = Cx + n - C\hat{x} $$ And then... for some reason, people say that the state space model should be modelled by the state estimation error: $$\dot{\tilde{x}} = \dot{x} - \dot{\hat{x}} = (Ax + Bu + d) - (A\hat{x} + Bu + Ke) $$ $$ \dot{\tilde{x}} = (Ax + Bu + d) - (A\hat{x} + Bu + K(Cx + n - C\hat{x}))$$ $$ \dot{\tilde{x}} = Ax - A\hat{x} + d - KCx - Kn + KC\hat{x} $$ And we can say that: $$\tilde{x} = x - \hat{x} $$ Because: $$\dot{\tilde{x}} = \dot{x} - \dot{\hat{x}}$$ The Kalman filter will be: $$ \dot{\tilde{x}} = (A - KC)\tilde{x} + Kn$$ The real-life process will be: $$ \dot{x} = Ax + Bu + d = Ax + B(r - L\hat{x}) + d = Ax + Br - BL\hat{x} + d$$ But: $$\tilde{x} = x - \hat{x} \Leftrightarrow \hat{x} = x - \tilde{x}$$ So this is the result for the real-life process: $$ \dot{x} = Ax + Br - BL(x - \tilde{x}) + d = Ax + Br - BLx + BL\tilde{x} + d$$ So the whole state space model will then be: $$ \begin{bmatrix} \dot{x} \\ \dot{\tilde{x}} \end{bmatrix} =\begin{bmatrix} A - BL& BL \\ 0 & A-KC \end{bmatrix} \begin{bmatrix} x\\ \tilde{x} \end{bmatrix}+\begin{bmatrix} B & I & 0\\ 0 & 0 & K \end{bmatrix}\begin{bmatrix} r\\ d\\ n \end{bmatrix} $$ YouTube example: https://youtu.be/H4_hFazBGxU?t=2m13s $I$ is the identity matrix (but it does not need to have ones only on the diagonal). $0$ is the zero matrix.
The question: If I write the systems in this form: $$ \dot{x} = Ax + Bu + d = Ax + B(r - L\hat{x}) + d$$ and for the Kalman filter: $$\dot{\hat{x}} = A\hat{x} + Bu + Ke = A\hat{x} + B(r - L\hat{x}) + K(y + n - \hat{y})$$ because $u(t)$ and $e(t)$ are: $$u = r - L\hat{x}$$ $$e = y + n - \hat{y} = Cx + n - C\hat{x} $$ I get this: $$\dot{x} = Ax + Br - BL\hat{x} + d$$ $$\dot{\hat{x}} = A\hat{x} + Br - BL\hat{x} + KCx + Kn - KC\hat{x}$$ Why not this state space form: $$ \begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} =\begin{bmatrix} A & -BL \\ KC & A-BL-KC \end{bmatrix} \begin{bmatrix} x\\ \hat{x} \end{bmatrix}+\begin{bmatrix} B & I & 0\\ B & 0 & K \end{bmatrix}\begin{bmatrix} r\\ d\\ n \end{bmatrix} $$ YouTube example: https://youtu.be/t_0RmeSnXxY?t=1m44s Which one is best? Do they both work as the LQG diagram shows? Which should I use?
I have created a robotic gripper. However, I need help with the control circuit. There are two buttons, the upper one and the lower one (connected to the timer): the upper one is a two-state toggle, either up or down (it will be replaced with an active-low pin in the PCB design); the lower one is a push button that must be pushed and released back to its initial position to give a pulse to the monostable timer, making the servo rotate just long enough to close the gripper (timed with the RC circuit). My problem: I want to replace the lower button with some component that gives a quick pulse when the upper button changes state. For example, the button was 0 and went to 1 -> component -> a small pulse to trigger the timer (not a pulse that lasts until the state changes again). In the figure, green is the button state and red is the trigger pulse needed.
I'm learning about inverse kinematics with Jacobians, and getting a little confused. So, let's say I have a robot arm with two joints with angles Y = (a, b), whose tip I want to move along a certain direction in 2D space, X = (u, v). The Jacobian J tells me how much the tip will move in 2D space with respect to rotations of the joints: J = dX/dY. Then, in order to move the tip in a certain direction, I can find the inverse of the Jacobian, J_inv, and multiply it by the direction I want the tip to move in: Y_dot = J_inv * X_dot. However, let's say that at a particular joint configuration, the first joint (with angle a) is much more able to move the tip in the desired direction than the second joint. So, dX/da >> dX/db. Intuitively, it would therefore make sense that greater velocity is given to the first joint than to the second joint, to take advantage of this. But this does not seem to be the case. If Y_dot = J_inv * X_dot, then J_inv will cause a greater response from the joint which finds it harder to move the tip in the desired direction, i.e. the second joint, because J_inv effectively contains da/dX and db/dX, and db/dX is greater than da/dX. So why would the joint which finds it harder to move the tip along the desired direction actually be given a higher velocity than the joint which finds it easier?
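A small numerical sketch of the resolved-rate step being described, for a planar 2-link arm (link lengths and angles are arbitrary example values):

import numpy as np

l1, l2 = 1.0, 1.0
a, b = np.deg2rad(30), np.deg2rad(60)      # joint angles

# Geometric Jacobian of the tip position (u, v) with respect to (a, b)
J = np.array([
    [-l1*np.sin(a) - l2*np.sin(a+b), -l2*np.sin(a+b)],
    [ l1*np.cos(a) + l2*np.cos(a+b),  l2*np.cos(a+b)],
])

xdot = np.array([0.1, 0.0])                # desired tip velocity (2D)
qdot = np.linalg.pinv(J) @ xdot            # minimum-norm joint velocities
print(qdot)

For a square, non-singular J the pseudoinverse is just J^{-1}: it returns whatever joint velocities are needed to realize the requested tip velocity exactly, regardless of how "hard" each joint has to work. Schemes that deliberately favour the "easy" joints do exist (e.g. weighted or damped least squares, Jacobian-transpose methods), but the plain inverse is not one of them.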
I have an Orange Pi which I would like to use as the computer for a robotics project. It supplies only around 15 mA directly from the board - barely enough for a single LED - so I will have to use a motor controller and an external power supply. My question is: is there a limit to how big a motor I can use with this small Orange Pi Zero? As long as I have the appropriate power supply and controller for the motor, I feel it shouldn't be an issue. What are some things to keep in mind?
I have a state space model which looks like this: $$ \begin{bmatrix} \dot{x} \\ \dot{\tilde{x}} \end{bmatrix} =\begin{bmatrix} A - BL& BL \\ 0 & A-KC \end{bmatrix} \begin{bmatrix} x\\ \tilde{x} \end{bmatrix}+\begin{bmatrix} B & I & 0\\ 0 & 0 & K \end{bmatrix}\begin{bmatrix} r\\ d\\ n \end{bmatrix} $$ $$ \begin{bmatrix} y\\ e \end{bmatrix} = \begin{bmatrix} C &0 \\ 0& C \end{bmatrix}\begin{bmatrix} x\\ \tilde{x} \end{bmatrix} + \begin{bmatrix} 0 & 0 &0 \\ 0 & 0 &1 \end{bmatrix}\begin{bmatrix} r\\ d\\ n \end{bmatrix} $$ This state space model represents this picture: If you still do not understand how, look at this video: https://youtu.be/H4_hFazBGxU?t=2m13s Notice the estimation error: $\tilde{x} = x - \hat{x}$. So I've made some Octave code, which is very similar to MATLAB code if you have the MATLAB control package installed. Octave has a free control package and a symbolic package to use.

clc
clear

% Load GNU Octave libraries - free to download
pkg load symbolic
pkg load control

% Parameters
m1 = 10; m2 = 7; M = 1000;
Ap = 40; Am = 20; Pp = 20; Pm = 10;
b1 = 3000; b2 = 1000;
L = 0.1; g = 9.82; mu = 0.3;

% States at the static operating point
x1 = 0.65; x2 = 0; x3 = 0.2; x4 = 0; x5 = 5*pi/180; x6 = 0;

% Symbolic variables
syms k1 k2 k3

% Static calculation using symbolic solve
Equation1 = -k1/m1*x1 + k1/m1*x3 - b1/m1*x2 + b1/m1*x4 + Ap*10/m1*Pp - Pm*Am*10/m1*x2;
Equation2 = k1/M*x1 - k1/M*x3 + b1/M*x2 - b1/M*x4 - g*mu*x4 - k2/M*x3 + k2*L/M*x5;
Equation3 = 3*k2/(m2*L)*x3 - 3*k2/m2*x5 - 3*k3/(m2*L^2)*x5 - 3*b2/(m2*L^2)*x6 + 3*g/(2*L)*x5;

[k1, k2, k3] = solve(Equation1 == 0, Equation2 == 0, Equation3 == 0, k1, k2, k3);
k1 = double(k1);
k2 = double(k2);
k3 = double(k3);

% Dynamic calculation - build the state space model
A = [0 1 0 0 0 0;
     -k1/m1 (-b1/m1-Pm*Am*10/m1) k1/m1 b1/m1 0 0;
     0 0 0 1 0 0;
     k1/M b1/M (-k1/M -k2/M) (-b1/M -g*mu) k2*L/M 0;
     0 0 0 0 0 1;
     0 0 3*k2/(m2*L) 0 (-3*k2/m2 -3*k3/(m2*L^2) + 3*g/(2*L)) -3*b2/(m2*L^2)];

B = [0; Ap*10/m1; 0; 0; 0; 0]; % Input matrix
C = [0 1 0 0 0 0];             % Output matrix
I = [0; 1; 0; 0; 0; 0];        % Disturbance matrix

% LQR
Q = diag([0 0 0 40 0 0]);
R = 0.1;
L = lqr(A, B, Q, R); % The control law - LQR gain matrix

% LQE
Vd = diag([1 1 1 1 1 1]);
Vn = 1;
K = (lqr(A', C', Vd, Vn))'; % A way to use the LQR command to compute the Kalman gain matrix

% LQG
a = [(A-B*L) B*L; zeros(6,6) (A-K*C)];
b = [B I zeros(6,1); zeros(6,1) zeros(6,1) K];
c = [C zeros(1,6); zeros(1,6) C];
d = [0 0 0; 0 0 1];
sysLQG = ss(a, b, c, d);

% Simulate the LQG with disturbance and white Gaussian noise
t = linspace(0, 2, 1000);
r = linspace(20, 20, 1000);
d = 70*randn(size(t));
n = 0.1*randn(size(t));
x0 = zeros(12,1);
lsim(sysLQG, [r' d' n'], t, x0)

This is the result if I have noise $0.1*randn(size(t))$, where $y1 = y$ and $y2 = e$: But let's say I have no noise at all! Then I get this: That means that $\tilde{x}$ has no function at all! Something is wrong. I have tried different values in the $C$ matrix, but have not gotten an estimation error to look at. Question: What is wrong with my model? I want to control this so the model can stand against the disturbance and noise, but right now the model just accepts the disturbance and noise. Does the observer (Kalman filter) need to have the disturbance as an input too?
EDIT: If I simulate this new state space model with the same options and code I had before: $$ \begin{bmatrix} \dot{x} \\ \dot{\tilde{x}} \end{bmatrix} =\begin{bmatrix} A - BL& BL \\ 0 & A-KC \end{bmatrix} \begin{bmatrix} x\\ \tilde{x} \end{bmatrix}+\begin{bmatrix} B & B & 0\\ 0 & B & K \end{bmatrix}\begin{bmatrix} r\\ d\\ n \end{bmatrix} $$ $$ \begin{bmatrix} y\\ e \end{bmatrix} = \begin{bmatrix} C &0 \\ 0& C \end{bmatrix}\begin{bmatrix} x\\ \tilde{x} \end{bmatrix} + \begin{bmatrix} 0 & 0 &0 \\ 0 & 0 &1 \end{bmatrix}\begin{bmatrix} r\\ d\\ n \end{bmatrix} $$ I get this. All I did was add the disturbance to the Kalman filter too, and instead of the $I$ matrix I used the $B$ matrix. But the problem is that if I simulate an LQR with only state feedback, without the Kalman filter (observer), I get this: There is both red and green in this picture (you have to zoom in). The red curve is the LQG simulation and the green curve is the LQR simulation (the simulation without the Kalman filter). You can see that the Kalman filter does not filter anything. Why? Here is a short code sample:

% LQG
a = [(A-B*L) B*L; zeros(6,6) (A-K*C)];
b = [B B zeros(6,1); zeros(6,1) B K];
c = [C zeros(1,6); zeros(1,6) C];
d = [0 0 0; 0 0 1];
sysLQG = ss(a, b, c, d);

% Simulate the LQG with disturbance and no white Gaussian noise
t = linspace(0, 2, 1000);
r = linspace(20, 20, 1000);
d = 2*randn(size(t));
n = 0*randn(size(t));
x0 = zeros(12,1);
[yLQG, t, xLQG] = lsim(sysLQG, [r' d' n'], t, x0);

% Simulate the LQR with disturbance
sysLQR = ss(A-B*L, [B B], C, [0 0])
x0 = zeros(6,1);
[yLQR, t, xLQR] = lsim(sysLQR, [r' d'], t, x0);

plot(t, yLQR, 'g', t, yLQG(:,1), 'r');
The Arduino is a digital microcontroller, but I wonder: is it possible to implement a continuous-time feedback regulator on an Arduino microprocessor? Continuous-time feedback regulators such as PID: $$ u(t) = P\left(e(t) - D\frac{\mathrm{d}}{\mathrm{d}t}e(t) + I\int_{0}^{t} e(\tau)\, \mathrm{d}\tau\right) $$ Or an LQG regulator (this is an LQR with the Kalman filter only, not the model): $$ \dot{\hat{x}} = (A - KC)\hat{x} + Bu + Ky + Kn - KC\hat{x} $$ $$ u = r - K\hat{x} $$ Or does it need to be a digital feedback regulator? I mean... those feedback regulators work excellently when built with operational amplifiers. I know that operational amplifiers work in real time, but an Arduino runs at 16 MHz, and that's very fast too.
I have an MPU6050 IMU and I would like to mount it on an FSAE car and use it to measure the yaw, pitch, roll, and angular velocities as it drives. As it's impossible to mount it perfectly flat and align the IMU axes with the axes of the car, I am looking for a way to calibrate and compensate for the rotational offset of the car's frame and the IMU's frame. From the IMU I can get quaternions, Euler angles, raw acceleration and angular velocity data, or yaw, pitch, and roll values. I imagine the solution will involve matrix and trig calculations, but I didn't pay nearly enough attention in multivariable calc to figure this out.
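A hedged sketch of a common two-step approach: with the car parked on level ground, average the accelerometer for a few seconds and compute the rotation that maps that measured gravity direction onto the car's vertical axis; this removes the roll/pitch part of the mounting offset. The remaining yaw offset needs a second reference, e.g. a straight-line acceleration run aligned with the car's forward axis. Axis conventions and the numbers below are assumptions.

import numpy as np

def rotation_aligning(u, v):
    """Rotation matrix that rotates unit(u) onto unit(v), via Rodrigues' formula."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s = np.linalg.norm(axis)          # sin of the angle between them
    c = float(np.dot(u, v))           # cos of the angle
    if s < 1e-9:
        return np.eye(3)              # already aligned (antiparallel case not handled in this sketch)
    k = axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# Averaged accelerometer reading while the car sits still on level ground
# (placeholder values). At rest this vector points along the car's "up" (+Z here).
g_meas = np.array([0.12, -0.05, 9.79])
R_car_from_imu = rotation_aligning(g_meas, np.array([0.0, 0.0, 1.0]))

# Every subsequent IMU vector (accel, gyro) is re-expressed in the car frame:
omega_car = R_car_from_imu @ np.array([0.01, 0.02, 0.30])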
This is a Kalman filter. As you can see, the process noise (disturbance) does not go into the Kalman filter. But the state space model for the state feedback system is written like this: https://youtu.be/H4_hFazBGxU?t=5m43s So what is right and what is wrong? Should a Kalman filter have the disturbance as an input too? Like this, with the disturbance as input: $$ \begin{bmatrix} \dot{x} \\ \dot{\tilde{x}} \end{bmatrix} =\begin{bmatrix} A - BL& BL \\ 0 & A-KC \end{bmatrix} \begin{bmatrix} x\\ \tilde{x} \end{bmatrix}+\begin{bmatrix} I & 0\\ I & K \end{bmatrix}\begin{bmatrix} d\\ n \end{bmatrix} $$ Or this, without the disturbance as input: $$ \begin{bmatrix} \dot{x} \\ \dot{\tilde{x}} \end{bmatrix} =\begin{bmatrix} A - BL& BL \\ 0 & A-KC \end{bmatrix} \begin{bmatrix} x\\ \tilde{x} \end{bmatrix}+\begin{bmatrix} I & 0\\ 0 & K \end{bmatrix}\begin{bmatrix} d\\ n \end{bmatrix} $$
RC servos are great because they are low cost, widely available, easy to control, and pretty accurate. One disadvantage is that they usually have a very limited range of motion. This is because they are mostly used to actuate RC control surfaces that rarely move more than 120 degrees. Our FTC robotics team uses them a lot, but often we need to rotate more than 120 degrees, and we would often like 360 degrees or more. The robot design requires the solution to be small and lightweight, as it will usually be at the end of an extension arm. The game rules (and practicality) require the solution to use a 3-wire RC servo. Also, space and alignment issues usually make using external gear sets problematic. Last season we needed a large "grabber" at the end of our arm and resorted to using a continuous rotation servo that rotated until it torque-limited. This worked, but was far from ideal, as it over-stressed the servo and we had minimal control of the "grabber" - we could only open it or close it. Our ideal solution would be small, lightweight, and inexpensive (adding less than 50% to the weight, cost, or length of any dimension of the motor). Given our constraints, how can we rotate an axis more than 360 degrees and still maintain positional accuracy?
I'm repairing (hopefully) a 12 V DC motor (Johnson Electric HC971(2)LG-101). The motor has a coil in the motor cap between each terminal and the corresponding brush - two coils (inductors), one on each side of the armature, electrically speaking. It also has what I think is a resistor on one side and what I think is a (broken) capacitor between the two terminals. What is the function of the two inductors in series between the terminals and brushes? I'm sure this info is out there somewhere, but I haven't found it online after several days of searching.
I hope you can help me and that this is the right forum to ask. In the process of building and programming my own quadcopter, I encountered the term Euler angles. I took some time to understand them and then wondered why they are used in multicopter systems. In my understanding, Euler angles are used to rotate a point or vector in a coordinate system, or to express that rotation. I now wonder why I should use Euler angles to compute the orientation of the quadcopter, as I could easily (at least I think so) compute the angles by themselves, like $$ \theta = \arctan(y/z) $$ $$ \phi = \arctan(x/z) $$ (just using the accelerometer, where $x, y, z$ are the axis accelerations and $\theta, \phi$ are pitch and roll, respectively; in the actual implementation I do not only use the accelerometer, this is just simplified to make the point clear). Where exactly are Euler angles used? Are they only used to convert a desired trajectory in the earth frame to a desired trajectory in the body frame? I would be very glad if anyone could point this out and explain the concept, and why and where they are used, further. To clarify: I do know that Euler angles suffer from gimbal lock, that they are three rotations about the $x, y, z$ axes, and how they generally work (I think). @Christo gave a very good explanation. My question now is: why are they used? Isn't it counterproductive to apply the yaw rotation, then pitch, and then roll?

- Earth frame X, Y, Z: rotation about Z (psi) -> Frame 1 x', y', z'
- Frame 1: rotation about y' (theta) -> Frame 2 x'', y'', z''
- Frame 2: rotation about x'' (phi) -> Body frame x, y, z

and vice versa. Why? I would just have said: pitch = angle between X and x, roll = angle between Y and y, yaw = angle between the x-y projection of the magnetic field vector and the starting vector (yaw is kind of different). (Notice the difference between uppercase and lowercase; see the earth-to-body frame chain above for notation.) Tied to this, I wonder why the correct formula for pitch ($\theta$) should be $$\theta = \tan^{-1}\left(-f_x/\sqrt{f_y^2+f_z^2}\right)$$ I would have thought $$\theta = \tan^{-1}\left(-f_x/f_z\right)$$ suffices. Maybe I have some flaw in my knowledge or a piece of the puzzle is still missing. I hope this is understandable; if not, feel free to ask. If this gets too crowded, I can always ask another question, just make me aware of it. If anyone could explain how to use quaternions to express orientation I would be very thankful, but I can also just ask another time. I get the concept of quaternions, just not how to use them to express orientation rather than rotation.
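A short derivation sketch of where the square-root formula comes from, assuming the Z-Y-X (yaw-pitch-roll) convention and an accelerometer that measures only gravity. Expressing gravity in the body frame gives
$$\mathbf{f} = R^T\begin{bmatrix}0\\0\\g\end{bmatrix} = g\begin{bmatrix}-\sin\theta\\ \sin\phi\cos\theta\\ \cos\phi\cos\theta\end{bmatrix},$$
so $f_y^2 + f_z^2 = g^2\cos^2\theta$ and
$$\theta = \tan^{-1}\!\left(\frac{-f_x}{\sqrt{f_y^2+f_z^2}}\right), \qquad \phi = \tan^{-1}\!\left(\frac{f_y}{f_z}\right).$$
The simpler $\theta = \tan^{-1}(-f_x/f_z)$ agrees only when the roll $\phi$ is zero, because $f_z = g\cos\phi\cos\theta$ shrinks as the craft rolls and would corrupt the pitch estimate. That cross-coupling between the axes is exactly why orientation is handled as one composed rotation (an Euler-angle sequence or a quaternion) rather than three independent arctangents.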
As part of a project I need an encoder to determine the angle that this, Continuous Rotation Servo - FeeTech FS5103R, has rotated through. The resolution needs to be fairly high as I will be using it to automate a process and so I want to make sure it's accurate. The shaft it will be mounted to will be custom built so I'm just looking for standalone encoders right now. What are the pros and cons of different styles of rotary encoders?
I tried searching for this in a simple way but can't seem to find the answer. I have a 3-axis robotic arm (homemade) using bipolar NEMA 17 steppers (taken off my first 3D printer =) ) that provide 28-57 oz of torque as per the datasheet. Now I am trying to figure out how much that can lift in grams/lbs. Can anybody help? Or could someone give me a REALLY simplified explanation of how to calculate it and get the answer myself, which would be useful for future projects? Thanks, P
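A hedged worked example, assuming the datasheet figure is holding torque in oz-in (the usual unit for NEMA 17 steppers) and taking the larger value:
$$57\ \mathrm{oz\,in} \approx 0.40\ \mathrm{N\,m}, \qquad m_{max} \approx \frac{\tau}{g\,r} = \frac{0.40}{9.81 \times 0.10} \approx 0.41\ \mathrm{kg} \approx 0.9\ \mathrm{lb}$$
for a load held at a 10 cm horizontal lever arm, ignoring the arm's own weight, gearing and any dynamic/safety margin; at 20 cm the figure halves, and so on. In other words, the answer is not a single number - it depends on how far from the joint axis the load sits.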
So let's say I have a three degrees-of-freedom robot with twists ${\xi}_{1}$, ${\xi}_2$, and ${\xi}_3$. The spatial Jacobian is given by $$ J = \begin{bmatrix}\xi_1 & Ad_{g1}{\xi}_2 & Ad_{g12}{\xi}_3\end{bmatrix} $$ I know that $$ Ad_{g1} = \begin{bmatrix}R_1 & p \times R_1\\ 0 & R_1\end{bmatrix} $$ However I am not sure how to calculate $Ad_{g12}$. Do I multiply $Ad_{g1} *Ad_{g2}$ or do I get the Transformation matrix of $\xi_1$ and $\xi_2$ and then use the formula for the adjoint?
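The adjoint map is a group homomorphism, so (assuming $g_{12}$ denotes $g_1 g_2 = e^{\hat\xi_1\theta_1}e^{\hat\xi_2\theta_2}$, as in the usual spatial-Jacobian construction)
$$\mathrm{Ad}_{g_1 g_2} = \mathrm{Ad}_{g_1}\,\mathrm{Ad}_{g_2},$$
so both options coincide: you can multiply $\mathrm{Ad}_{g_1}\mathrm{Ad}_{g_2}$, or first compose the transformation $g_{12}$ and plug its $R$ and $p$ into the adjoint formula - the result is the same matrix.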
I'm new to the field of robotics, so this is a pretty basic question: I'm assigned with the task of constructing a linear state space model of a 6DOF robotic arm that moves in 3D space. I believe the goal is to transform the original nonlinear problem to a piecewise linear problem for the purpose of designing a controller based on gain scheduling for example, and thus making use of the powerful linear control techniques and design tools. I'm in the process of linearizing the dynamics using the small disturbance theory, but it is a lengthy, time-consuming process that I feel is not very practical considering the alternatives. My question is: Is this a common practice in robotics? If not, is it practical and has it ever been done before, or should I consider an alternative approach?
I am looking for a method to sense in real time, using computer vision, for a control application. For this I am unsure which is the right platform. I am well versed in MATLAB, but I am wondering whether it will be able to do the image processing in real time or not. If there are other platforms which are much quicker, please recommend them to me.
The flight controller is setup using LibrePilot, no propellers are fitted, when I put the throttle of the transmitter to full the motors go to full RPM and then slowly decrease to zero RPM with no change in the position of the throttle. This repeats over time. Check this video for visual symptoms: Quad motors slowly decrease RPM over time with full throttle. This also happens when all the props are fitted.
I am developing an RRT (rapidly exploring random tree) for car-like robots in SE2 space using Dubins steering function and have a question that has implications on the performance of RRTs. In order for an RRT to be performant, an efficient nearest neighbor data structure needs to be used. There are efficient nearest neighbor data structures for metric spaces (like Euclidean space), however, none that I know of for a non-metric space (like the Dubins space). This leads me to wonder if I can use a different distance function than the Dubins curve length in my RRT despite using the Dubins steering function to connect states.
I want to build a robot, but I don't know if I want to build a quadrapod or a hexapod. I would like to use three servos/leg. Can you tell me what are the pros and cons of quadrapods compared to hexapods?
How can you control a servo driver (Delta, etc.) with an industrial PC? To control the position and velocity of a servo you need to send a PWM signal to the servo drive (amplifier), but how do we create that signal, and using which component? Would a 555 timer be sufficient?
Brushless motors, like regular motors, have different parts (commutator, stator, etc.). What do these various parts weigh?
Would a 12 V geared DC motor (about 140 rpm, with an encoder), PID controlled as a servo, give a steady response when subjected to a load?
My goal is to localize precisely within a known map. The size of the map is known - a rectangle of 4 x 3 m - and I know the initial position and orientation of the robot. After the robot has moved randomly for a certain period of time, can I know at which point of the map it is? I use the MPU-9250, a gyro, accelerometer and magnetometer sensor.
I have a 9.8kW Component that runs at 51.8 Volts. That means I need a battery that has about 9.8kWh of energy to run the component for an hour For the sake of simplicity, say I have some batteries that just happened to match that voltage (51.8V) simply laying around. How many mAh would that battery need to be to run this for an hour? My instinct is to simply say 98 Ah, but this doesn't seem right..
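A quick worked check of the arithmetic, ignoring losses:
$$I = \frac{P}{V} = \frac{9800\ \mathrm{W}}{51.8\ \mathrm{V}} \approx 189\ \mathrm{A} \;\Rightarrow\; \approx 189\ \mathrm{Ah} \approx 189{,}000\ \mathrm{mAh}$$
for one hour of runtime at full power - before accounting for the battery's allowed discharge rate (C-rating), usable depth of discharge and converter efficiency, all of which push the required capacity up rather than down.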
I have a Bugs 3 quadcopter, can I use a better transmitter from a helicopter for instance, a 2.4 GHz?
For a while now I have been trying to build my own robotic platform, kind of like a TurtleBot. However, I keep running into an issue with controlling the motors. My main controller for sensor data input and motor driving output is the Tiva C TM4C123G LaunchPad, which operates at 3.3 V, so I shift the signals to 5 V (which the motor driver operates on). The motor driver seems to respond correctly when the actual motor is not attached, but the LEDs that indicate the output do not light up when the motor is plugged in. Here is a video which explains my problem: https://www.youtube.com/watch?v=j-Da2iAS8L8 I have tried using an Arduino Mega instead of the LaunchPad, but unfortunately the 64 CPR encoders are too fast for the Mega to read, and I need the encoder data (I'm trying to perform SLAM). I honestly have no idea what is wrong with my circuit/boards. I appreciate any help or insight on this problem! Thank you! Since I'm also a high schooler, I unfortunately do not have access to nicer tools such as oscilloscopes...
In the undistortPoints function from OpenCV, the documentation (http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistortpoints) says that undistort() is an approximate iterative algorithm that estimates the normalized original point coordinates out of the normalized distorted point coordinates ("normalized" means that the coordinates do not depend on the camera matrix). It seems that the normalized point coordinates are turned into homogeneous coordinates by appending a 1 as the third coordinate. What do normalized point coordinates mean, and what can they be used for? In the documentation there are the two lines x" = (u - cx)/fx and y" = (v - cy)/fy. Is there a term for the coordinates (x'', y'')?
For the shown general serial link n-DOF robotic arm the joint inertial positions are given by $p_i$, where $i=1,...n$: I learned that the joints' inertial positions can be calculated in one of two ways: 1- $p_i=r_o + b_o + \sum _{k=1}^{i} (a_k+b_k)$ 2- As the first three elements of $\bar{p}_i$, where: $\bar{p}_i=T_o$ $^oT_1\text{...}$ $^{i-1}T_i$ $\bar{p}_0,$ $^{i-1}T_i$ is the homogeneous transformation matrix from coordinate $(i)$ to coordinate $(i-1)$, and $\bar{p}_0=\left( \begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \\ \end{array} \right). [1]$ By definition, the two methods should yield identical results. The problem is that they don't for a $\textbf{ wrist-partitioned manipultor}$. $\textbf{My question is: Where could I be going wrong?}$ In the following I describe the specifics of my calculations: 1- $a_i$ is calculated as $a_i=A_o$ $^oA_1 ...$ $^{i-1}A_i$ $^ia_i$ (where $^{i-1}A_i$ is the rotation matrix). $b_i$ is calculated in a similar manner. 2- The homogeneous transformation matrix is $^{i-1}T_i=\left( \begin{array}{cccc} \cos \left(\theta _i\right) & -\sin \left(\theta _i\right) & 0 & a_i \\ \cos \left(\alpha _i\right) \sin \left(\theta _i\right) & \cos \left(\alpha _i\right) \cos \left(\theta _i\right) & -\sin \left(\alpha _i\right) & -d_i \sin \left(\alpha _i\right) \\ \sin \left(\alpha _i\right) \sin \left(\theta _i\right) & \sin \left(\alpha _i\right) \cos \left(\theta _i\right) & \cos \left(\alpha _i\right) & d_i \cos \left(\alpha _i\right) \\ 0 & 0 & 0 & 1 \\ \end{array} \right)$ Where the rotation matrix,$^{i-1}A_i$, is the $3\times{3}$ top-left matrix. And $T_o$ is given by: $T_o=\left( \begin{array}{cc} A_o & r_o \\ 0 & 1 \\ \end{array} \right)$ Where: $A_o=\left( \begin{array}{ccc} \cos \left(\theta _{b_z}\right) & -\sin \left(\theta _{b_z}\right) & 0 \\ \sin \left(\theta _{b_z}\right) & \cos \left(\theta _{b_z}\right) & 0 \\ 0 & 0 & 1 \\ \end{array} \right).\left( \begin{array}{ccc} \cos \left(\theta _{b_y}\right) & 0 & \sin \left(\theta _{b_y}\right) \\ 0 & 1 & 0 \\ -\sin \left(\theta _{b_y}\right) & 0 & \cos \left(\theta _{b_y}\right) \\ \end{array} \right).\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \left(\theta _{b_x}\right) & -\sin \left(\theta _{b_x}\right) \\ 0 & \sin \left(\theta _{b_x}\right) & \cos \left(\theta _{b_x}\right) \\ \end{array} \right)$ represents the rotation of the base. For fixed base systems $A_o$ and $T_o$ are identity matrices. $\underline{EDIT}$: My robot model and the DH parameters I obtained for it are shown below: $ \alpha =[0^{\circ},90^{\circ},0^{\circ},-90^{\circ},0^{\circ},-90^{\circ}] \\ a=[0,0,a_2,0,0,0] \\ d=[d_1,d_2,0,d_4,0,0] \\ \theta =[\theta_1(t),\theta_2(t),\theta_3(t),\theta_4(t),\theta_5(t),\theta_6(t)] \\$ $\text{Where:} \\ $ $d_2=L_2 \\ d_1=L_1 \\ a_2=L_3 \\ d_4=L_5 $ $[1]$ Liu Haitao, Zhang Tie, "A New Approach to Avoid Singularities of 6-DOF Industrial Robot"
Is the robot described by the double-integrator model holonomic? Let's say we have a robot with dynamics described by the equations \begin{cases} \dot x = v, & \\ \dot v = \frac {1}{m}u \end{cases} where $x$ is the position of the robot, $v$ is the velocity, and $u$ is the robot's control input. Can we call this robot a holonomic robot?
A couple of years ago I bought a WLToys V222 quadcopter. I would like to know if there is a way to use the transmitter to control some DIY projects? Does anyone know how I could do that?
In camera imaging, there are several terms for point coordinates. World coordinates: [X, Y, Z], in physical units. Image coordinates: [u, v], in pixels. Do these coordinates become homogeneous coordinates by appending a 1? Sometimes in books and papers they are written as [x, y, w]. When is w used and when is 1 used? In the function initUndistortRectifyMap, http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#void%20initUndistortRectifyMap(InputArray%20cameraMatrix,%20InputArray%20distCoeffs,%20InputArray%20R,%20InputArray%20newCameraMatrix,%20Size%20size,%20int%20m1type,%20OutputArray%20map1,%20OutputArray%20map2) the following process is applied. Is there a term for the coordinates [x y 1]? I don't understand why R can be applied to [x y 1]. In my view, R is a transformation in 3D. Is [x y 1] a 2D point or a 3D point? The coordinates are processed according to the chain [u v] -> [x y] -> [x y 1] -> [X Y W] -> [x' y']. What is the principle behind it?
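Here is a short numpy sketch of the chain as I currently understand it; the intrinsics K and the rectification rotation R are made-up values (identity R), not from a real calibration:

    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])          # assumed intrinsics
    R = np.eye(3)                                   # assumed rectification rotation

    u, v = 400.0, 300.0
    xy1 = np.linalg.inv(K) @ np.array([u, v, 1.0])  # [x y 1]: homogeneous ray, not a 2D point
    XYW = R @ xy1                                    # rotating the 3D ray
    x_p, y_p = XYW[0] / XYW[2], XYW[1] / XYW[2]      # [x' y']: back on the normalized image plane
    print(x_p, y_p)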
I asked the following question on math.stackexchange, but realized that this might be the more appropriate place to post it: How did Thrun derive the following formula? (Context here) I think that this formula might either be wrong or make assumptions that are not guaranteed to be true (like statistical independence), but I am not sure which ones exactly. Any pointers are welcome :)
I am building a small robot on wheels. The robot will be moving at a small speed in a square area of known dimensions. I need to locate the robot and know its position at any given moment so I can correct its trajectory. I can add parts to the robot, like some sort of flag or lighting object. It's preferable not to put any border around the area, since that would limit the flexibility of the project.

Added info: The size of the robot is about 28 x 25 x 11 cm. It will run on land and the surface will ideally be flat (but since the friction between the surface and the wheels can vary in different places, I need to know whether the robot actually arrived at the destination, and corrections need to be made accordingly). I can place beacons around the field or any sort of sensors, as long as I can remove them easily and transport them. The surface will be about 1.5 x 1.5 m, depending on the range of the sensors or the method used for localization. The robot can start from a known position.

Some methods I thought of: triangulating the position using two ultrasonic sensors placed at two edges of the field, although I am not sure whether the ultrasonic sensors provide sufficient angular coverage and range for the entire area, and I am also concerned about the precision. Someone also suggested that I could use two infrared sensors placed at two edges, make them sweep 90 degrees back and forth to scan the area, and, as soon as they detect something, find the position of the object using triangulation. I still don't know whether these methods work well for a moving object.

If you think that one of the methods I described is the way to go, please give me more insight on how to implement it in my project. Here is an illustration of the path that the robot should follow. The basic idea of the project is that the robot should sow seeds at regular intervals, for example every 10 cm; this is why I need to know whether the robot really covered 10 cm since the previous seed, and whether it has to turn. I thought of making an imaginary grid: I tell the robot the position it has to reach, then, knowing the position of the robot, I make it turn until it is pointing towards the target point and then cover the distance between its position and that point. I am very new to robotics, so I would really appreciate a detailed answer.
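For the triangulation idea, this is roughly the geometry I have in mind; the beacon positions and the measured bearings below are made-up numbers, just to show the computation of intersecting two bearing rays:

    import math

    # Two beacons at known corners of the 1.5 m x 1.5 m field (assumed positions)
    bx1, by1 = 0.0, 0.0
    bx2, by2 = 1.5, 0.0

    # Bearings to the robot measured at each beacon, in radians (made-up values)
    a1 = math.radians(40.0)
    a2 = math.radians(120.0)

    # Each ray: (x, y) = (bx + t*cos(a), by + t*sin(a)); solve for the intersection
    d1x, d1y = math.cos(a1), math.sin(a1)
    d2x, d2y = math.cos(a2), math.sin(a2)
    den = d1x * d2y - d1y * d2x            # zero if the rays are parallel
    t = ((bx2 - bx1) * d2y - (by2 - by1) * d2x) / den
    x, y = bx1 + t * d1x, by1 + t * d1y
    print("robot at (%.3f, %.3f) m" % (x, y))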
How do I control an industrial servo motor? The specific motor drive I am looking at is a Delta ASDA-A2 series AC servo drive. I know that we can connect the motor drive to the software that the company provides, but I want to control it using code written in C or C++. I see it has a USB connector. What data should I send?
I am fairly knowledgeable about robotics programming and related topics, but I am totally ignorant about how to actually build a robot. My question is: what materials do you recommend for building a robot, and where can I buy them? It does not matter what kind of robot; I know that a robotic arm is very different from a mobile robot, and I know the theory. Other than Lego bricks (which I have heard of), I am imagining silicon frames... I really don't know. I have developed some quite nice devices with image processing, computer vision and so on, and I also know how to program several kinds of motors. I just don't know how to put all of this inside a nice frame and show it to the world.
I'm following these instructions on how to install Indigo: http://wiki.ros.org/indigo/Installation/Ubuntu

When I enter the command

    sudo apt-get install ros-indigo-desktop-full

I get the following output:

    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     ros-indigo-desktop-full : Depends: ros-indigo-desktop but it is not going to be installed
                               Depends: ros-indigo-perception but it is not going to be installed
                               Depends: ros-indigo-simulators but it is not going to be installed
                               Depends: ros-indigo-urdf-tutorial but it is not going to be installed
     unity-control-center : Depends: libcheese-gtk23 (>= 3.4.0) but it is not going to be installed
                            Depends: libcheese7 (>= 3.0.1) but it is not going to be installed
    E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

According to the instructions I am following, unmet dependencies should be solved with either the command

    sudo apt-get install xserver-xorg-dev-lts-utopic mesa-common-dev-lts-utopic libxatracker-dev-lts-utopic libopenvg1-mesa-dev-lts-utopic libgles2-mesa-dev-lts-utopic libgles1-mesa-dev-lts-utopic libgl1-mesa-dev-lts-utopic libgbm-dev-lts-utopic libegl1-mesa-dev-lts-utopic

or

    sudo apt-get install libgl1-mesa-dev-lts-utopic

However, when I try to use these commands I get "unable to locate package" errors such as

    E: Unable to locate package libgl1-mesa-dev-lts-utopic

I'd appreciate any advice on how to solve these dependency issues. I've already asked on ROS Answers but received no answer, and the technician in charge of the computer lab I'm working in hasn't yet found a solution, so... help me Stack Exchange, you're my only hope.

P.S. I'm assuming that this question goes in the robotics section because it's related to ROS, but if this belongs in the Ubuntu or main Stack Overflow section please let me know.

Edit: things I have tried so far
1. sudo apt-get update and sudo apt-get upgrade; changed nothing.
2. When I type in apt-cache libgl1-mesa-dev-lts-utopic I get: E: Invalid operation libgl1-mesa-dev-lts-utopic
3. When I type in apt-cache search libgl1-mesa-dev-lts-utopic I get no output.
4. When I type in ls /etc/apt/sources.list.d I get: ros-latest.list ros-latest.list.save
I want to recover the trajectory of a vehicle using a monocular camera via the computation of the essential matrix between t-1 and t. When I used OpenCV, I got a correct trajectory with respect to the ground truth. However, I want to code all the functions in Matlab, but I got garbage when I plotted the trajectory, and I think it is related to a scale factor problem. In fact, the essential matrix outputted by the OpenCV function is the following (between two consecutive frames): $$ E = \begin{bmatrix} 0.0052 & -0.7068 & 0.0104\\ 0.7063 & 0.0050 & -0.0305\\ -0.0113 & 0.0168 & 0.0002\\ \end{bmatrix}$$ After decomposing it into rotation and translation and triangulating 5 2D points, I got the following 3D points: $$ X1 =\begin{bmatrix} -0.0940& 0.0478& -0.4984\\ -0.0963& 0.0497& -0.4987\\ 0.3033& 0.1009& -0.5202\\ -0.0065& 0.0636& -0.5053\\ -0.0737& 0.0653& -0.5011\\ \end{bmatrix}$$ Now, the essential matrix outputted by the Matlab functions is the following: $$ E2=\begin{bmatrix} -0.2153 & 0.9573 & 0.1626\\ 0.8948 & 0.2456 &-0.3474\\ 0.1003 & 0.1348 &-0.0306\\ \end{bmatrix}$$ When decomposing it into rotation and translation as well, and triangulating the points, I got the following 3D points: $$ X2 =\begin{bmatrix} 0.1087& -0.0552& 0.5762\\ 0.1129& -0.0578& 0.5836\\ 0.4782& 0.1582& -0.8198\\ 0.0028& -0.0264& 0.2099\\ 0.0716& -0.0633& 0.4862\\ \end{bmatrix}$$ To verify the results, I did: $$ X1./X2 =\begin{bmatrix} -0.8644 & -0.8667 & -0.8650\\ -0.8524 & -0.8603 & -0.8546\\ 0.6343 & 0.6376 & 0.6346\\ -2.3703 & -2.4065 & -2.4073\\ -1.0288 & -1.0320 & -1.0305\\ \end{bmatrix}$$ An almost constant scale factor seems to exist between the first and second estimations within each point. I think that the scale factor has to be the same for all 3D points to get a correct trajectory when plotting it. How can I maintain a consistent scale factor?
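To illustrate how I am comparing the two results, here is a small numpy sketch; since the translation recovered from an essential matrix is only defined up to scale (and sign), I normalize before comparing. The vectors are the ones from above.

    import numpy as np

    t1 = np.array([0.1159, 0.1042, -0.9878])   # from OpenCV
    t2 = np.array([0.2776, -0.0771, 0.6458])   # from my Matlab code

    # Compare unit vectors instead of raw values
    u1 = t1 / np.linalg.norm(t1)
    u2 = t2 / np.linalg.norm(t2)
    print(u1, u2, np.dot(u1, u2))   # a dot product near +/-1 would mean "same direction"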
I need help finding a linear actuator. I only need it to move 15 mm but it also needs to move extremely fast. Preferably 1 inch per second. It needs to be capable of holding a 10 lb weight. The stroke speed is at full load. Also, I would greatly prefer it to be fairly cheap because I need to buy 4.
My setup is such that my tools reside on a wall and table directly adjacent to my work bench. I wish to build an arm that will ultimately grab tools and give them to me without me having to move from my bench. I'm wondering if a scara or articulated robotic arm is better suited for this task. Payloads shouldn't exceed 5kg and distance is not fully a factor since the robot might ride a track
In papers, the terms object pose estimation and object tracking are used side by side with different meanings. Can somebody explain the difference to me?
I have an IMU that is outputting the following measurements:

    accelx = 0.000909228 (g's)
    accely = -0.000786797 (g's)
    accelz = -0.999432 (g's)
    rotx = 0.000375827 (radians/second)
    roty = -0.000894705 (radians/second)
    rotz = -0.000896965 (radians/second)

I would like to calculate the roll, pitch and yaw, and after that the orientation matrix of the body frame relative to the NED frame. So I do

    roll = atan2(-accely, -accelz);
    pitch = atan2(-accelx, sqrt(pow(accely,2) + pow(accelz,2)));
    sinyaw = -roty*cos(roll) + rotz*sin(roll);
    cosyaw = rotx*cos(pitch) + roty*sin(roll)*sin(pitch) + rotz*cos(roll)*sin(pitch);
    yaw = atan2(sinyaw, cosyaw);

and I get (in radians):

    roll = 0.000787244
    pitch = -0.000909744
    yaw = 1.17206

However, the IMU is also outputting what it calculates for roll, pitch and yaw. From the IMU, I get:

    roll: -0.00261682
    pitch: -0.00310018
    yaw: 2.45783

Why is there a mismatch between my roll, pitch and yaw and the IMU's? Additionally, I found this formula for the initial orientation matrix. Which way of calculating the orientation matrix is more correct: R1(roll)*R2(pitch)*R3(yaw), or the form above?
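For reference, this is essentially my calculation written out as a runnable Python sketch, using the measurements above; the Rz*Ry*Rx (yaw-pitch-roll) order for the orientation matrix is my assumption, which is part of what I am asking about:

    import math
    import numpy as np

    ax, ay, az = 0.000909228, -0.000786797, -0.999432      # accel in g's
    gx, gy, gz = 0.000375827, -0.000894705, -0.000896965   # gyro in rad/s

    roll  = math.atan2(-ay, -az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))

    sin_yaw = -gy * math.cos(roll) + gz * math.sin(roll)
    cos_yaw = (gx * math.cos(pitch)
               + gy * math.sin(roll) * math.sin(pitch)
               + gz * math.cos(roll) * math.sin(pitch))
    yaw = math.atan2(sin_yaw, cos_yaw)
    print(roll, pitch, yaw)   # reproduces 0.000787, -0.000910, 1.172

    def Rx(a): return np.array([[1, 0, 0], [0, math.cos(a), -math.sin(a)], [0, math.sin(a), math.cos(a)]])
    def Ry(a): return np.array([[math.cos(a), 0, math.sin(a)], [0, 1, 0], [-math.sin(a), 0, math.cos(a)]])
    def Rz(a): return np.array([[math.cos(a), -math.sin(a), 0], [math.sin(a), math.cos(a), 0], [0, 0, 1]])

    R_nb = Rz(yaw) @ Ry(pitch) @ Rx(roll)   # body -> NED, assuming Z-Y-X Euler angles
    print(R_nb)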
I am designing a robot which irons and folds clothes autonomously. For this purpose, I need the robot to detect a certain number of key points on a given piece of clothing in order to execute a certain folding or ironing algorithm. The following images best describe what I mean by key points. For a T-shirt, the first image shows the desired key points; for a pair of trousers, the second image shows the desired key points.

My first thoughts: my first plan was to use neural networks to detect the various key points after recognising whether the item is a T-shirt, a pair of trousers, etc. But I believe the problem with this approach is that collecting enough data to train the neural network to detect these key points is a massive task. Furthermore, I am not even sure whether neural networks can produce the results I am looking for, because generally neural networks work well at differentiating well-defined classes of items, for example cats and dogs.

So my question is: is there a better way to achieve what I am trying to do? Any help is appreciated, thank you.

UPDATE: Ideally, what I want to do is identify each of these key points precisely; meaning, for example, that the system should know where the collar region of a T-shirt is and track its location at all times. I need to know what each key point is (is it a zip area, a shirt's shoulder, or a collar region?) in order to execute a certain folding algorithm. So in this case, I believe, convexity defects do not work.
I have a body whose origin is at point A, represented as a 4x4 homogeneous transformation matrix, and I would like to rotate it around an arbitrary vector in 3D space. I understand how to rotate the body around its own frame axes, but I'm not sure how to rotate it around an arbitrary vector (axis) in space.
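A minimal numpy sketch of what I am after, assuming the axis is given by a point p it passes through and a unit direction k (both made up here): build the rotation about k with Rodrigues' formula, conjugate it with translations so the axis passes through p, and apply the result to the body's 4x4 pose.

    import numpy as np

    def rot_about_axis(p, k, theta):
        """4x4 transform: rotate by theta about the line through point p with direction k."""
        k = np.asarray(k, float) / np.linalg.norm(k)
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)  # Rodrigues
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.asarray(p, float) - R @ np.asarray(p, float)  # Trans(p) Rot Trans(-p)
        return T

    body = np.eye(4)                 # pose of the body with origin at A (made up)
    body[:3, 3] = [1.0, 0.0, 0.0]

    axis_point = [0.0, 0.0, 0.0]     # assumed point on the rotation axis
    axis_dir   = [0.0, 0.0, 1.0]     # assumed axis direction
    new_body = rot_about_axis(axis_point, axis_dir, np.deg2rad(90)) @ body
    print(new_body)                  # the body origin moves from (1,0,0) to (0,1,0)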
Why are there no simple industrial robots (mainly for simpler tasks like pick and place) that are: cheap (<= $3000), lightweight (<= 10 kg), and fast (600 mm/s at the end effector)? Most of the robots I have looked at cost more than $10k. Many of the cheap ones use 3D-printed (plastic) components, and I'm not sure whether those robots can survive continuous, long operating hours in factories. What is the bottleneck in developing such a system? Why haven't established companies entered this market? Could someone give some insight on this?
I'm working on a self-balancing robot project and am in the process of motor sizing. I've read that with self-balancing robots the motors need to switch direction quickly, which can cause them to briefly draw more than twice the stall current when changing from full speed in one direction to full speed in the other. So I'm wondering: once I find a motor with a suitable rated torque to drive the robot, do I need to work out whether the battery can supply a current of twice (or more) the stall current of the motors, and how long it would last before being drained? e.g. Motor stall current: 5000 mA = 5 A. 12 V Ni-MH battery: 3800 mAh = 3.8 Ah. $$I_{stall} = (5A*2) = 10A \,\,\,\,\,(2\, motors)$$ $$C_{battery} = 3.8Ah$$ $$I_{stall} = \frac {C_{battery}}{t}$$ $$t = \frac {C_{battery}}{I_{stall}}$$ $$t = \frac {3.8Ah}{10A} = 0.38h$$ So the battery could supply twice the stall current for 0.19 h. Am I understanding this correctly?
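A tiny sketch of the arithmetic I am doing; it ignores Peukert-type effects, voltage sag and the pack's maximum discharge rating, which I am treating as assumptions:

    capacity_ah = 3.8          # 3800 mAh Ni-MH pack
    stall_current_a = 5.0      # per motor
    n_motors = 2

    for multiple in (1, 2):                      # stall current, and "twice stall"
        current = multiple * n_motors * stall_current_a
        hours = capacity_ah / current
        print("%.0f A -> %.2f h (%.1f min)" % (current, hours, hours * 60))
    # prints 10 A -> 0.38 h and 20 A -> 0.19 h, matching the figures above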
I am trying to create the following setup as a project in education:
- QAV250 quad with CC3D controller
- 2x BBC Micro:bits used as the receiver, giving out 50 Hz PWM signals to the CC3D controller (one does throttle and yaw, one does pitch and roll)
- 2x BBC Micro:bits sending out the controlling signals

I can get past the receiver setup wizard and even arm the quad; however, I'm getting erratic behaviour, particularly on the throttle. Despite commanding it low, the throttle sometimes remains high. I don't know if this is linked, but on the receiver calibration page of LibrePilot the input values only update every second (PC plugged into the CC3D). Should these update more frequently? Also, the image of where the sticks are during calibration rarely moves. Possible reasons I have thought of are:
- The BBC Micro:bit runs at 3 V. Is this insufficient for the PWM signal? (Although I just tried with a Raspberry Pi and an Adafruit PWM adapter and the same laggy behaviour shows in LibrePilot.)
- Is there a syncing issue in the PWM (i.e. two Micro:bits outputting PWM signals which aren't synchronised)? Unfortunately the Micro:bit doesn't reliably cope with four simultaneous PWM outputs, hence the need to use two.

My goal is to show the power of the Micro:bit, despite the fact it's aimed at children, but is this just too much for it?!
I am relatively new to using drones and was wondering if someone more experienced in the topic could lend a hand. I am familiar with C++ and with OpenCV, a computer vision library with C++ APIs that I use for facial recognition. I would like to be able to use this software to control a drone (engaging it, turning it on or off; flight patterns would be a plus). In other words, I need someone to point me in the right direction to find the right materials so that my C++ code can control the drone. If anyone can point me to reading materials or specific drones to purchase, I would be extremely grateful. Note: I used quadcopter as a tag because I did not have enough reputation to use drone.
I want to plot the path of a vehicle via the estimation of egomotion based on the essential matrix. Everything was fine with OpenCV and the following function.

    function [ xnow ] = estimate_pose_test( points1, points2, K, xLast )
    %%%% OpenCV %%%%
    E = cv.findEssentialMat(points1, points2, 'CameraMatrix',K, 'Method','Ransac');
    [R, t] = cv.recoverPose(E, points1, points2, 'CameraMatrix',K);
    ry = asin(R(1,3));
    u = [t(1,1); t(3,1); ry];
    %%%% OpenGV %%%%
    % for i=1:size(points1,2)
    %     I1(:,i)=points1{i}';
    %     I2(:,i)=points2{i}';
    % end
    % temp = K \ [I1; ones(1,size(I1,2))];
    % I1_norms = sqrt(sum(temp.*temp));
    % I1n = temp ./ repmat(I1_norms,3,1);
    %
    % temp = K \ [I2; ones(1,size(I2,2))];
    % I2_norms = sqrt(sum(temp.*temp));
    % I2n = temp ./ repmat(I2_norms,3,1);
    %
    % X = opengv('fivept_nister_ransac',I1n,I2n);
    % R = X(:,1:3);
    % t = X(:,4);
    % ry = asin(R(1,3));
    % u = [t(1,1); t(3,1); ry];
    %
    theta = xLast(3) + u(3);
    if(theta > pi)
        theta = theta - 2*pi;
    elseif(theta < -pi)
        theta = theta + 2*pi;
    end
    s = sin(xLast(3));
    c = cos(xLast(3));
    % actual value added with the new control vector
    xnow = [xLast(1:2) + [c s; -s c]*u(1:2); theta];
    end

points1 and points2 are corresponding SURF features. K is the internal calibration matrix. However, I want to use the OpenGV library. OpenGV expects normalized coordinates on the unit sphere, so I started by transforming the measurements as recommended in the previous link and shown in the commented part of the function above. The plotted path was totally wrong, and the results from OpenCV and OpenGV are different. For example, for the same two consecutive frames, from OpenCV I obtained the following rotation and translation:

    R1 =  0.9999    0.0016   -0.0153
         -0.0017    1.0000   -0.0054
          0.0153    0.0055    0.9999

    t1 =  0.1159
          0.1042
         -0.9878

And with OpenGV,

    R2 =  0.9998   -0.0059   -0.0167
          0.0060    1.0000    0.0050
          0.0166   -0.0051    0.9998

    t2 =  0.2776
         -0.0771
          0.6458

There is not even a constant scale factor between t2 and t1. Where is the problem: in the plotting of the results or in the estimation itself?

Edit: I know that the function I wrote doesn't really make sense, but it is just a test function to illustrate the problem I faced with OpenGV. First, what I want to do is track the state of a vehicle defined by a reduced state vector $q_k=(x,z,\theta)$ in order to estimate, afterwards, the uncertainty on the position, because I'm working with a probabilistic approach where: $q_{k|k-1}=f(q_{k|k-1},u_{k-1})$. So I need the values of the control vector $u$; that's why I'm deriving $dx$, $dz$ and $d\theta$ from the outputted $R$ and $t$. Then I'm plotting the path from the control vector values, just to verify that I'm deriving them correctly, by comparing the plotted path to the ground truth. Regarding the coordinate reference system, I'm using the following definition: the vehicle is moving in the $XZ$ plane and the rotation is around the $Y$ axis, which is why I'm using $t(1), t(3)$ as position and $ry$ as rotation in the control vector.
I've seen that with drones the norm is to program three control loops, one for each axis, but I'm not quite sure what a control loop is when the code must run in a sequential manner. Programmatically, what IS a control loop?
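To show where my confusion is, this is the shape of what I think people mean by a control loop; the sensor and actuator functions are placeholders and the PID gains are made up:

    import time

    def read_angle():            # placeholder for the real sensor read
        return 0.0

    def set_motor(output):       # placeholder for the real actuator write
        pass

    kp, ki, kd = 1.0, 0.1, 0.05  # made-up gains
    setpoint, integral, prev_err = 0.0, 0.0, 0.0
    dt = 0.01                    # aim for a 100 Hz loop

    while True:                  # the "control loop": the same steps run over and over, sequentially
        err = setpoint - read_angle()
        integral += err * dt
        derivative = (err - prev_err) / dt
        set_motor(kp * err + ki * integral + kd * derivative)
        prev_err = err
        time.sleep(dt)           # crude fixed-rate timing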
There are two controller-design methods named $H_2$ and $H_\infty$. Can someone explain:
- Do they have guaranteed stability margins?
- Do they include a Kalman filter?
- Are they for both SISO and MIMO systems?
- Can I use them with LQG?
- Which one is best?
If you know how to create a basic $H_2$ or $H_\infty$ controller, can you please show me step by step?
I am programming my Create 2 with an Arduino Uno. It worked fine a few weeks ago; then I just left it there, charging it about once a week. Now when I try to play with it again, I find that it starts to send weird sensor data. For example, the value of the light bumper data (packet ID 45) is supposed to be within the range 0-127. It used to be fine, but now I receive 254 when there is no obstacle in front of the robot. The same issue happens with many different sensor readings. Can someone help me with this problem?
Do other Roombas have an Open Interface like the Create 2? Can I send commands to my 780 like the Create 2? I am trying to control my Roomba 780. I made an 8-pin FTDI-to-USB connector and my Roomba beeped once when plugged in. The logic level is 3.3 V. Here is my Python code:

    #!/usr/bin/env python
    import serial
    import time

    # start serial connection with the Roomba
    usbCom = serial.Serial(port='/dev/ttyAMA0', baudrate=115200)

    # initialize the Roomba and put it in safe mode
    usbCom.write(bytes.fromhex('80'))
    time.sleep(.1)
    usbCom.write(bytes.fromhex('83'))
    time.sleep(.1)

    # spot clean mode
    usbCom.write(bytes.fromhex('87'))

I've sent my serial data from my Raspberry Pi to my Roomba but it is not responding; it seems to stop at the first line where I send a byte.
I'm building a robot with a Raspberry Pi and an Arduino. I want it to charge automatically, meaning the robot should be able to find the charging station by itself, but I don't know what kind of sensor would make this possible. Wi-Fi and Bluetooth are not good choices. Any ideas on how to make this work? Thank you!
I have a mobile robot which receives GPS position (lat/lon) and has an IMU for handling gaps in GPS service. I want to be able to conduct short-distance planning in a Cartesian plane, but the robot will ultimately be traveling over long distances. Most references I have found describe using a tangential North-East-Down (NED) frame anchored at the robot's initial position for local planning. This is fine, but I am not sure how to go about updating this plane as the robot moves. If I were to update (change the origin of) the frame every 5 minutes, I would need to compute many new transformations at that moment and potentially introduce a repeating lag in the system. How can I avoid this?
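For reference, the local conversion I am currently doing is essentially this flat-Earth (equirectangular) approximation about the frame origin; the coordinates below are made up and a spherical Earth radius is assumed:

    import math

    R_EARTH = 6371000.0                       # mean Earth radius, m (spherical assumption)

    def lla_to_ne(lat, lon, lat0, lon0):
        """Approximate North/East offsets (m) of (lat, lon) from the origin (lat0, lon0), degrees in."""
        north = math.radians(lat - lat0) * R_EARTH
        east  = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
        return north, east

    # made-up origin and robot fix, roughly 100 m apart
    print(lla_to_ne(52.2010, 0.1190, 52.2000, 0.1180))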
I am not an expert on this topic and I know this is kind of an old thread, but I'm facing the same issue and I would like some help or advice. I am using an Arduino and a RoboClaw 2x7A (old version). At first I was also stopping the motors using roboclaw.SpeedAccelM2(address, 0, 0) on each one and it worked, but later I saw it was still consuming current. I used your suggestion and it works, but only for M1; M2 doesn't seem to stop receiving current. It is less than an ampere, but I would like it to be zero. I am uploading a piece of code so I can explain myself and show you what I am doing. This is the part where it checks whether the error (difference between origin and goal) is greater than the deadzone: if greater, keep moving; if lower, the motor stops by itself, but it must also stop receiving current.

    // M1 > Azimut    // M2 > Zenith
    if (abs(error1) > deadzone1) {
      roboclaw.SpeedAccelDeccelPositionM1(address, 0, 0, 0, posicionM1, depth1);
    } else {
      Serial.println("Azimut - error es menor que deadzone");
      roboclaw.DutyM1(address, 0);
    }

    if (abs(error2) > deadzone2) {
      roboclaw.SpeedAccelDeccelPositionM2(address, 0, 0, 0, posicionM2, depth2);
    } else {
      Serial.println("Zenith - error es menor que deadzone");
      roboclaw.DutyM2(address, 0);
    }

And this is the output (with some extra info):

    Azimut - error es menor que deadzone
    Zenith - error es menor que deadzone
    Encoder1:15531 80 Speed1:0 Temp: 47.10 error : -24
    Encoder2:15474 80 Speed2:0 Temp2: 0.00 error : -81
    Pos Obj - M1 : 15555   Pos Real - M1 : 19.97   azimuthGlobal - M1 : 20.00
    Pos Obj - M2 : 15555   Pos Real - M2 : 19.90   zenithGlobal - M2 : 20.00
    PWM Zen : 0   Azi : 0
    switch Pin Zen : 0   Azi : 0   e-Stop : 0
    total revs Zen : 280000   Azi : 280000
    Corrientes Zen : 0.06   Azi : 0.00

At the end, as you can see, the Zen motor is still receiving 0.06 A, but the other motor is fully stopped. Why does this only work with one of them? Or is there some configuration I am doing wrong?
I've been a fan of first-person-view drone racing for years now and, in thinking of building a new quadcopter, I have a question I'm unable to resolve myself. Please correct me if I'm wrong: as I understand it, a brushless motor has a maximum torque and power (watts); if you push it beyond its limits it will start heating and eventually burn. A brushless motor will always try to match the commanded speed, drawing as many amps as needed and so increasing the power used. Any brushless motor has a Kv value, related to the coil windings, that determines the rpm it will reach without load per volt applied. Then, can I say, for example, that 1000 Kv at 1 volt will be equivalent to 500 Kv at 2 volts in terms of power, rpm and torque? So could I say it will generate the same lift? Of course more voltage will reduce the amps, but that's not related to the question, I think. If this is true, I can't understand why actual racing drones do not run higher Kv and fewer cells, since cells add weight and Kv does not.

EDIT: As an example of my question, the DRL RacerX broke the speed world record using the T-Motor F80, which is a 2408 at 2500 Kv. The manufacturer says it should be run at 16 V, reaching about 680 W, while the 1900 Kv version is rated for 22 V at around 1000 W. The DRL ran the 2500 Kv version at 42 V (2 x 5S), still with a 5-inch prop (surely the motor will not survive that for long); they just seemed to increase the amp rating of the controllers and hope the motor would last long enough. Why did they use such a high voltage, since that will not only increase the total weight but also increase the amps needed (as it increases the load)? I'm sure they did their research and made the best choice they could, but I want to understand why that is the best way of achieving it. The main question that comes to my mind is: why not a larger prop? It should be much more efficient, right? And that drone is built for a maximum-speed record, so maneuverability is not needed.
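The back-of-the-envelope arithmetic behind my question, ignoring load and losses (so these are idealised no-load figures):

    setups = [
        ("2500 Kv on 4S (~16 V)",  2500, 16.0),
        ("1900 Kv on 6S (~22 V)",  1900, 22.0),
        ("2500 Kv on 10S (~42 V)", 2500, 42.0),
    ]
    for name, kv, volts in setups:
        print("%-25s -> %6.0f rpm unloaded (Kv * V)" % (name, kv * volts))
    # 40000, 41800 and 105000 rpm respectively: the first two Kv/voltage pairs
    # are nearly equivalent in the "Kv * V" sense my question is about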
I am considering designing an equatorial mount for astrophotography. I have been researching the best way to do that, and I am thinking of using a stepper motor with a gearbox. The thing is that it needs to turn at a really low speed: 1 rev/day. If I pick any Nema X motor I can get 200/400 steps per revolution without a gearbox. The largest reduction I have found is 100:1 (this is an example: http://www.ebay.com/itm/Nema17-23-34-Planetary-Gear-Ratio-5-1-10-1-15-1-20-1-30-1-40-1-50-1-100-1-Reduce-/201844830182?var=&hash=item2efee3afe6:m:mPuljSiQP2wMMZ69Cmf4BYA). That means I can get 40,000 steps/rev (400 step/rev motor), which would mean running the motor at 40,000/(60*24) ≈ 28 steps/min, i.e. about 2.2 s/step. I don't think the operation is smooth enough, so it might show in the pictures. I have been researching similar projects and I have seen that they picked 1 rpm at the motor as acceptable for that smoothness (0.15 s/step for a 400-step motor). That means I need a gearbox capable of around 1500:1. My question is as follows: does it make sense to add 2-3 gearboxes in series to get that value, like those ones plus adaptors for the output shafts, since they are different in size to the input shafts (http://www.ebay.com/itm/NEMA34-NEMA23-Stepper-Motor-Planetary-Gearbox-36-1-30-1-24-1-16-1-10-1-6-1-4-1/122395466919?_trksid=p2047675.c100005.m1851&_trkparms=aid%3D222007%26algo%3DSIM.MBE%26ao%3D2%26asc%3D44757%26meid%3D57a6af176f9b48e0b9f2042efc480bc1%26pid%3D100005%26rk%3D3%26rkt%3D6%26sd%3D122395466919&var=422871734736)? An example could be adding first 30:1, then 30:1, then 2:1 (1800:1 all together). Or would it be better to design a planetary/spur gearbox to satisfy those requirements? Thank you. I hope this is the right forum; I haven't found one suitable for mechanics.
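This is the arithmetic I am using to compare candidate stacked ratios; 400 steps/rev at the motor and a solar (not sidereal) day are assumptions:

    motor_steps_per_rev = 400          # assumed half-stepping
    day_s = 24 * 60 * 60               # one output revolution per day (solar day assumed)

    for ratios in [(100,), (30, 30, 2), (50, 30)]:
        total = 1
        for r in ratios:
            total *= r
        steps_per_rev = motor_steps_per_rev * total
        sec_per_step = day_s / steps_per_rev
        print("%-10s -> %8d steps/rev, %.2f s/step" % ("x".join(map(str, ratios)),
                                                       steps_per_rev, sec_per_step))
    # 100:1 gives ~2.2 s/step; 30x30x2 (1800:1) gives ~0.12 s/step, close to the 1 rpm target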
I'd like a two-wheeled robot to travel at 1m/s, given a wheel radius of 0.03m. I've calculated the rated speed required by the gearmotor as 318.28RPM: $v = \omega r$, $\displaystyle{\frac{v}{r} = \omega}$ $\displaystyle{\frac{1 \mathrm{\frac{m}{s}}}{0.03 \,\mathrm{m}}} = $ $33.33 \mathrm{\frac{radians}{s}}$ $33.33 \mathrm{\frac{radians}{s}} \left(\frac{1 \,\mathrm{rev}}{2 \pi \, \mathrm{radians}}\right) \left(\frac{60 \, \mathrm{s}}{1 \, \mathrm{min}}\right)$ $= 318.28 \mathrm{\frac{rev}{min}}$ Is this correct? On a side note, would an acceleration value of 0.5m/s^2 be a good choice for the robot, based on the velocity I've chosen? Help appreciated, thanks.
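A quick check of the same calculation in Python, using the numbers above:

    import math

    v = 1.0          # desired speed, m/s
    r = 0.03         # wheel radius, m
    omega = v / r                      # required wheel speed, rad/s
    rpm = omega * 60 / (2 * math.pi)
    print("%.2f rad/s = %.2f rpm" % (omega, rpm))   # ~33.33 rad/s, ~318.3 rpm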
I'm currently working on calibrating a robot's tool. I have found some simple methods to get the tool centre point, but I'm unsure how to get the tool's orientation relative to the robot's tip (the point where the tool is attached to the robot). The assumption here is that the tool-free robot is calibrated. I'm looking for a solution which doesn't use a camera or expensive sensors. EDIT: The robot is 6-DOF. The accuracy I'm looking for is approximately 0.1 mm. The programming will be in C (how is this information related to the calibration method?). Could someone guide me on this?
I am trying to move a robot in a straight line from point A to point B. The robot's primary sensor is a Hokuyo URG-04LX-UG01 LIDAR that gives me the distance and direction of each point it detects, in the form of two arrays. I also have wheel encoders on each motor, so I can obtain some odometry for the robot, even though its accuracy will diminish over time. The problem I have is that there are obstacles between point A and point B that the robot must go around. I am not sure how to take the readings from the LIDAR and convert them into movements that go around the obstacle while still heading towards B. Sorry if that doesn't make sense, but I am really confused about how to solve this problem. Is this an easy problem that I am just over-complicating? Does anyone know of a way to do this? Any help or guidance would be greatly appreciated.
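To make the question concrete, the naive approach I have been considering looks like this: a very rough "pick the clearest heading near the goal bearing" sketch, not a proper planner. The scan format, the clearance threshold and the test scan below are all assumptions, and it ignores the robot's width.

    import math

    def pick_heading(angles, ranges, goal_bearing, clear_dist=0.8):
        """angles/ranges: one LIDAR scan in the robot frame; goal_bearing: direction to B (rad).
        Returns the unblocked scan direction closest to the goal bearing, or None if all blocked."""
        best, best_cost = None, float("inf")
        for ang, rng in zip(angles, ranges):
            if rng < clear_dist:              # this direction is blocked
                continue
            # wrapped angular distance to the goal bearing
            cost = abs(math.atan2(math.sin(ang - goal_bearing),
                                  math.cos(ang - goal_bearing)))
            if cost < best_cost:
                best, best_cost = ang, cost
        return best

    # made-up scan: free everywhere except straight ahead
    angles = [math.radians(a) for a in range(-120, 121, 4)]
    ranges = [0.4 if abs(a) < math.radians(10) else 2.0 for a in angles]
    print(pick_heading(angles, ranges, goal_bearing=0.0))   # steers slightly off to one side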
I'm puzzling over the remake of an impact-testing machine. It's a sort of robot arm that pushes a probe against a target to test its characteristics. Right now the machine uses air pistons, because it's easy to set the force to exert. The probe is a ball of about 0.5 kg and 8 cm in diameter. The machine should control three main parameters:
1. speed at the moment of the impact (roughly between 100 mm/s and 20 m/s)
2. force to exert against the target (roughly between 5 N and 100 N)
3. duration of the hold after the impact (between 100 ms and 10+ s)

The goal is to redesign the machine using only electrical actuators, like steppers or other motors, avoiding pneumatic systems. I'm looking for ideas on how to design the mechanical device to meet the requirements. Some ideas that won't work: using a standard linear actuator or a stepper coupled with an endless screw (you can easily control parameters 1 and 3, but 2 is very hard to achieve); using a cam to accelerate and launch the probe without a hard link to the mechanism (you cannot control the third parameter, and hence neither the second). Any thoughts?
Sorry for asking again, but there is still no solution and I am not able to comment on this question from Aaron: Cannot disable sleep in passive mode for iRobot Create 2. The Create 2 spec says: "In Passive mode, Roomba will go into power saving mode to conserve battery power after five minutes of inactivity. To disable sleep, pulse the BRC pin low periodically before these five minutes expire. Each pulse resets this five minute counter. (One example that would not cause the baud rate to inadvertently change is to pulse the pin low for one second, every minute, but there are other periods and duty cycles that would work, as well.)" Here you see the signal at the BRC pin at the connector: Is there a newer firmware? Roomba output:

    bl-start
    STR730
    bootloader id: #x47135160 6BFA3FFF
    bootloader info rev: #xF000
    bootloader rev: #x0001
    2007-05-14-1715-L
    Roomba by iRobot!
    str730
    2012-03-22-1549-L
    battery-current-zero 258
    2012-03-22-1549-L
    r3_robot/tags/release-3.2.6:4975 CLEAN
    bootloader id: 4713 5160 6BFA 3FFF
    assembly: 3.3
    revision: 0
    flash version: 10
    flash info crc passed: 1
    processor-sleep

If the Roomba is already sleeping, it wakes up from the pulse, so there is no chance of it not receiving the pulse. Any idea?
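For reference, this is how I am generating the pulse on a Raspberry Pi with RPi.GPIO; the BCM pin number is just whatever the BRC line happens to be wired to on my adapter, so treat it as an assumption:

    import time
    import RPi.GPIO as GPIO

    BRC_PIN = 17                      # BCM pin wired to the Create 2 BRC line (assumption)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BRC_PIN, GPIO.OUT, initial=GPIO.HIGH)

    try:
        while True:
            GPIO.output(BRC_PIN, GPIO.LOW)    # pulse low for one second...
            time.sleep(1)
            GPIO.output(BRC_PIN, GPIO.HIGH)
            time.sleep(59)                    # ...once per minute, well inside the 5 min window
    finally:
        GPIO.cleanup()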
I am building a 4-wheel rover and am using PID DC motor control for the back set of wheels. The feedback for the speed is provided by Hall effect sensors, with two pulses per rotation. This means that at 100% duty the sensor returns a 240 Hz signal (4.2 ms between pulses), and at the lowest duty at which the wheels still turn (20% duty) the sensor returns a 14.33 Hz signal (74.2 ms between pulses). I'm using an STM32 Cortex-M3 part for the controller and a timer input capture to convert the Hall effect signal to a frequency. This is done by comparing two capture samples in an ISR that triggers on each rising edge. The issue I have is that the PID loop runs at 50 Hz, so when the motor is running at low speed and a new capture has not happened yet, the loop reuses the last calculated value. That is fine, but I could also find myself in the situation where the motor has stopped spinning and I am unable to update the captured frequency because the second pulse never arrives. I'm just wondering if there is a sensible way of handling this? Thanks.
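The kind of logic I am considering, written out as a small sketch (the language is beside the point, it is just the timeout idea); the threshold comes from the ~74 ms period at the lowest duty plus margin, and the 1 s "stopped" cutoff is an arbitrary choice of mine:

    MAX_PERIOD_S = 0.100   # longest expected pulse gap at minimum speed (~74 ms) + margin

    def wheel_frequency(last_period_s, time_since_last_edge_s):
        """Frequency to feed the PID, treating a stale capture as 'slower than last measured'."""
        if time_since_last_edge_s > MAX_PERIOD_S:
            # no edge for longer than the slowest valid period: bound the estimate
            # by the elapsed time instead of reusing the stale capture
            return 1.0 / time_since_last_edge_s if time_since_last_edge_s < 1.0 else 0.0
        return 1.0 / last_period_s

    print(wheel_frequency(0.0742, 0.010))   # fresh capture at 20% duty -> ~13.5 Hz
    print(wheel_frequency(0.0742, 0.300))   # stale: report at most ~3.3 Hz
    print(wheel_frequency(0.0742, 2.000))   # effectively stopped -> 0 Hz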
I'm trying to work out the wheel force and torque required for a TWIP robot, so that I can size a motor. I've calculated a maximum traction force of $\small6.51\mathrm N$. My understanding is that a tractive force at the wheels of up to and including $\small6.51\mathrm N$ can be applied to drive the robot without the wheels slipping. This would give the robot a maximum acceleration of $\small3.92\mathrm{ms}^{-2}$.

So, assuming I wanted to achieve the maximum driving force, and hence the maximum acceleration (assuming the pendulum is balanced), I would need a wheel force of $\small6.51\mathrm N$. There is also resistance against the direction of motion/driving force of the robot, in the form of rolling resistance and aerodynamic drag. From what I've read, rolling resistance (a type of static friction) is a resistive moment on wheel rotation, which needs to be overcome by the wheel torque in order to produce acceleration. I've calculated a rolling resistance value of $\small0.16\mathrm N$. The robot is intended for indoor use, but in case I take it outside I calculated an aerodynamic drag value of $\small0.14\mathrm N$, using an average wind flow velocity of $\small3\frac{\mathrm m}{\mathrm s}$ for my location.

Taking these resistive forces into account, I calculated a wheel force of $\small6.81\mathrm N$ and an axle torque of $\small0.20\mathrm{Nm}$ for maximum acceleration of the robot. I've also considered the maximum torque exerted by the pendulum, i.e. when its pitch angle/angle of inclination is at +/- 90° from the stable vertical position at 0°. This torque needs to be matched (or exceeded) by the torque/moment exerted about the pivot by the wheel force accelerating the robot horizontally. The wheel force and axle torque required to stabilise the pendulum I've calculated as $\small13.7340\mathrm N$ and $\small0.4120\mathrm{Nm}$ respectively, and an axle torque of $\small\approx0.2\mathrm{Nm}$ for one motor. I ignored rolling resistance and aerodynamic drag for these calculations.

The motor will be a brushed DC motor, so I think $\small0.2\mathrm{Nm}$ should be 25% or less of the motor's stall torque. Can you please tell me if this is correct?
Here are my calculations and FBD:

Given: mass of robot $1.66\,\mathrm{kg}$; weight of robot $16.28\,\mathrm N$; number of wheels $2$; wheel radius $0.03\,\mathrm m$; mass of pendulum $1.4\,\mathrm{kg}$; distance from axle to pendulum COM $0.2575\,\mathrm m$.

Maximum tractive force ($F_{t(max)}$: maximum tractive force, $\mu$: coefficient of friction, $N$: normal force at wheel):
$$F_{t(max)}=\mu N=(0.4)(16.28\,\mathrm N)=6.51\,\mathrm N$$

Maximum acceleration of robot ($m$: mass of robot):
$$a_{r(max)}=\frac{F_{t(max)}}{m}=\frac{6.51\,\mathrm N}{1.66\,\mathrm{kg}}=3.92\,\mathrm{m\,s^{-2}}$$

Rolling resistance force ($C_{rr}$: rolling resistance coefficient):
$$F_{rr}=C_{rr}N=(0.01)(16.28\,\mathrm N)=0.16\,\mathrm N$$

Drag resistance force ($C_d$: drag coefficient, $\rho$: mass density of fluid, $v$: flow velocity of fluid relative to object, $A$: projected frontal area of object):
$$F_d=C_d\left(\frac{\rho v^2}{2}\right)A=1.28\left(\frac{1.2\,\mathrm{\tfrac{kg}{m^3}}\,\left(3\,\mathrm{\tfrac{m}{s}}\right)^2}{2}\right)0.06\,\mathrm m^2=0.14\,\mathrm N$$

Wheel force/tractive force for maximum acceleration of robot ($F_t$: tractive force, $F_w$: wheel force):
$$F_t-F_{rr}-F_d=ma_{r(max)}\;\Rightarrow\;F_t=(1.66\,\mathrm{kg})(3.92\,\mathrm{m\,s^{-2}})+0.16\,\mathrm N+0.14\,\mathrm N=6.81\,\mathrm N$$
or equivalently
$$F_w=F_{t(max)}+F_{rr}+F_d=6.51\,\mathrm N+0.16\,\mathrm N+0.14\,\mathrm N=6.81\,\mathrm N$$

Axle/wheel torque for maximum acceleration of robot ($r$: wheel radius, i.e. lever arm length):
$$T_a=F_w r=(6.81\,\mathrm N)(0.03\,\mathrm m)=0.20\,\mathrm{Nm}$$

Maximum torque exerted by pendulum ($F_p$: force applied to pendulum, $r$: distance from axle to pendulum COM, the lever arm at $\pm90°$):
$$T_{p(max)}=F_p r=(1.4\,\mathrm{kg}\times 9.81)(0.2575\,\mathrm m)=3.5365\,\mathrm{Nm}$$

Wheel force to stabilise pendulum:
$$F_w=\frac{T_{p(max)}}{r}=\frac{3.5365\,\mathrm{Nm}}{0.2575\,\mathrm m}=13.7340\,\mathrm N$$

Axle/wheel torque to stabilise pendulum ($r$: wheel radius):
$$T_a=F_w r=(13.7340\,\mathrm N)(0.03\,\mathrm m)=0.4120\,\mathrm{Nm}$$
$$T_{a(one\,motor)}=\frac{0.4120\,\mathrm{Nm}}{2}=0.2060\,\mathrm{Nm}\approx 0.2\,\mathrm{Nm}$$

FBD
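The sizing arithmetic behind my last question, in one small sketch; the 25% rule of thumb is an assumption, and rolling resistance and drag are ignored for the balancing case, as above:

    g = 9.81
    wheel_radius = 0.03                  # m
    pend_mass, pend_arm = 1.4, 0.2575    # kg, m

    torque_pendulum = pend_mass * g * pend_arm   # worst case, pendulum horizontal
    wheel_force = torque_pendulum / pend_arm     # wheel force to match it about the pivot
    axle_torque = wheel_force * wheel_radius
    per_motor = axle_torque / 2

    print("pendulum torque  : %.4f Nm" % torque_pendulum)                    # ~3.54 Nm
    print("axle torque      : %.4f Nm (%.4f per motor)" % (axle_torque, per_motor))
    print("stall torque req.: >= %.2f Nm per motor (25%% rule)" % (per_motor / 0.25))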
How can I set up the C++ IDE CLion to display documentation and auto-completion correctly when working with ROS?