I have a GPS module and an IMU (gyro, accelerometer and magnetometer) and I need to build an autonomous navigation system for a quadcopter. It must know its position at any time so that it can track a predefined path. I know that, in order to improve precision, I need to merge both sensors' data through a Kalman Filter (or any other technique for that matter; the thing is that the Kalman Filter is way more common according to my research). The problem is that I am seriously stuck, and I know this might be something very simple, but I don't seem to find a solution or at least the answer to some of the most basic questions.

As a start, I know how to get the position from the accelerometer readings. I have some filters that help eliminate noise and minimize the integration errors. I also have the GPS readings in latitude and longitude. The first question is: during sensor fusion, how can I make both measurements compatible? The latitude and longitude from the GPS won't simply mix with the displacement given by the accelerometer, so what is the starting point for all of this? Should I calculate the displacement from the GPS readings, or should I assume a starting latitude and longitude and then update it with the accelerometer prior to applying the filter?

I once developed a simple Kalman Filter in which I could plug the new reading values to obtain the next estimated position of a two-wheeled car. Now I have two sources of input. How should I merge the two together? Will the filter have two inputs, or should I find a function that will somehow get the best estimate (average, maybe?) from the accelerometer and GPS?

I am really lost here. Do you guys have any examples of code that I could use to learn? It is really easy to find articles full of boxes with arrows pointing the direction in which data must flow, and some really long equations that start to get confusing very soon, such as those presented in this article: http://isas.uka.de/Material/Samba-Papierkorb/vorl2014_15/SI/Terejanu_tutorialUKF.pdf (I have no problems with equations, seriously), but I have never seen a real-life example of such an implementation. Any help on this topic would be deeply appreciated. Thank you very much.
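For illustration only, here is a minimal sketch of the usual pattern (all constants and the simple per-axis filter structure are assumptions, not a recommendation of any specific library): take the first GPS fix as a local origin, convert later latitude/longitude readings to metres with an equirectangular approximation, and run a small Kalman filter per axis in which the accelerometer drives the prediction step at a high rate and each GPS fix drives the correction step at a low rate.

#include <cmath>

const double kPi = 3.141592653589793;

// Convert a GPS fix to local x/y metres relative to a reference fix
// (equirectangular approximation, adequate over short distances).
struct LocalXY { double x, y; };
LocalXY gpsToLocal(double lat, double lon, double lat0, double lon0) {
    const double R = 6371000.0;                 // mean Earth radius [m]
    const double d2r = kPi / 180.0;
    LocalXY p;
    p.x = R * (lon - lon0) * d2r * std::cos(lat0 * d2r);  // east
    p.y = R * (lat - lat0) * d2r;                          // north
    return p;
}

// 1-D constant-acceleration Kalman filter for one axis:
// state = [position, velocity], accelerometer as control input, GPS as measurement.
struct Axis1D {
    double pos = 0, vel = 0;        // state estimate
    double P[2][2] = {{1,0},{0,1}}; // state covariance
    double qa = 0.5;                // accelerometer noise (assumed) [m/s^2]
    double rGps = 4.0;              // GPS position variance (assumed) [m^2]

    void predict(double a, double dt) {           // accelerometer step (high rate)
        pos += vel * dt + 0.5 * a * dt * dt;
        vel += a * dt;
        double q = qa * qa;
        P[0][0] += dt * (P[1][0] + P[0][1]) + dt*dt*P[1][1] + 0.25*q*dt*dt*dt*dt;
        P[0][1] += dt * P[1][1] + 0.5*q*dt*dt*dt;
        P[1][0] += dt * P[1][1] + 0.5*q*dt*dt*dt;
        P[1][1] += q * dt * dt;
    }
    void correctGps(double zPos) {                // GPS step (low rate)
        double S = P[0][0] + rGps;                // innovation variance
        double k0 = P[0][0] / S, k1 = P[1][0] / S;
        double innov = zPos - pos;
        pos += k0 * innov;
        vel += k1 * innov;
        double p00 = P[0][0], p01 = P[0][1];
        P[0][0] -= k0 * p00;  P[0][1] -= k0 * p01;
        P[1][0] -= k1 * p00;  P[1][1] -= k1 * p01;
    }
};

In this structure the filter has one prediction input (the accelerometer, integrated in the world frame) and one measurement input (the GPS converted to metres); there is no need to average the two sources by hand, because the Kalman gain does the weighting.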
I hope you can help me with my project. I'm using a skid-steering wheeled mobile robot for autonomous navigation and I'd like to find a way to perform path reconstruction in Matlab. By using only the robot encoders (installed on the robot) and the yaw rate information (which comes from a very accurate IMU sensor mounted on the robot frame), I can successfully do the path reconstruction. (I'm using an XBOW-300CC sensor.)

The problem is that I would like to try to reconstruct the path by using only the IMU yaw rate and the IMU acceleration values for the X and Y axes. I'm able to obtain velocity and distance by integrating the IMU acceleration values twice, but my problem is that I don't know how to use this data. Do I have to use a rotation matrix to pass from the IMU frame to the robot frame coordinates? I'm asking this because I use a rotation matrix for the encoder values, which come from the robot encoders.

At the moment, I use these equations for the robot encoders and the IMU yaw rate:

tetha(i) = (yaw(i)+yaw(i-1))/2*(encoder(i)-encoder(i-1)) + tetha(i-1); % trapezoidal integral
Rx = [1 0 0; 0 cos(-roll_angle(i)) -sin(-roll_angle(i)); 0 sin(-roll_angle(i)) cos(-roll_angle(i))];
Ry = [cos(-pitch_angle(i)) 0 sin(-pitch_angle(i)); 0 1 0; -sin(-pitch_angle(i)) 0 cos(-pitch_angle(i))];
Rz = [cos(-tetha(i)) -sin(-tetha(i)) 0; sin(-tetha(i)) cos(-tetha(i)) 0; 0 0 1];
R2 = Rz*Ry*Rx;
disp = R2 * [encoder_displacement(i) 0 0]';
X_r(i) = disp(1);
Y_r(i) = disp(2);
Z_r(i) = disp(3);
X(i) = x0 + sum(X_r(1:i));
Y(i) = y0 + sum(Y_r(1:i));
Z(i) = z0 + sum(Z_r(1:i));

Do I still have to use the R2 matrix? Thank you a lot.
I'm really in doubt whether it is proper to ask this question here, so I apologize if it is not; I'll delete it. I have a Roomba robot which has worked for me for more than three years, and now, while it is working, it produces some strange sounds, so I've decided to clean it thoroughly. But when I disassembled it down to this point, I got stuck with these sorts of glass things (marked with the red rectangles in the picture). They are really filthy from the inside and I cannot figure out how to clean them. Does anyone know how one can remove dust from the inside of these things? Maybe there are some Roomba creators here. Thanks in advance.
I'm a CS student trying to implement a clustering algorithm that would work for a set of robots in an indoor controlled environment. I'm still starting out in robotics and don't have much experience in figuring out what will work together. My plan is to get 6 of these Zumo robots and plug in a wifi module like the Wifi shield. Then, I would use this for inter-robot communication and to execute my algorithm. My question: can the wifi module just be plugged in, and would it work? If not, how can I go about achieving this task? I see lots of Arduino boards with different names and I'm not sure which works with which, and whether they can be plugged in. Any help would be appreciated.
Is it safe to supply 5V through the 5V pin of an Arduino Uno R3 while the USB cable is inserted? I have ESCs connected to it which aren't likely to start otherwise. The 5V and GND are coming from the BEC circuit of a connected ESC. Please help me. Thanks.
As in this video: https://www.youtube.com/watch?v=qce5Vguj5Jg In this new version (did not see the learning part in the past versions), with three to four trials, Cubli can learn to balance on a new surface.
We are doing some experiments on real-time representation of a sensor position on a TV. In these experiments, we use sensors to collect real-time 3D position at 250 Hz and a TV to display the sensor position at 60 Hz. We use MATLAB and C++ for programming, with OpenGL as the display platform. In the program, every iteration displays the data on the TV by erasing and drawing a circle (the object, which represents the real-time position on the display). With this program I collect only 60 points and lose the other 190 points every second because, I think, the refresh rate of the TV is 60 Hz.

I have gone through the thread "How can I control a fast (200Hz) realtime system with a slow (30Hz) system?", but I don't understand how to implement two loops at 200 Hz and 30 Hz. My question is: how can we implement this in MATLAB/C++, so that I can store all 250 sensor samples per second while still drawing 60 points per second for real-time display on the TV? If you can help me with pseudocode, I'd appreciate it. Thank you in advance.

P.S. Code:

% Display main window using Psychtoolbox
win = Screen(2, 'OpenWindow', [1 1 1]);

while (1)
    % Data collection at 250Hz
    Error = calllib('ATC3DG64', 'GetSynchronousRecord', hex2dec('ffff'), pRecord, 4*numBoards*64);
    errorHandler(Error);
    Record = get(pRecord, 'Value');

    % sensor number
    count = 2;
    evalc(['tempPos(1, count) =' 'Record.x' num2str(count - 1)]);
    evalc(['tempPos(2, count) =' 'Record.y' num2str(count - 1)]);
    evalc(['tempPos(3, count) =' 'Record.z' num2str(count - 1)]);

    % Record X and Y position of sensor 2
    if SensorNumAttached(count)
        % Real-time position, minus the world origin, so that the real-time
        % position is displayed on the TV
        table1(count,1) = (2.54*tempPos(2,count)) - X_world_origin;
        table1(count,2) = (2.54*tempPos(3,count)) - Y_world_origin;
    end

    % Conversion for the pixel-to-centimeter ratio
    x_center_new = x_center - (x_ratio * table1(2,1));
    y_center_new = y_center - (y_ratio * table1(2,2));

    % Conversion for displaying the circle on the TV, which represents the
    % real-time position of the sensor
    x1 = round(x_center_new - R_num_data);
    y1 = round(y_center_new - R_num_data);
    x2 = round(x1 + 2*R_num_data);
    y2 = round(y1 + 2*R_num_data);

    % Display commands for the TV
    Screen('FrameOval', win, [255 0 0], [x1 y1 x2 y2]);
    Screen('Flip', win);
end
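For illustration, a minimal C++ sketch of the usual two-rate structure (the sensor read and the drawing call are placeholders, not Psychtoolbox/ATC3DG64 code): a fast thread acquires and logs every sample at 250 Hz, while the 60 Hz display loop only reads the most recent sample each time it redraws, so acquisition is never throttled by the display rate.

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

struct Sample { double x, y, z; };

std::vector<Sample> log_;          // every sample, kept for later analysis
Sample latest_{};                  // most recent sample, shared with the display
std::mutex m_;
std::atomic<bool> running_{true};

void acquisitionLoop() {           // runs at 250 Hz
    using namespace std::chrono;
    auto next = steady_clock::now();
    while (running_) {
        Sample s = /* read the sensor here */ Sample{0, 0, 0};
        {
            std::lock_guard<std::mutex> lock(m_);
            log_.push_back(s);     // keep all 250 samples per second
            latest_ = s;
        }
        next += microseconds(4000);            // 1/250 s
        std::this_thread::sleep_until(next);
    }
}

void displayLoop() {               // runs at ~60 Hz (or vsync-driven)
    using namespace std::chrono;
    auto next = steady_clock::now();
    while (running_) {
        Sample s;
        {
            std::lock_guard<std::mutex> lock(m_);
            s = latest_;           // draw only the newest position
        }
        // draw the circle at (s.x, s.y) and flip the screen here
        next += microseconds(16667);           // ~1/60 s
        std::this_thread::sleep_until(next);
    }
}

int main() {
    std::thread acq(acquisitionLoop);
    std::thread disp(displayLoop);
    std::this_thread::sleep_for(std::chrono::seconds(10));  // demo duration
    running_ = false;
    acq.join();
    disp.join();
}

In MATLAB the same idea can be approximated with a timer object for the display while the main loop keeps collecting, but the two-thread version above is the cleanest statement of the pattern.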
What robotic leg technologies are available? I'm sorry if this is a basic question; I am a software developer looking to get into the field of robotics. I am particularly interested in robotic legs similar to those used on Boston Dynamics' ATLAS robot. What is the mechanism that allows it to move its joints so quickly? In many videos of Boston Dynamics robots they make an engine sound (presumably because they use an engine), but I can't find any details on the configuration that is being used.
I have recently purchased my first ever servo, a cheap unbranded Chinese MG996R servo, for £3.20 on eBay. I am using it in conjunction with an Arduino servo shield (see below). As soon as it arrived, before even plugging it in, I unscrewed the back and ensured that it had the shorter PCB, rather than the full-length PCB found in MG995 servos. So, it seems to be a reasonable facsimile of a bona-fide MG996R.

I read somewhere (shame I lost the link) that they have a limited life, due to the resistive arc in the potentiometer wearing out. So, as a test of its durability, I uploaded the following code to the Arduino, which just constantly sweeps from 0° to 180° and back to 0°, and left it running for about 10 to 15 minutes, in order to perform a very simple soak test.

#include <Servo.h>

const byte servo1Pin = 12;

Servo servo1;  // create servo object to control a servo
               // twelve servo objects can be created on most boards

int pos = 0;   // variable to store the servo position

void setup() {
  servo1.attach(servo1Pin);  // attaches the servo on pin 12 to the servo object
  Serial.begin(9600);
}

void loop() {
  pos = 0;
  servo1.write(pos);    // tell servo to go to position in variable 'pos'
  Serial.println(pos);
  delay(1000);          // wait 1 s for the servo to reach the position

  pos = 180;
  servo1.write(pos);    // tell servo to go to position in variable 'pos'
  Serial.println(pos);
  delay(1000);          // wait 1 s for the servo to reach the position
}

When I returned, the servo was just making a grinding noise and no longer sweeping; rather, it seemed to be stuck in the 0° position (or the 180°). I picked the servo up and, whilst not hot, it was certainly quite warm. A quick sniff also revealed the smell of hot, burning motor windings. After switching off the external power supply and allowing it to cool, the servo began to work again. However, the same issue occurred a little while later. Again, after allowing it to rest, upon re-powering, the servo continues to work. However, I am reluctant to continue with the soak test, as I don't really want to burn the motor out just yet.

Is there a common "no-no" of not making servos sweep from extreme to extreme, such that one should "play nice" and just perform 60° sweeps, or is the cheapness of the servo the issue here? I am powering the servo from an external bench supply, capable of 3 A, so a lack of current is not the issue. Please note that I also have a follow-up question, Should a MG996R Servo's extreme position change over time?
This question is a follow-on from my previous question, Overheating/Jamming MG996 servo. I have recently purchased my first ever servo, a cheap unbranded Chinese MG996R servo, for £3.20 on eBay. After mounting the servo horn and the bracket, I realised that I had not mounted the horn in an exactly 0° orientation; rather, the angle between the bracket and the servo side was approximately 20°. However, after switching the servo on and off a couple of times, each time allowing the servo to perform, say, about 10 sweeps, I quickly noted that the servo's extreme positions were changing over time: the initial extremes and the extremes after about 5 on/off cycles differed by about 15°, so that now, at 0° and 180°, the bracket is parallel with the body of the servo. I was quite surprised at this, as I had assumed that the 0° and 180° positions would be fixed, and would not change over time or vary each time the servo was switched on and off. Seeing as there should be a stop peg on the gear connected to the potentiometer inside, how is this even possible?
I am just starting to explore an idea and I am somewhat of a novice in robotics. I am looking to position a mobile robot as accurately as possible on a concrete slab. This would be during new construction of a building and probably not have many walls or other vertical points for reference. the basic premise behind the robot is to print floor plans straight on to the slab. I will have access to the BIM (building information models, CAD, Revit) files of the building. I want the robot to position itself as accurately as possible on the blank slab using the BIM files as a map. What would be the best avenue to track and adjust positioning of the robot in the open space of a slab? Low frequency, Lidar, wifi? Lastly what sensors would be best?
Including Q, R, and initial states of x and P.
I'm trying to use this motor (RCTimer 1806-1450KV Multi-Rotor BLDC Motor) with this ESC (RCTimer Mini ESC16A OPTO, SimonK firmware, SN16A) and an Arduino Uno R3. Typically I use the PWM pins (3, 9, 10, 11, because of their similar signal frequency) when driving an ESC from an Arduino, but I can't control the motor even when I use the Servo library, and I've also tried sample code from different websites. The ESC gives a beep pattern I can't understand: sometimes it's high-low-high, sometimes a single high tone for about 4 seconds, and I can't find anything about it on Google (or on websites in my country). Sometimes the motor spins periodically for a short time at a certain throttle value, but I don't know why. Some sites recommend flashing the firmware or using a bootloader, but I'd prefer to stick to Arduino PWM or the Servo library. So, please, would you help me? Thank you for reading my thread.
I'm familiar with using PID to perform closed loop control when there is a single output and a single error signal for how well the output is achieving the desired set-point. Suppose, however, there are multiple control loops, each with one output and one error signal, but the loops are not fully independent. In particular, when one loop increases its actuator signal, this changes the impact of the output from other loops in the system. For a concrete example, imagine a voltage source in series with a resistor, applying a voltage across a system of six adjustable resistors in parallel. We can measure the current through each resistor and we want to control the current of each resistor independently by adjusting the resistance. Of course, the trick here is that when you adjust one resistor's resistance, it changes the overall resistance of the parallel set, which means it changes the voltage drop due to the divider with the voltage source's resistance and hence changes the current through the other resistors. Now, clearly we have an ideal model for this system, so we can predict what resistance we should use for all resistors simultaneously by solving a set of linear equations. However, the whole point of closed loop control is that we want to correct for various unknown errors/biases in the system that deviate from our ideal model. The question then: what's a good way to implement closed loop control when you have a model with this kind of cross-coupling?
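As an illustration of one common approach (a sketch under assumed source values, not a prescription): invert the nominal model to get a feedforward command for each channel, and let a slow per-channel integrator absorb whatever the model gets wrong, including the residual coupling.

// Coupled parallel-resistor example: source Vs with series resistance Rs,
// six adjustable conductances G[i] = 1/R[i], currents I[i] = G[i] * Vbus,
// and Vbus = Vs - Rs * sum(I).  Feedforward from the nominal model plus a
// slow per-channel integrator that absorbs model error and coupling.
#include <cstddef>

const double Vs = 10.0;     // assumed source voltage
const double Rs = 1.0;      // assumed source resistance
const double Ki = 0.05;     // integral gain (tune)
const std::size_t N = 6;

double trim[N] = {0};       // integral correction per channel (conductance units)

void update(const double Isp[N], const double Imeas[N], double dt, double Gcmd[N]) {
    double ItotSp = 0;
    for (std::size_t i = 0; i < N; ++i) ItotSp += Isp[i];
    const double VbusPred = Vs - Rs * ItotSp;         // nominal bus voltage at the setpoint
    for (std::size_t i = 0; i < N; ++i) {
        trim[i] += Ki * (Isp[i] - Imeas[i]) * dt;     // slow integral action
        Gcmd[i] = Isp[i] / VbusPred + trim[i];        // model feedforward + trim
        // the commanded resistance would be 1.0 / Gcmd[i]
    }
}

Because the feedforward already accounts for the shared voltage drop, the per-channel loops only have to correct small residuals, which keeps the cross-coupling from destabilizing them; a full multivariable design (e.g. decoupling the plant's gain matrix) is the more formal version of the same idea.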
Does anyone out there know where I can get the original iRobot Create? The company no longer sells them. It was only 2 years ago that it was sold. It is white and its value is the physical design, that it has a large exposed deck for mounting armatures. It is preprogrammed to operate in different configurations, eg. spinning, figure 8, following the outline of a wall, etc. I have an ongoing art project using this model and as they are in operation everyday, I will eventually need to replace them with new ones. To see a video of one of my projects you can go to https://vimeo.com/119486779 I currently have it working in a spinning motion.
I am trying to build a map containing lamps as landmarks. I drive around with a robot and a monocular camera looking at the ceiling. The first step is to detect the edges of each observed rectangular lamp and save the position in pixels, along with the current position of the robot from odometry. After the lamp disappears from the field of view, there is enough baseline to do a 3D reconstruction based on structure from motion. Once this reconstruction is done, there will be uncertainty in the position of the lamps that can be modelled by a covariance. Now imagine the robot has been driving for a while: its own position estimated from odometry will also have a relatively high uncertainty. How can I integrate all of these uncertainties together in the final covariance matrix of the position of each lamp?

If I understand well, there would be the following sources of uncertainty:

- noise from the camera
- an inaccurate camera calibration matrix
- an inaccurate result from the optimization
- drift in the odometry

My goal is to manually do loop closure using, for example, g2o (graph optimization), and for that I think correct covariances are needed for each point.
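For reference, the standard first-order way to combine these two uncertainties (assuming the odometry error and the reconstruction error are independent): if the lamp's world position is obtained by composing the robot pose $x_r$ with the lamp position expressed in the robot frame, $x_l^W = f(x_r, x_l^R)$, then

$$
\Sigma_l^W \approx J_r\,\Sigma_r\,J_r^T + J_l\,\Sigma_l^R\,J_l^T,
\qquad
J_r = \frac{\partial f}{\partial x_r},\quad
J_l = \frac{\partial f}{\partial x_l^R}.
$$

The camera noise and calibration error are usually folded into $\Sigma_l^R$ by the structure-from-motion optimization, while $\Sigma_r$ comes from the odometry model; g2o then expects the information matrix, i.e. the inverse of this covariance, on each edge.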
I have extremely limited knowledge in the general topic of robotics and therefore this question is a shot in the dark. Please let me know if the topic is unsuitable for the site. I am interested in creating a device that would generate a touchscreen tap. In a nutshell, I would like to replicate on a touchscreen the automated mouse functionality you can obtain with software like AutoHotKey in Windows. Since, without jailbreaking the phone, a software solution is basically impossible, it occurs that one of the first components would be a physical device that simulates a tap. Do any options for such a component exist? I recognize that there are philosophical implications with creating such a device. I am assuming the entire conversation to be theoretical and solely related to the hardware design. Thanks, Alex
I have designed a robot to perform tasks on farms, but the problem now is that I'm not sure of the best way to supply continuous power to my robot. All the motors are rated at 12V, and only the Arduino and a few sensors work at 5V or less. Can I continuously charge a 12V lead-acid battery with an adapter (which comes with the battery) plugged into the AC output of the generator while the robot is operating? Do I have to worry about overcharging the battery? Or should I use the generator's DC output, which can supply 12V and up to 8.3A? Or are there any other suggestions?

Some information about the adapter, as stated on the package:
1. Built-in over-charge protection device.
2. Built-in thermal protection device.
3. Output: 6V/12V, 2A.

This is the generator that I have: http://global.yamaha-motor.com/business/pp/generator/220v-60hz/0-1/et950/

This is my first robot; it is quite big and requires a lot of electrical/electronic knowledge to power it. I do not have a lot of experience in this field, so any feedback is greatly appreciated.
I have a system with two inputs (throttle and brake) and one output (speed). How does one design a controller in such a way that the two outputs of the controller (throttle and brake) are never both greater than zero (so that it doesn't accelerate and brake simultaneously)? Thanks
I have some sensors attached to an Arduino Uno R3, plus an ESC. I start the motor attached to the ESC through the Arduino with no USB connected to the laptop, and it starts correctly. I have to power the Arduino from a non-USB supply for the ESC to arm correctly, which means that my motor doesn't start with the USB connected to the PC. Now, how can I get the sensor values to a laptop? If I connect the USB to the PC after starting the motor, will this work?
I want to create a virtual quadcopter model, but I am struggling to come up with a satisfying model for the brushless motors & props. Let's take an example, based on the great eCalc tool: Let's say I want to know how much current is consumed by the motor in a hovering state. I know the mass of the quad (1500g), so I can easily compute the thrust produced by each motor: Thrust = 1.5 * 9.81 / 4 = 3.68 N per motor Thrust is produced by moving a mass of air at an average speed of V: Thrust = 0.5 * rho * A * V² Where rho (air density) is 1.225kg/m3 and A (propeller disk area) is PI * Radius² = 0.073m² (12" props). So I can compute V: V = sqrt(Thrust / 0.5 / rho / A) = 9.07 m/s All right, now I can calculate the aerodynamic power created by the propeller: P = Thrust * V = 3.68 * 9.07 = 33.4 W All right, now I can calculate the mechanical power actually produced by the motor. I use the PConst efficiency term from eCalc: Pmec = Paero * PConst = 33.4 * 1.18 = 39.4W Here, eCalc predicts 37.2W. It's not too far from my number, I imagine they use more sophisticated hypotheses... Fair enough. From this post, I know that this power is also equal to: Pmec = (Vin - Rm * Iin) * (Iin - Io) Where I know Rm (0.08 Ohms) and Io (0.9 A). So, finally, my question: How do you calculate Vin and Iin from here? Of course, if I knew the rotation speed of the engine I could get Vin from: n = Kv * Vin Where Kv = 680 rpm/V. But unfortunately I don't know the rotation speed... (Note that Vin is assumed to be averaged from the pulse-width-modulated output produced by the ESC) Thanks for your help!
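For what it's worth, here is a minimal numeric sketch of one way to close the system (the propeller power coefficient Cp below is an assumed value, and the whole calculation is an illustration, not eCalc's method): estimate the rotation speed from the propeller power law Pmec = Cp * rho * n^3 * D^5, then apply the usual brushless relations tau = Kt * (Iin - I0) with Kt = 60 / (2*pi*Kv), and Vin = n/Kv + Rm*Iin (n in rpm, Kv in rpm/V).

#include <cmath>
#include <cstdio>

int main() {
    const double kPi  = 3.141592653589793;
    // Assumed values for illustration (12" prop, constants from the question)
    const double rho  = 1.225;     // air density [kg/m^3]
    const double D    = 0.3048;    // prop diameter [m]
    const double Cp   = 0.04;      // prop power coefficient (assumed)
    const double Pmec = 39.4;      // mechanical power from the question [W]
    const double Kv   = 680.0;     // motor velocity constant [rpm/V]
    const double Rm   = 0.08;      // winding resistance [ohm]
    const double I0   = 0.9;       // no-load current [A]

    // 1) Rotation speed from the propeller power law  Pmec = Cp * rho * n^3 * D^5
    double n_rps = std::cbrt(Pmec / (Cp * rho * std::pow(D, 5)));  // rev/s
    double n_rpm = n_rps * 60.0;
    double omega = 2.0 * kPi * n_rps;                              // rad/s

    // 2) Torque, then current from  tau = Kt * (Iin - I0),  Kt = 60 / (2*pi*Kv)
    double tau = Pmec / omega;
    double Kt  = 60.0 / (2.0 * kPi * Kv);
    double Iin = tau / Kt + I0;

    // 3) Voltage from back-EMF plus resistive drop:  Vin = n/Kv + Rm*Iin
    double Vin = n_rpm / Kv + Rm * Iin;

    std::printf("n = %.0f rpm, Iin = %.2f A, Vin = %.2f V\n", n_rpm, Iin, Vin);
}

With these assumed numbers this gives roughly 4000 rpm, 7.5 A and 6.5 V (the averaged, PWM-chopped voltage), which is self-consistent with Pmec = (Vin - Rm*Iin)(Iin - I0); the missing ingredient in the question is indeed the rotation speed, which has to come from a propeller model of some kind.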
Not sure if this has been asked, but there are lots of simulations of bipedal locomotion algorithms online, some of the evolutionary algorithms converge to very good solutions. So it seems to me that the algorithm part of bipedal locomotion is well-understood. If you can do well on simulations, you should be able to do it well in the real world. You can model delay and noise, you can model servo's response curve. What I don't understand is then why is it still difficult to make a walking robot? Even a robot like the Big Dog is rare.
I have built a mobile robot with several ultrasonic sensors to detect obstacles and an infrared sensor to track a line as a path. I have written a simple algorithm to follow the line, which works fine, but avoiding obstacles is a problem, because the robot doesn't know the layout of the path; even if it does move around the obstacle, it is not guaranteed that it will find the path line again (unless the line is perfectly straight). Therefore, I think I may need to use a path/motion planning algorithm, or find a way to store the layout of the path, so that the robot could predict where to move, get back to the path line and keep following it after overcoming an obstacle. I would like to hear suggestions or types of algorithms I should focus on for this specific problem. The picture might help in specifying the problem I'm facing. Thank you.
I am reading "Computer Vision: Models, Learning, and Inference", in which the author writes at several points (such as on pages 428-429) that although a matrix A seems to have n degrees of freedom, since it is ambiguous up to scale it only has n-1 degrees of freedom. Can anyone explain what this means? Why is one degree of freedom removed?
I'm trying to build a robot that can be sent into rooms/buildings and detect people, using NXT and/or Arduino. In addition to this, I would like to be able to view what my robot is "seeing" in real time on my PC as an infrared image. The sensors I've shortlisted for this are:

- Thermal Infrared NXT Sensor from Dexter Industries - £44
- RoBoard RM-G212 16X4 Thermal Array Sensor - £94
- Omron D6T MEMS Thermal IR Sensor - £31

I believe the RoBoard and Omron sensors are capable of thermography, so I was wondering if anyone here has experience with these sensors and can give me some advice. I was also thinking about using an idea from this project: www.robotc.net/blog/tag/dexter-industries. In this case I'd use the data read from the sensor to plot a graph showing different temperatures.
I am building a sumo-bot and our competitors have thin sticky tires, while we have wider and less sticky tires. The diameter is the same, and the gearbox/motor is the same. Who will win? PS: Sticky tires: https://www.pololu.com/product/694 & wide tires: https://www.pololu.com/product/62 Thanks!
I am trying to recharge my 12V lead-acid battery with a 12V DC motor. I am using the battery to power the robot when it climbs. When it descends, I notice that I don't need to apply a reverse voltage; the DC motor just backdrives instead. This can act as a generator to recharge the battery, am I right? I know that I need to step up the low voltage that is generated by the backdriven motor to the 12V needed to recharge the battery. This is the board that I think can do the job: https://www.pololu.com/product/799. Is this all I need to make it work? With this method, should I be concerned about the 3 stages of battery charging: bulk, absorption and float? Please advise. Any feedback is greatly appreciated.
I have a small mobile robot with a LidarLite laser range finder attached to a servo. As of now I have the range finder side-sweeping in a 30 degree arc, taking continuous distance readings to the side of the robot (perpendicular to the robot's forward motion). My goal is to have the robot drive roughly parallel to a wall, side-scanning the entire time, and create a 2D map of the wall it is moving past. The 2D topography map is created in post-processing (I use R for much of my data processing, but I don't know what is popular for this kind of work). From what I know of it, SLAM sounds like a great tool for what I want to do. But I have two issues:

1: I know my robot will not have a consistent speed, and I have no way to predict or measure the speed of my robot. So I have no way to estimate the odometry of the robot.

2: The robot will also move closer to and further from the wall as it proceeds down its path. So I cannot depend on a steady plane of travel for my robot.

So, given that I don't have any odometry data, and my relative distance to the wall changes over the course of a run, is it possible to use SLAM to create 2D maps? I'm looking into stitching algorithms that are used for other applications, and some of these can handle the variances in relative distance, but I was hoping SLAM or some other algorithm could be of use here.
Many drones already have external magnetometers. Unfortunately, the orientation of such a sensor is sometimes unknown. E.g. the sensor can be tilted 180° (pitch/roll) or X° in yaw. I was wondering whether one could calculate the rotation of the sensor relative to the vehicle using the accelerometer and gyroscope. Theoretically, the accelerometer yields a vector pointing down and can be used to establish the coordinate system. The discrepancy between the magnetometer and the gyroscope may then be used to calculate the correct orientation of the compass. Later, the compass should be used for yaw calculation. Below is the starting orientation of both sensors (just an example; the orientation of the compass can be anything). Does someone know a good way to figure out the rotation of the compass?
I am stuck adjusting the PID gains of my quadcopter. I can't adjust them in flight because it just gets out of control, so I am adjusting them while my quadcopter is tied to something. Is this method correct? Will the PID values required in free flight be different or the same? Please also suggest how to attach my quad to something for tuning.
static void set_default_param(DPMTTICParam& param)
{
    param.overlap   = 0.4;
    param.threshold = -0.5;
    param.lambda    = 10;
    param.num_cells = 8;
}
When should you use multiple separate batteries vs a single battery with multiple UBECs? I'm trying to design the power system for a small 2-wheeled robot. Aside from the 2 main drive motors, it also has to power an Arduino, a Raspberry Pi and a couple of small servos to actuate sensors.

- The motors are each rated for 6V with a peak stall current of 2.2A.
- The Arduino uses about 5V @ 100mA.
- The Raspberry Pi uses about 5V @ 700mA.
- The servos each use 6V and have a peak stall current of 1.2A.

So the theoretical max current draw would be 2.2*2 + 0.1 + 0.7 + 1.2*2 = 7.6A. Originally I was planning to use three separate LiPo batteries:

- one 12V with a step-down converter to power the main drive motors (6V at up to 4.4A peak)
- two 3.7V LiPos, each with a step-up converter (rated for 5V @ 3A), to handle the servos and logic separately

Then I discovered UBECs, which sound too good to be true: they seem to be both cheap (<$10) and efficient (>90%), and able to handle my exact voltage/current requirements. Should I instead use a single high-current 12V LiPo with three UBECs to independently power my drive motors, sensor motors and logic? Or will this still suffer from brown-outs and power irregularities if a motor draws too much current? What am I missing?
I'm working on a project for an autonomous vehicle, and I want to know what a confidence level means and how we can use confidence levels for vehicle detection in OpenCV.
When adjusting the PID for a quadcopter, how much motor speed is required before starting to tune the PID? Do we need to give enough offset speed that it cancels the weight? I am sure we can't start adjusting the PID with the motors initially at zero speed.
Given a set of robot joint angles (i.e. 7 DoF) $\textbf{q} = [q_1, ... , q_n]$, one can calculate the resulting end-effector pose (denoted as $\textbf{x}_\text{EEF}$) using the forward kinematic map. Let's consider the inverse problem now: calculating the possible joint configurations $\textbf{q}_i$ for a desired end-effector pose $\textbf{x}_\text{EEF,des}$. The inverse kinematics could potentially yield infinitely many solutions or (and here comes what I am interested in) no solution (meaning that the pose is not reachable for the robot). Is there a mathematical method to determine whether a pose in Cartesian space is reachable (maybe the rank of the Jacobian)? Furthermore, can we still find a reachability test in case we do have certain joint angle limitations?
In an autonomous mobile robot, we're planning on using digital servo motors to drive the wheels. Servo motors usually don't rotate continuously. However, they can be modified to do so based on many tutorials online which only mention modifying [analog] servo motors. My question is, can the same method(s) or any other ones be used to modify digital servo motors? Thanks
I am making a Robot goalie, the robot is supposed to detect whether a ball has been thrown in its direction , sense the direction of the ball and then stop it from entering the goal post. A webcam will be mounted on top of the goal post. The robot is required to only move horizontally (left or right), it shouldn't move forwards or backwards. The robot will have wheels, the image processing will be performed by raspberry pi which will then send the required information to a micro controller which will be responsible for moving the robot in the required direction(using servo motors). Which image processing algorithm will be the best to implement this scenario?
People at the RepRap 3d-printing project often mention CNC routers or CNC mills. Both kinds of machines almost always have a motorized spindle with stepper motors to move the spindle in the X, Y, and Z directions. What is the difference between a CNC router versus a CNC mill? (Is there a better place for this sort of question -- perhaps the Woodworking Stack Exchange?)
I'm designing my lawn mower robot, and I am at the perimeter stage. The electronics are done and work quite well; now comes the software. I need advice on how to deal with the problem of following the perimeter wire. I mean, once the robot is on the line, parallel to the line, that's relatively easy. But how do I manage the situation when the robot is driving around and approaches the line (wire)? I have two sensors, left and right, turned 45° with respect to the forward direction. The robot could arrive from any angle, so the signal amplitude read from the sensors could be almost anything, and I don't understand what to do in order to move the robot into the right position on the wire. What's the usual approach? The idea is the same as here: the wire runs all around the yard; on the mower there are 2 sensors, left and right, that sense the signal emitted by the wire, a 34 kHz square wave. The signal amplitude read by the sensors on the mower is about 2 V when a sensor is above the wire.
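As a sketch of one common approach (purely illustrative; the thresholds, gains and the motor interface are assumptions): drive until the wire signal becomes strong, rotate in place until the left and right readings are balanced, then follow the wire with simple differential steering on the left-minus-right difference.

#include <cmath>

enum class State { SEEK, ALIGN, FOLLOW };
struct MotorCmd { double left, right; };   // wheel speeds in [-1, 1]

// One control step: takes the two wire-sensor amplitudes [V] and the current
// state, returns the motor command.  Thresholds and gains are assumed values.
MotorCmd perimeterStep(double l, double r, State& state) {
    const double kNearWire = 1.0;   // amplitude meaning "wire is close" (assumed)
    const double kBalanced = 0.1;   // acceptable left/right imbalance (assumed)
    const double kTurnGain = 0.8;   // steering gain (tune)

    switch (state) {
    case State::SEEK:                       // drive straight until the wire is near
        if (l > kNearWire || r > kNearWire) state = State::ALIGN;
        return {0.5, 0.5};
    case State::ALIGN:                      // rotate in place until both sensors agree
        if (std::fabs(l - r) < kBalanced && l > kNearWire && r > kNearWire)
            state = State::FOLLOW;
        return {0.3, -0.3};
    case State::FOLLOW:                     // steer on the left-minus-right difference
    default: {
        double err = l - r;                 // > 0: wire lies more to the left
        return {0.5 - kTurnGain * err, 0.5 + kTurnGain * err};
    }
    }
}

The FOLLOW case is just a proportional controller on the sensor difference; the approach phase only has to get the robot close enough and roughly aligned for that controller to take over, which is why the arrival angle does not need to be known.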
I will have a 5 or 6 DOF arm built with Dynamixel or HerculeX smart servos. I need to move the gripper along a Cartesian trajectory, which I will calculate in my C++ application. I looked at ROS, but the learning curve is pretty steep and it looks like major overkill for this use case; I don't need a distributed system with all the complexity it brings. Preferably, I would like to call a standalone C++ library or libraries to get the arm actuated. What are my options? What will be the limitations of not using a full-blown robotics framework like ROS or YARP in this case?

EDIT

Here is how I would like to code it:

vector<Point> way_points;
vector<Pose> way_poses;
compute_Cartesian_trajectory(way_points, way_poses);  // my code
execute_Cartesian_trajectory(way_points, way_poses);  // library call

The last line can be spread over several library function calls and intermediate data structures, if needed. The end result should be the gripper physically following the Cartesian trajectory given by way_points and way_poses.
I am trying to run my 600 series Roomba in a large, open space (1700+ sq ft), and it does not recognize the large, open space and throws the Error 10 code. It does not recognize an edge of 2 1/2"-3" either; it will fall off the edge and become stuck. Any suggestions?
How do I determine which angle I can negate when gimbal lock occurs? As I understand it, gimbal lock removes one degree of freedom, but how do I determine which degree of freedom is removed when the element R[1][3] of a 3x3 rotation matrix has the value 1? Is it the roll, pitch or yaw that can be taken out of the equation?
I bought this drone frame: a Q450 glass fiber quadcopter frame, 450mm, from http://hobbyking.com/hobbyking/store/__49725__Q450_V3_Glass_Fiber_Quadcopter_Frame_450mm_Integrated_PCB_Version.html. I'm considering buying 4 AX-4005-650kv brushless quadcopter motors from http://hobbyking.com/hobbyking/store/__17922__AX_4005_650kv_Brushless_Quadcopter_Motor.html. Will these motors fit this frame? How can I determine which motors will fit the frame?
I am running ROS Indigo on Ubuntu 14.04. I am doing a mono camera calibration and trying to follow the camera calibration tutorial on the ROS Wiki. I give the following command:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/my_camera/image camera:=/my_camera

I get the following error:

ImportError: numpy.core.multiarray failed to import
Traceback (most recent call last):
  File "/opt/ros/indigo/lib/camera_calibration/cameracalibrator.py", line 47, in <module>
    import cv2
ImportError: numpy.core.multiarray failed to import

I thought it had to do with updating numpy and did a rosdep update, but it made no difference. What is a possible way to solve this problem?

UPDATE: I uninstalled and reinstalled ROS completely from scratch. I still get the same error. Should I be looking somewhere outside ROS?
Background: an introductory robotics competition for college freshmen; the bot has to open 8 jars (with two balls in each of them) in ten minutes and load the balls into a shooting mechanism. While working on this project, we hit a challenge: the jar does not open the way we originally intended. So we decided to get a rack-and-pinion mechanism and use it for unscrewing the lid. However, it is too large, and we are unable to fit the bot within the required dimensions.

The actual question: are there any wires or rigid columns/things which can contract ~1 cm when electricity is passed through them? And what would their price range be? Our budget is also on the list of constraints for the bot.

Edit: We can include a wire of length <1 m or a column of length <30 cm. Also, the wire only needs to contract more than 7 mm.
I have implemented an EKF on a mobile robot (x, y, theta coordinates), but now I have a problem. When I detect a landmark, I would like to correct my estimate only along a defined direction. As an example, if my robot is travelling on the plane and meets a landmark with orientation 0 degrees, I want to correct the position estimate only along the direction perpendicular to the landmark itself (i.e. 90 degrees). This is how I'm doing it for the position estimate:

1. I update the x_posterior as in the normal case, and store it in x_temp.
2. I calculate the error x_temp - x_prior.
3. I project this error vector onto the direction perpendicular to the landmark.
4. I add this projected quantity to x_prior.

This is working quite well, but how can I do the same for the covariance matrix? Basically, I want to shrink the covariance only along the direction perpendicular to the landmark. Thank you for your help.
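For illustration, one standard way to get exactly this behaviour (a sketch using a one-dimensional measurement model instead of the post-hoc projection described above): let $\mathbf{n}$ be the unit vector perpendicular to the landmark and treat the measurement as the scalar $z = \mathbf{n}^T \mathbf{p}$, where $\mathbf{p}$ is the position part of the state. With $H = [\,\mathbf{n}^T \;\; 0\,]$, the usual EKF update

$$
K = P H^T \left(H P H^T + R\right)^{-1}, \qquad
\hat{x} \leftarrow \hat{x} + K\,(z - H\hat{x}), \qquad
P \leftarrow (I - K H)\,P
$$

corrects the state and shrinks the covariance only along $\mathbf{n}$ (up to the cross-correlations already present in $P$), because $K$ is proportional to $P H^T$. This gives the same state correction as the projection trick while keeping the covariance update consistent.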
The joint velocities are constant and equal to $\dot{\theta}_{2} = 1$ and $\dot{\theta}_{1} = 1$. How do I compute the velocity of the end-effector when $\theta_{2} = \pi/2$ and $\theta_{1} = \pi/6$?
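For illustration, assuming this refers to a planar two-link (2R) arm with link lengths $l_1$ and $l_2$ (an assumption, since the manipulator is not specified), the end-effector velocity follows from the Jacobian:

$$
\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}
=
\begin{bmatrix}
-l_1 \sin\theta_1 - l_2 \sin(\theta_1+\theta_2) & -l_2 \sin(\theta_1+\theta_2) \\
 l_1 \cos\theta_1 + l_2 \cos(\theta_1+\theta_2) &  l_2 \cos(\theta_1+\theta_2)
\end{bmatrix}
\begin{bmatrix} \dot{\theta}_1 \\ \dot{\theta}_2 \end{bmatrix}.
$$

Evaluating at $\theta_1 = \pi/6$, $\theta_2 = \pi/2$, $\dot{\theta}_1 = \dot{\theta}_2 = 1$ (so $\theta_1+\theta_2 = 2\pi/3$) gives $\dot{x} = -\tfrac{l_1}{2} - \sqrt{3}\,l_2$ and $\dot{y} = \tfrac{\sqrt{3}}{2}\,l_1 - l_2$.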
The question I am asking is: what is the effect on stability of increasing or decreasing the sample time and the lag of the error signal fed to the PID? Does it help stability or degrade it?
I saw an old industrial robot (from 1988) whose end effector has 2 DC motors for the roll drive. After the roll drive, the yaw and pitch drives are connected, and each has its own DC motor, but the roll drive has two. Why is it built like this? Why not a single motor with higher torque? The roll, pitch and yaw motors all have the same spec: 4 DC motors in total, with two DC motors connected to a single shaft through gears in the roll drive.
I am attempting to use the data Underwater Simulator (UWSim) provides through the ROS interface to simulate a number of sensors that will be running on a physical aquatic robot. One of the sensors detects the current depth of the robot so, I want to simulate this with the data provided by the UWSim simulated pressure sensor. The Problem is that nowhere in the UWSim wiki or source code can I find any reference to what units UWSim uses to measure pressure. That being said, what units does UWSim use to measure pressure? Additionally, I would appreciate general information about what units UWSim uses for the data provided by it's virtual sensors.
I am trying to find a control model for the system of a balancing robot. The purpose of this project is control $\theta_2$ by the 2 motors in the wheels i.e. through the torque $τ$ I started with the dynamic equations and went to find the transfer function. Then I will find the PID gains that will control the robot and keep it balanced with the most optimum response. For the time being I am only interested in finding the transfer function for the dynamic model only. Here is an example: https://www.youtube.com/watch?v=FDSh_N2yJZk However, I am not sure of my result.Here are the free body diagrams for the wheels and the inverted pendulum (robot body) and calculations below: Dynamic Equations: $$ \begin{array}{lcr} m_1 \ddot{x}_1 = F_r - F_{12} & \rightarrow & (1)& \\ m_2 \ddot{x}_2 = F_{12} & \rightarrow & (2) &\\ J_1 \ddot{\theta}_1 = F_r r - \tau & \rightarrow & (3) &\\ J_2 \ddot{\theta}_2 = \tau - mgl\theta & \rightarrow & (4) & \mbox{(linearized pendulum)}\\ \end{array} $$ Kinematics: $$ x_1 = r\theta_1 \\ x_2 = r\theta_1 + l\theta_2 \\ $$ Equating (1) and (3): $$ m_1 \ddot{x}_1 + F_{12} = F_r \\ \frac{J_1 \ddot{\theta}_1}{r} + \frac{\tau}{r} = F_r $$ Yields: $$ \frac{J_1 \ddot{\theta}_1}{r} - m_1 \ddot{x}_1 + \frac{\tau}{r} = F_{12} \rightarrow (5) $$ Equating (5) with (2): $$ \frac{J_1 \ddot{\theta}_1}{r} - m_1 \ddot{x}_1 + \frac{\tau}{r} - m_2 \ddot{x}_2 = 0 \rightarrow (6) \\ $$ Using Kinematic equations on (6): $$ (J_1 - m_1 r^2 - m_2 r^2) \ddot{\theta}_1 + m_2 l r \ddot{\theta}_2 = -\tau \rightarrow (7) \\ $$ Equating (7) with (4): $$ \begin{array}{ccc} \underbrace{(J_1 - m_1 r^2 - m_2 r^2) }\ddot{\theta}_1 &+& \underbrace{(m_2 l r + J_2 ) }\ddot{\theta}_2 &+& \underbrace{m_2 gl}\theta &= 0 \rightarrow (8) \\ A & &B & & C & \\ \end{array} $$ Using Laplace transform and finding the transfer function: $$ \frac{\theta_1}{\theta_2} = -\frac{Bs^2 + C}{As^2} \\ $$ Substituting transfer function into equation (7): $$ (J_1 - m_1 r^2 - m_2 r^2) \frac{\theta_1}{\theta_2}\theta_2 s^2 + m_2 lr\theta_2 s^2 = -\tau \\ $$ Yields: $$ \frac{θ_2}{τ} = \frac{-1}{(mlr-B) s^2+C} $$ Simplifying: $$ \frac{θ_2}{τ}= \frac{1}{J_2 s^2-m_2 gl} $$ Comments: -This only expresses the pendulum without the wheel i.e. dependent only on the pendulums properties. -Poles are real and does verify instability.
I recently bought an IMU. I am new at this. My question: does the positioning of the IMU matter? Are there any differences between placing it at the center of the plate and placing it offset from the center? I am still learning about this topic, so any help would be greatly appreciated. Thanks.
I have a sensor that gives R, Theta, Phi (range, azimuth and elevation), as shown here: https://i.stack.imgur.com/eVci6.jpg

I need to predict the next state of the object given the roll, pitch and yaw angular velocities, using the above information, but the math is really confusing me. So far all I've gotten is this:

Xvel = R * AngularYVel * cos(Theta)
Yvel = R * AngularXVel * cos(Phi)
Zvel = R * AngularYVel * -sin(Theta) + R * AngularXVel * -sin(Phi)

I worked this out by trigonometry; so far this seems to predict the pitching about the X axis and the yawing about my Y axis (sorry, I have to use camera axes), but I don't know how to involve the roll (AngularZVel).
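For illustration, one way to avoid working out the trigonometry case by case (a sketch; here $\theta$ is taken as azimuth and $\phi$ as elevation above the horizontal plane, which must be adjusted to the sensor's actual convention): convert the spherical measurement to a Cartesian point and use the rigid-rotation relation $\mathbf{v} = \boldsymbol{\omega} \times \mathbf{p}$, which brings in all three angular rates, roll included, at once.

$$
\mathbf{p} = R\begin{bmatrix}\cos\phi\cos\theta \\ \cos\phi\sin\theta \\ \sin\phi\end{bmatrix},
\qquad
\mathbf{v} = \boldsymbol{\omega}\times\mathbf{p},
\qquad
\mathbf{p}_{k+1} \approx \mathbf{p}_k + \mathbf{v}\,\Delta t .
$$

The predicted point can then be converted back to $R$, $\theta$, $\phi$ if the filter works in spherical coordinates.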
I am using the ar_track_alvar package in Indigo to detect AR tags and determine their respective poses. I am able to run the tracker successfully, as I can visualize the markers in RViz. I give the following command to print the pose values:

rostopic echo /ar_pose_marker

and I get the following output, indicating that the poses are determined:

header:
  seq: 0
  stamp:
    secs: 1444430928
    nsecs: 28760322
  frame_id: /head_camera
id: 3
confidence: 0
pose:
  header:
    seq: 0
    stamp:
      secs: 0
      nsecs: 0
    frame_id: ''
  pose:
    position:
      x: 0.196624979223
      y: -0.238047436646
      z: 1.16247606451
    orientation:
      x: 0.970435431848
      y: 0.00196992162831
      z: -0.126455066154
      w: -0.205573121457

Now I want to use these poses in another ROS node, and hence I need to subscribe to the appropriate ROS message ("ar_pose_marker"). But I am unable to find enough information on the web about the header files and functions to use in order to extract data from the published message. It would be great if somebody could point to a reference implementation or documentation on handling these messages. It might be useful to note that ar_track_alvar is just a ROS wrapper, and hence people who have used ALVAR outside of ROS may also give their input.

UPDATE: I tried to write code for the above task as suggested by @Ben in the comments, but I get an error. The code is as follows:

#include <ros/ros.h>
#include <ar_track_alvar_msgs/AlvarMarker.h>
#include <tf/tf.h>
#include <tf/transform_datatypes.h>

void printPose(const ar_track_alvar_msgs::AlvarMarker::ConstPtr& msg)
{
  tf::Pose marker_pose_in_camera_;
  marker_pose_in_camera_.setOrigin(tf::Vector3(msg.pose.pose.position.x,
                                               msg.pose.pose.position.y,
                                               msg.pose.pose.position.z));
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "pose_subscriber");
  ros::NodeHandle nh;
  ros::Subscriber pose_sub = nh.subscribe("ar_pose_marker", 1000, printPose);
  ros::spin();
  return 0;
}

And I get the following error:

/home/karthik/ws_ros/src/auto_land/src/pose_subscriber.cpp: In function 'void printPose(const ConstPtr&)':
/home/karthik/ws_ros/src/auto_land/src/pose_subscriber.cpp:17:53: error: 'const ConstPtr' has no member named 'pose'
     marker_pose_in_camera_.setOrigin(tf::Vector3(msg.pose.pose));
                                                     ^
make[2]: *** [auto_land/CMakeFiles/pose_subscriber.dir/src/pose_subscriber.cpp.o] Error 1
make[1]: *** [auto_land/CMakeFiles/pose_subscriber.dir/all] Error 2
make: *** [all] Error 2

Any suggestions?
I am doing research on autonomous cars and am looking for a camera sensor to be used along with a LiDAR laser scanner. A Ladybug camera could be a very good option, but the cost is too high. Could you please suggest camera options with a good FOV that will cost around $1000? Thank you so much! - CHIANG CHEN
I'm working on a project that requires me to build a small vehicle (footprint of ~ 14 x 14 inches, less than 6.5 pounds) that can traverse sand. For the steering system, I was thinking of replicating the way tanks and lawn mowers navigate (ability to do zero-point turns), but I want to do this with four wheels instead of tracks like a tank. I need help with implementing this idea. My preliminary thoughts are to have two motors where each motor power the wheels on one side of the vehicle (I think this would require a gearing system) or to have a motor to power each individual wheel which I'd rather avoid.
I want to replace the flight controller with a smartphone, because the phone has all the sensors that are required, like a gyroscope, magnetometer, etc. Is that possible? I am using a Google Nexus 4 Android phone (OS 5.1). I will control it using another mobile phone; I am able to write an app, with an Arduino acting as a bridge between the smartphone and the copter. I am currently using an OpenPilot CC3D CopterControl flight controller.
In the past I built some simple robot arms at home, using RC servo motors or stepper motors (up to 3 DOF). I would like to build a new arm with 4 or 5 DOF using steppers. Until now I have used an Arduino with A4988 stepper drivers and G-code. For calculating inverse kinematics in real time for a 4 or 5 DOF arm, I think the Arduino is not powerful enough, so I'm searching for a new toolchain: G-code interpreter + inverse kinematics calculation + stepper controller. I have seen LinuxCNC + BeagleBone Black + CNC cape, which is not too expensive for a hobbyist, but this is the only possibility I have found. Are there other possibilities for a hobbyist to implement a 4 or 5 DOF robot arm driven by stepper motors?
My quadcopter's settling time is very large: it reaches its setpoint in a very large amount of time, during which it has covered a large distance. But once settled, when I give it a jerk or push, it returns to the setpoint in a normal amount of time and barely overshoots. The problem is the settling time: when I move the stick forward or back, it takes a huge amount of time to settle. What could be wrong? I have tried giving larger P and I values to the PID, but then it overshoots and becomes unstable. This is my PID routine; the PID gains are given. I read 6 channels from the remote using pulseIn(), which I guess takes up to 20 ms per call.

kp = 1.32;
ki = 0.025;
kd = 0.307;

void PID() {
  error = atan2(lx, ly);
  error *= 1260 / 22;
  error = setpoint1 - error;
  now = millis();
  dt = now - ptime;
  ptime = now;
  dt /= 1000;
  integ = integ + (error * dt);
  der = (error - prerror) / dt;
  pidy = (kp * error);
  pidy += (ki * integ);
  pidy += (kd * der);
  // Serial.println(error);
  prerror = error;
}

pidy is added to and subtracted from the ESC speeds respectively.
I'm interested in building a quadcopter. The result I'd like to obtain is an autonomous drone. I'd be interested in a GPS to allow it to remain stationary in the air, and also to fly through checkpoints. Can this be done with a flight controller, or does it need to be programmed? I'm not too sure about what flight controllers really are. Could someone offer any materials to help me get towards this goal. Thanks, Jacob
I would like to make a Cartesian robot with maximum speed of up to $1ms^{-1}$ in x/y plane, acceleration $2ms^{-2}$ and accuracy at least 0.1mm. Expected loads: 3kg on Y axis, 4kg on X axis. Expected reliability: 5000 work hours. From what I have seen in 3D printers, belt drive seems not precise enough (too much backlash), while screw drive is rather too slow. What other types of linear actuators are available? What is used in commercial grade robots, i.e. http://www.janomeie.com/products/desktop_robot/jr-v2000_series/index.html
This is a very basic beginner question, I know, but I am having trouble connecting to the Hokuyo UST-10LX sensor and haven't really found much in terms of helpful documentation online. I tried connecting the Hokuyo UST-10LX directly to the ethernet port of a Lubuntu 15.04 machine. The default settings of the Hokuyo UST-10LX are apparently:

ip addr: 192.168.0.10
netmask: 255.255.255.0
gateway: 192.168.0.1

So, I went to the network manager and set the IPv4 settings manually: IP address 192.168.0.9, netmask 255.255.255.0, and gateway 192.168.0.1. I also have a route set up to the settings of the scanner. I then go into the terminal and run:

rosrun urg_node urg_node _ip_address:=192.168.0.10

and get this output:

[ERROR] [1444754011.353035050]: [setParam] Failed to contact master at [localhost:11311]. Retrying...

How might I fix this? I figure it's just a simple misunderstanding on my end, but through all my searching I couldn't find anything to get me up and running :(

Thank you for the help! :)

EDIT: HighVoltage pointed out to me that I wasn't running roscore, which was indeed the case. I was actually running into problems before that when I still had roscore up, and when I tried it again, this was the output of the rosrun command:

[ERROR] [1444828808.364581810]: Error connecting to Hokuyo: Could not open network Hokuyo:
192.168.0.10:10940
could not open ethernet port.

Thanks again!
I have a dataset that contains position information from tracking a robot in the environment. The position data comes both from a very accurate optical tracking system (Vicon or similar) and an IMU. I need to compare both position data (either integrating the IMU or differentiating the optical tracking data). The main problem is that both systems have different reference frames, so in order to compare I first need to align both reference frames. I have found several solutions; the general problem of aligning two datasets seems to be called "the absolute orientation problem". My concern is that if I use any of these methods I will get the rotation and translation that aligns both datasets minimizing the error over the whole dataset, which means that it will also compensate up to some extent for the IMU's drift. But I am especially interested in getting a feeling of how much the IMU drifts, so that solution does not seem to be applicable. Anyone has any pointer on how to solve the absolute orientation problem when you do not want to correct for the drift? Thanks
Let us assume I have an object O with axis $x_{O}$, $y_{O}$, $z_{O}$, with different orientation from the global frame S with $x_{S}$, $y_{S}$, $z_{S}$ (I don't care about the position). Now I know the 3 instantaneous angular velocities of the object O with respect to the same O frame, that is $\omega_O^O = [\omega_{Ox}^O \omega_{Oy}^O \omega_{Oz}^O]$. How can I obtain this angular velocity with respect to the global frame (that is $\omega_O^S$)? Thank you!
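For reference, assuming $R_O^S$ denotes the rotation matrix that maps vectors expressed in the object frame O into the global frame S (i.e. the current attitude of the object), angular velocity transforms like any other vector under a change of frame:

$$
\omega_O^S = R_O^S \, \omega_O^O .
$$

So the attitude estimate is all that is needed to re-express the body-frame rates in the global frame; no extra dynamics are involved.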
Let us assume we have a gyro that is perfectly aligned to a global frame ($X,Y,Z$). From what I know, the gyro data give me the angular rate with respect to the gyro axes ($x,y,z$). So let's say I get $\omega_x,\omega_y,\omega_z$. Since I know that the two frames are perfectly aligned, I perform the following operations:

$\theta_X = dt \cdot \omega_x$
$\theta_Y = dt \cdot \omega_y$
$\theta_Z = dt \cdot \omega_z$

where $\theta_X$ is the rotation angle around $X$ and so on. My question is: what does this update look like in the following steps? Because from then on, the measurements that I get are no longer directly related to the global frame (the gyro frame is now rotated with respect to it). Thank you!
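For illustration, the usual way to handle the later steps (a sketch; small-angle, first-order integration assumed): keep the full orientation as a rotation matrix $R_k$ from the gyro frame to the global frame and compose it with the incremental body-frame rotation at every step,

$$
R_{k+1} = R_k \,\exp\!\big([\boldsymbol{\omega}\,dt]_\times\big) \approx R_k\big(I + [\boldsymbol{\omega}\,dt]_\times\big),
$$

where $[\cdot]_\times$ is the skew-symmetric (cross-product) matrix of the body-frame rates. Global-frame angles are then extracted from $R_{k+1}$ rather than integrated individually; the perfectly aligned first step is just the special case $R_0 = I$.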
I'm looking for a "good" algorithm/model for wheeled odometry estimation. We have encoders on the two back wheels of the tricycle robot, and IMU on the controller board. Currently we use MEMS gyro for angular velocity estimation and encoders for linear velocity, then we integrate them to get the pose. But it's hard to calibrate gyro properly and it drifts (due to temperature or just imperfect initial calibration). How can we improve the pose estimation? Should we consider model that incorporates both encoders and gyro for heading estimation? Model slippage, sensor noise? Is there some nice standard model? Or should we just use more/better gyro? Not considering the visual odometry.
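For reference, a minimal sketch of the common "encoders for distance, gyro for heading" dead-reckoning update (the interface and the bias-handling scheme are assumptions; the gyro bias is estimated, e.g. by averaging while the robot is known to be stationary, and subtracted):

#include <cmath>

struct Pose { double x = 0, y = 0, theta = 0; };

// One dead-reckoning step.
//   dLeft, dRight : wheel travel since the last step [m] (from the encoders)
//   gyroRate      : measured yaw rate [rad/s]
//   gyroBias      : bias estimate, e.g. averaged while standing still [rad/s]
//   dt            : time step [s]
void integrate(Pose& p, double dLeft, double dRight,
               double gyroRate, double gyroBias, double dt) {
    double ds     = 0.5 * (dLeft + dRight);          // linear displacement from encoders
    double dtheta = (gyroRate - gyroBias) * dt;      // heading change from the gyro
    // midpoint integration: advance along the average heading of the step
    p.x     += ds * std::cos(p.theta + 0.5 * dtheta);
    p.y     += ds * std::sin(p.theta + 0.5 * dtheta);
    p.theta += dtheta;
}

Re-estimating the bias whenever the encoders report zero motion is a cheap way to fight the temperature-dependent drift mentioned above; the next step up is an EKF over [x, y, theta, bias] that fuses the encoder-derived heading rate and the gyro rate explicitly, with slippage folded into the process noise.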
To avoid wasting your time on this question, you might only want to respond if you have knowledge of industrial robotic arms specifically; common troubleshooting is unlikely to fix this problem, or could take too much time. We've started a project with the Mitsubishi Melfa RV-2AJ robotic arm. Everything went fine until the moment we replaced the batteries. The controller displays "FAiL" and does not respond to any buttons or commands sent through the serial connection. We replaced the batteries of both the robot and the controller. As it took some time to get the batteries delivered, we left the robot (and controller) without power for the weekend, which might have caused this problem. Is there anyone with knowledge of Mitsubishi robotic arms around here? I'm kind of hoping it is a common problem/mistake and that anyone with experience on this subject would know about it.
I am working on a project where I want to run some computer vision algorithms (e.g. face recognition) on the live video stream coming from a flying drone. There are many commercial drones out there that offer video streams, like http://www.flyzano.com/mens/ https://www.lily.camera/ etc.. But none of them seem to give access to the video feed for real-time processing. Another idea is to have the drone carry a smartphone, and do the processing on the phone through the phone's camera. Or just use a digital camera and an arduino that are attached to the drone. Although these ideas are feasible, I would rather access the video-feed of the drone itself. So my question is that are there any drones out there that offer this feature? or can be hacked somehow to achieve this?
I am planning to build a homemade ROV, and I wanted to know a couple of things about the motors. First: will it be OK to use brushed DC motors instead of brushless motors, and are there any major disadvantages? Second: what RPM should I aim for, high or low? Will 600 RPM be enough? The specific motor that I am talking about is http://www.ebay.ca/itm/37mm-12V-DC-600RPM-Replacement-Torque-Gear-Box-Motor-New-/320984491847?hash=item4abc2aa747:m:mEBEQXXpqmNg4-vxmFaZP5w. Will this be a good motor for the propellers of the ROV? I am planning to have 4 motors/propellers: two for upward and downward thrust, and two for forward and sideways thrust. The propellers I plan to use are basic plastic 3-blade propellers, with a diameter between 40mm and 50mm. My main question is: what RPM and torque should I aim for when choosing the DC motor?
I do have a robotic application, where a 7Dof robot arm is mounted on a omnidirectional mobile platform. My overall goal is to get MoveIt! to calculate a sequence of joint movements, such that the robot EEF reaches a desired goal in Cartesian space. In order to combine a robot platform with a world, the MoveIt! setup assistant lets you assign virtual joints between the "footprint" of the platform and the world it is placed in. I do have two strategies. Either select a planar joint as a virtual joint. (What are the degrees of freedom or respectively the joint information that I can gather from this joint) or select a fixed joint and add a (prismatic-x -> prismatic-y -> revolute-z) chain to the robot model. Are there any significant differences (advantages/ disadvantages) to either of the approaches?
I'm using an accelerometer and gyroscope to detect the angle and tilt rate on my two-wheeled cart-pole robot. Is there an optimal height to place the sensors? Should I place them closer to the bottom (near the wheels), the middle (near the center of mass), or the top? Justification for the optimal choice would be appreciated.
I'm working on modeling and simulation of a robotic arm. After I obtained the mathematical model of the robot, I used it to implement some control techniques to control the motion of the robot. The dimensions and masses of each link are taken from an available kit; basically, it's the RA02 robot with a servo at each joint. After the modeling, different parameters can be plotted, like the joint angles/speeds/torques, etc. The point now is that the values obtained for the joint torques are much higher than the torque limit of the servos. Does this mean my design/modeling is not realizable? Is it necessary to get a torque value close to the servo's rated torque? Any suggestions?
The rest of my student team and I are in the process of redesigning an exoskeleton and building it based on an existing one. In the papers we have been reading there are references to low, high and zero impedance torque bandwidth. What is that? Does it have to do with the control system? It is measured in Hz. Here is a table from one of the papers:
I haven't found much literature on this topic, which is why I'm asking here. Does anyone know some ways to estimate the drift rate of a gyroscope? I was thinking about basically two approaches. One would be to use a low-pass filter with a low cut-off frequency to estimate the drift of the angular velocity. The second would be to use the accelerometer, calculate the attitude DCM and from this also an angular velocity; the difference between the accelerometer-derived angular velocity and the gyroscope reading would then be, roughly, the drift rate. Nevertheless, I am not so sure whether this is a good way to get reliable drift rates :D
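A minimal sketch of the first idea in Python (the variable names, cut-off frequency and stationarity threshold are all assumptions, not tuned values): a first-order low-pass filter with a very low cut-off acts as a running bias estimate, and it is only updated when the accelerometer magnitude suggests the sensor is not moving.

import numpy as np

def estimate_gyro_bias(gyro, acc, fs, f_cut=0.01, tol=0.05):
    # gyro: (N,) rad/s samples of one axis, acc: (N, 3) m/s^2, fs: sample rate in Hz
    alpha = 2 * np.pi * f_cut / fs        # first-order low-pass coefficient (valid for f_cut << fs)
    g = 9.81
    bias = np.empty_like(gyro)
    b = gyro[0]
    bias[0] = b
    for k in range(1, len(gyro)):
        # crude stationarity check: total acceleration close to 1 g
        if abs(np.linalg.norm(acc[k]) - g) < tol * g:
            b = (1 - alpha) * b + alpha * gyro[k]   # slowly track the DC component of the rate
        bias[k] = b
    return bias

# usage: corrected_rate = gyro - estimate_gyro_bias(gyro, acc, fs=100.0)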
Is it possible to set up communication between an Arduino Uno and an Android phone using a wire that directly connects the Android phone and the Arduino?
I have a 1-inch square tube that I would like to place a motor into. The motor I have takes up approximately half of the available space (roughly 3/4 inch I.D.). I would like to find the largest motor that will fit in the space without having to cobble together too much of a housing. Where/how can I find motors by physical dimensions?
I have a Crock-Pot with an analog knob and would like to find a way to turn the knob by using an appliance timer. I have no idea where to begin. I need help. Thanks.
Currently I am developing a control system for an aircraft of a unique design (something in between a helicopter and a dirigible). At this moment I can model only the dynamics of this vehicle, without any aerodynamic effects taken into account. For this I use the following workflow: mechanical model in SolidWorks -> MSC ADAMS (dynamics) <--> MATLAB/Simulink (control algorithms). Thus, the dynamics of the vehicle is modeled in ADAMS and all control algorithms are in MATLAB/Simulink. Unfortunately, ADAMS cannot simulate any aerodynamic effects. As a result, I cannot design a control system that is capable of rejecting even small wind disturbances. How can I bring aerodynamic/wind disturbances into this workflow?
I know how to make a line follower. But what exactly have they done in this video? They give the source and destination on the map, but how does the robot move based on the instructions given in the map? What is the procedure for doing this? They have mapped the path. Please do watch the video.
I want a mobile robot to go from a starting position to a goal position, but I don't want to calculate the pose from encoders. Instead, I want to know whether there exists a simulator that provides a pose-aware motion function that makes this easier, like go_to(x_coordinate, y_coordinate); that is, the robot automatically keeps track of its current position and drives itself to the goal position.
I am planning to use MATLAB and Gazebo for one of my course projects. However, all the tutorials I have seen so far run Gazebo in a virtual machine that has ROS and Gazebo installed. I have already installed ROS and Gazebo on this machine (OS: Ubuntu), and I also have MATLAB installed on it. Is it possible to use Gazebo on this machine itself with the MATLAB toolbox?
What are the main differences between motion planning and path planning? Imagine that the objective of the algorithm is to find a path between a humanoid soccer-playing robot and the ball which is as short as possible, yet satisfies a specified safety margin in terms of distance from obstacles. Which is the better terminology: motion planning or path planning?
I am currently planning on building a robotic arm. The arm's specs are as follows:
- 3 'arms', each with two servos (to move the next arm)
- single-servo clamp mounted on a revolving turntable
- turntable rotated by a stepper motor
- turntable mounted on the baseplate by ball bearings to allow rotation
- baseplate mounted on a caterpillar-track chassis
- baseplate smaller in length and width than the caterpillar chassis
What are the required formulas for determining how much torque each servo must produce, keeping in mind that the arm must be able to lift weights of up to 1 kilogram? Also, considering that the ball bearings will take the load of the arm, how strong does the stepper have to be (just formulas, no answers)? As far as overall dimensions are concerned, the entire assembly will be roughly 255 mm x 205 mm x 205 mm (L x W x H). I have not finalized the arm length, but these dimensions give a general estimate of the size.
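A common first-pass sizing rule (a static, worst-case estimate; the safety factor and the neglect of dynamics are assumptions to revisit) is to require the holding torque at each joint $i$, with the arm stretched out horizontally, to satisfy
$$\tau_i \;\ge\; S\,g\left(\sum_{j\ge i} m_j\,r_{ij} \;+\; m_{\text{load}}\,L_i\right),$$
where $m_j$ is the mass of link $j$ (servo included), $r_{ij}$ the horizontal distance from joint $i$ to that link's centre of mass, $m_{\text{load}} = 1\ \text{kg}$ the payload, $L_i$ the distance from joint $i$ to the gripper, and $S \approx 1.5$ to $2$ a safety factor covering accelerations and servo derating. For the turntable stepper, since the ball bearings carry the weight, it mostly has to overcome inertia and friction about the vertical axis: $\tau_{\text{stepper}} \approx I_z\,\alpha + \tau_{\text{friction}}$, with $I_z$ the moment of inertia of the rotating assembly and $\alpha$ the angular acceleration you want.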
As someone who is new and still learning about robotics, I hope you can help me out. Let's say I have two systems: (a) an inverted pendulum (an unstable system) and (b) a pole-climbing robot (a stable system). For system (a), I would say that it is generally a more dynamic system that produces fast motion, so in order to control it effectively I would have to derive the equations of motion (EOM), and only then can I supply the input needed to achieve the desired output. Eventually, the program implements the EOM, which enables the microcontroller to produce the right signal to get the desired output. For system (b), however, I assume that it is a stable system. Instead of deriving the EOM, why can't I just rely on the sensors to determine whether the output produced is exactly what I want to achieve? An unstable system is simply difficult to control, and moreover it does not tolerate erratic behaviour well; the system will get damaged as a consequence. A stable system, on the contrary, is more tolerant of unpredictable behaviour, since it is in fact stable. Am I right to think about it from this perspective? What exactly is the need for deriving the EOM of systems (a) and (b) above? What are the advantages? How does it affect the programming of such systems? Edited: some examples of the climbing robot that I'm talking about: i.ytimg.com/vi/gf7hIBl5M2U/hqdefault.jpg ece.ubc.ca/~baghani/Academics/Project_Photos/UTPCR.jpg
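For concreteness, here is one standard textbook form of the EOM for (a): a pendulum of mass $m$, distance $l$ from pivot to centre of mass and inertia $I$, mounted on a cart of mass $M$, with $\theta$ measured from the upright position and $F$ the force applied to the cart (this is the generic cart-pole model, not any specific robot):
$$(M+m)\,\ddot{x} + m l\,\ddot{\theta}\cos\theta - m l\,\dot{\theta}^2\sin\theta = F,$$
$$(I + m l^2)\,\ddot{\theta} + m l\,\ddot{x}\cos\theta - m g l\,\sin\theta = 0.$$
Linearising about $\theta = 0$ yields a pole in the right half plane, which is why purely reactive, sensor-only feedback is hard to tune for (a) and a model-based design (pole placement, LQR, ...) is the usual route; for a stable, slowly moving system like (b) the model mainly buys you better performance prediction rather than being a prerequisite for stability.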
I would like to filter angular velocity data from a "cheap" gyroscope ($60). These values are used as an input to a nonlinear controller in a quadcopter application. I am not interested in removing the bias from the readings. Edit: I'm using an L3G4200D gyroscope connected via I2C to an Arduino Uno. The following samples are acquired with the Arduino, sent over serial and plotted using MATLAB. When the sensor is steady, the plot shows several undesired spikes. How can I filter out these spikes? 1st approach: the spikes are attenuated but still present... Let's consider the following samples, in which a couple of fast rotations are performed, and let's assume that the frequency components of the "fast movement" are the ones I will deal with in the final application. Below are the discrete Fourier transform of the signal on a normalized frequency scale and a second-order Butterworth low-pass filter. With this filter, the main components of the signal are preserved and the undesired spikes are attenuated by a factor of three, but the plot shows a slight phase shift... and the spikes are still present. How can I improve this result? Thanks. EDIT 2: 1./2. I am using a breakout board from SparkFun. You can find the circuit with the Arduino and the gyro in this post: Can you roll with a L3G4200D gyroscope, Arduino and Matlab? I have added pull-up resistors to the circuit. I would exclude this option because other sensors are connected via the I2C interface and they are working correctly. I don't have any decoupling capacitors installed near the integrated circuit of the gyro, but the breakout board I'm using has them (0.1 uF); please check the left side of the schematic below, maybe I am wrong. The motors have a separate circuit and I have soldered all the components on a protoboard. The gyro is on the quadcopter body, but during the test the motors were turned off. That is interesting. The sampling frequency used in the test was 200 Hz. Increasing the update frequency from 200 to 400 Hz doubled the glitching. I found other comments on the web about the same breakout board and topic. Open the comments at the bottom of the page and Ctrl-F "virtual1".
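One common offline recipe for isolated spikes like these (a sketch in Python rather than MATLAB; the file name, cut-off frequency and window length are assumptions): a short median filter rejects the outliers without smearing them into neighbouring samples, and the same second-order Butterworth low-pass is then applied with zero-phase filtering. Note that filtfilt is non-causal, so it is only for offline analysis; on the Arduino you would need a causal filter (or a small running median) instead.

import numpy as np
from scipy.signal import medfilt, butter, filtfilt

fs = 200.0                              # sample rate used in the test, Hz
gyro = np.loadtxt("gyro_z.txt")         # raw angular-rate samples (assumed file name)

despiked = medfilt(gyro, kernel_size=5)     # 5-sample median removes isolated glitches
b, a = butter(2, 20.0 / (fs / 2.0))         # 2nd-order Butterworth, ~20 Hz cut-off (assumed)
smoothed = filtfilt(b, a, despiked)         # zero-phase low-pass, no phase shift offline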
First off, sorry if my question is too naive or not related to this forum (this is the best-matching one I've found on Stack Exchange). I have a number of SIM cards. I can programmatically access a single SIM card if it is inserted into a USB modem, and I want to be able to access any specified card in the set. The best way to achieve this that I can think of is to create a device that would somehow replace the current card in the modem with one from the set. I cannot use several modems for this because I don't really know the number of cards in advance, and I would like to automate this process anyway. I am more of a programmer than an engineer, so everything that follows (including the entire concept of switching cards) looks pretty weird to me; there probably is a better solution, but this is the best I've come up with. For now I'm considering building some sort of conveyor that would move the cards and insert the one I need with some sort of feed device. This looks like overkill to me, and would be both expensive to build and ineffective to work with. I would like an idea for a device that swaps SIM cards in the modem (or a better solution to the problem). Any disassembly of the modem needed is possible. This is required to automate receiving SMS from clients that have different contact phone numbers. Unfortunately, a simple redirection of SMS is not an option.
I have the following system here: https://i.stack.imgur.com/DKIDk.jpg Basically, I have a range finder which gives me $R_s$ in this 2D model. The model also rotates about the centre of mass, where I have the angular position and velocity, beta ($\beta$) and beta-dot ($\dot{\beta}$). I can't see, for the life of me, how to derive the formula for the angular velocity in the range-finder frame. How am I supposed to do this? I have all the values listed in those variables. The object being ranged does not move when the vehicle/system pitches; it is stationary.
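For what it's worth, assuming the range finder is rigidly mounted at a fixed offset $\mathbf{r}$ from the centre of mass, the angular velocity is a property of the whole rigid body: the range-finder frame rotates at exactly the same rate $\boldsymbol{\omega} = \dot{\beta}\,\hat{\mathbf{z}}$ as the body frame. What differs between the two points is the linear velocity,
$$\mathbf{v}_R \;=\; \mathbf{v}_C \;+\; \boldsymbol{\omega}\times\mathbf{r} \;=\; \mathbf{v}_C \;+\; \dot{\beta}\,(-r_y,\; r_x)\quad\text{(2D case)},$$
and the apparent range rate to the stationary object is the component of $-\mathbf{v}_R$ along the line of sight.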
I am trying to measure Euler angles from an IMU, but some discontinuities happen during the measurement, even in a vibration-free environment, as shown in the images below. Can someone explain which type of filter would be the best choice for filtering this type of discontinuity?
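If those jumps are clean steps of roughly 180 or 360 degrees, they are usually not noise at all but wrap-around of the Euler-angle representation (or gimbal lock when pitch approaches 90 degrees), and a filter is the wrong tool; unwrapping the angle sequence is the usual first step. A minimal Python sketch, assuming the angles are stored in degrees in a plain text file (the file name is made up):

import numpy as np

yaw_deg = np.loadtxt("yaw.txt")                        # Euler angle samples in degrees (assumed file)
yaw_cont = np.rad2deg(np.unwrap(np.deg2rad(yaw_deg)))  # re-add the multiples of 360 deg lost to wrapping

If the discontinuities persist after unwrapping, or only appear near pitch = 90 degrees, switching to a quaternion attitude representation (and converting to Euler angles only for display) is the more robust fix.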
I am a beginner at robotics. I recently stumbled across this robotic clock on YouTube. I am an electrical engineering student and am interested in submitting it as my minor project. I have studied the basics of forward and inverse kinematics, Grübler's equation and the four-bar linkage, but this robot seems to be a five-bar linkage. I want to know how to implement it as a five-bar linkage, and how to use the inverse kinematics solutions described in "Combined synthesis of five-bar linkages and non-circular gears for precise path generation" to make the robot follow a desired trajectory. I have been stuck at this for days... any sort of help would be appreciated.
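In case it helps, for the symmetric five-bar used in those plot clocks the inverse kinematics reduces to two independent two-link problems that share the pen point, each solved with the law of cosines. A Python sketch (link lengths, base spacing and the elbow-out sign convention are placeholders, not values from any particular robot):

import numpy as np

def five_bar_ik(x, y, a=35.0, b=55.0, d=25.0):
    # Shoulder angles (radians) of a symmetric five-bar linkage.
    # Shoulders at (0, 0) and (d, 0); proximal links of length a, distal links
    # of length b meeting at the pen point (x, y). Lengths are placeholder mm values.
    def shoulder_angle(bx, elbow_sign):
        dx, dy = x - bx, y                     # vector from this shoulder to the pen
        r = np.hypot(dx, dy)
        # law of cosines: angle between the proximal link and the shoulder-to-pen line
        phi = np.arccos(np.clip((a**2 + r**2 - b**2) / (2 * a * r), -1.0, 1.0))
        return np.arctan2(dy, dx) + elbow_sign * phi
    return shoulder_angle(0.0, +1), shoulder_angle(d, -1)   # left and right, elbows outwards

Sweeping (x, y) along the desired trajectory (e.g. the digit strokes of the clock) and sending the two angles to the servos at a fixed rate is essentially all such a clock does; judging by its title, the synthesis machinery in that paper is mainly about choosing the link and gear geometry for a given path, which you only need if the geometry itself is up for design.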
I'm given an assignment in which I have to design a full state feedback controller by pole placement. The state space system is fully controllable and I've been using Matlab/Simulink to determine the required feedback gain K using the place() command for several sets of poles, however once I use poles that are "too negative", for example p=[-100,-200,-300,-400,-500], my controlled system starts showing bounded oscillatory behaviour. Is it possible that too negative poles can cause marginal stability? And if so, why? I've read that this is only possible when the real part of one or more poles equals 0, which certainly isn't the case here.
I am trying to establish an FRI connection for the KUKA LBR iiwa. I know how to configure the FRI connection, as there are example programs available in the Sunrise.Workbench; a sample is given below. My question is how to pass the joint torque values (or joint positions, or a wrench) to the controller using the 'torqueOverlay' shown in the code below. Since I could not find any documentation on this, it has been quite difficult to figure out. Any sample code with an explanation, or any clues, would be more than helpful. Java code:

package com.kuka.connectivity.fri.example;

import static com.kuka.roboticsAPI.motionModel.BasicMotions.ptp;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import com.kuka.connectivity.fri.ClientCommandMode;
import com.kuka.connectivity.fri.FRIConfiguration;
import com.kuka.connectivity.fri.FRIJointOverlay;
import com.kuka.connectivity.fri.FRISession;
import com.kuka.roboticsAPI.applicationModel.RoboticsAPIApplication;
import com.kuka.roboticsAPI.controllerModel.Controller;
import com.kuka.roboticsAPI.deviceModel.LBR;
import com.kuka.roboticsAPI.motionModel.PositionHold;
import com.kuka.roboticsAPI.motionModel.controlModeModel.JointImpedanceControlMode;

/**
 * Moves the LBR in a start position, creates an FRI-Session and executes a
 * PositionHold motion with FRI overlay. During this motion joint angles and
 * joint torques can be additionally commanded via FRI.
 */
public class LBRTorqueSineOverlay extends RoboticsAPIApplication
{
    private Controller _lbrController;
    private LBR _lbr;
    private String _clientName;

    @Override
    public void initialize()
    {
        _lbrController = (Controller) getContext().getControllers().toArray()[0];
        _lbr = (LBR) _lbrController.getDevices().toArray()[0];
        // **********************************************************************
        // *** change next line to the FRIClient's IP address                ***
        // **********************************************************************
        _clientName = "127.0.0.1";
    }

    @Override
    public void run()
    {
        // configure and start FRI session
        FRIConfiguration friConfiguration = FRIConfiguration.createRemoteConfiguration(_lbr, _clientName);
        // for torque mode, there has to be a command value at least all 5ms
        friConfiguration.setSendPeriodMilliSec(5);
        friConfiguration.setReceiveMultiplier(1);

        getLogger().info("Creating FRI connection to " + friConfiguration.getHostName());
        getLogger().info("SendPeriod: " + friConfiguration.getSendPeriodMilliSec() + "ms |"
                + " ReceiveMultiplier: " + friConfiguration.getReceiveMultiplier());

        FRISession friSession = new FRISession(friConfiguration);
        FRIJointOverlay torqueOverlay = new FRIJointOverlay(friSession, ClientCommandMode.TORQUE);

        // wait until FRI session is ready to switch to command mode
        try
        {
            friSession.await(10, TimeUnit.SECONDS);
        }
        catch (final TimeoutException e)
        {
            getLogger().error(e.getLocalizedMessage());
            friSession.close();
            return;
        }

        getLogger().info("FRI connection established.");

        // move to start pose
        _lbr.move(ptp(Math.toRadians(90), Math.toRadians(-60), .0, Math.toRadians(60), .0, Math.toRadians(-60), .0));

        // start PositionHold with overlay
        JointImpedanceControlMode ctrMode = new JointImpedanceControlMode(200, 200, 200, 200, 200, 200, 200);
        PositionHold posHold = new PositionHold(ctrMode, 20, TimeUnit.SECONDS);

        _lbr.move(posHold.addMotionOverlay(torqueOverlay));

        // done
        friSession.close();
    }

    /**
     * main.
     *
     * @param args
     *            args
     */
    public static void main(final String[] args)
    {
        final LBRTorqueSineOverlay app = new LBRTorqueSineOverlay();
        app.runApplication();
    }
}
I am trying to calculate the thrust my 4 quadcopter motors will produce, but I am not sure how to do it. Here are the parts I am using:
- 4S 6600 mAh 14.8 V LiPo pack
- 15x5.5 prop
- 274 KV motor, max output 28 A
- 35 A ESC
Thank you.
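A rough way to bound this without test data is actuator-disk (momentum) theory: the ideal power to produce static thrust $T$ with a disk of area $A$ is $P = T^{3/2}/\sqrt{2\rho A}$, so $T \approx (\mathrm{FM}\cdot P\cdot\sqrt{2\rho A})^{2/3}$, with a figure of merit FM well below 1 for real propellers. The Python sketch below puts numbers on that; the motor efficiency and figure of merit are pure assumptions, so treat the result as an optimistic estimate and prefer the propeller manufacturer's thrust tables or a bench test.

import math

V_batt = 14.8            # V, 4S pack
I_max = 28.0             # A, stated motor limit
eta_motor = 0.80         # assumed electrical-to-shaft efficiency
FM = 0.55                # assumed propeller figure of merit
rho = 1.225              # kg/m^3, sea-level air density
D = 15 * 0.0254          # 15 inch prop diameter in metres
A = math.pi * (D / 2) ** 2

P_shaft = eta_motor * V_batt * I_max
T = (FM * P_shaft * math.sqrt(2 * rho * A)) ** (2.0 / 3.0)   # newtons, per motor
print(T / 9.81, "kgf per motor (rough upper-bound estimate)")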
When I send several commands in a row, some don't get executed. For example, I have a script which starts the Roomba driving in a circle and plays the John Cena theme song through its speakers, but sometimes it will only play the music and not drive. I have noticed that in all the guides there are pauses after every command. Is there any documentation which describes when pauses are needed?
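I'm not aware of a single table of required delays beyond what the Open Interface (OI) spec says about mode changes, but in practice a short pause after the Start/mode opcodes and a small gap between back-to-back commands avoids exactly this "music plays but it doesn't drive" symptom. A minimal pyserial sketch: the serial port and baud rate are assumptions for a 500/600-series Roomba, and the opcodes (128 Start, 131 Safe, 137 Drive) should be checked against your model's OI manual.

import serial, time

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)   # port and baud are assumptions

ser.write(bytes([128]))       # START: enter the Open Interface
time.sleep(0.2)               # give the OI time to change state
ser.write(bytes([131]))       # SAFE mode
time.sleep(0.2)

# DRIVE (opcode 137): 200 mm/s forward on a 500 mm radius, 16-bit big-endian values
vel, radius = 200, 500
ser.write(bytes([137]) + vel.to_bytes(2, "big", signed=True) + radius.to_bytes(2, "big", signed=True))
time.sleep(0.05)              # brief gap before the next command (e.g. the song opcodes)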
I have an STM32F072RB Nucleo board, which has a 64-pin microcontroller. For my application I chose the STM32F103RG, which has larger RAM and flash. Can I remove the F072R from the Nucleo board and put an F103R in its place? I am testing my code on an F103C, but its flash and RAM sizes do not meet my requirements. I have an F072R Nucleo board lying around, so for a quick development test could I swap it for the 103R? The R series is pin-compatible! Has anyone done microcontroller swapping like this before?
I am currently interested in SCARA arm designs and I have a few beginner questions for which I haven't found answers yet.
1/ While comparing professional arms (made by Epson, Staubli...), I noticed that the actuator used for the translation along the Z axis is at the end of the arm. On "hobby" arms like the makerarm project on Kickstarter, they use a leadscrew with the actuator at the beginning of the arm. I thought it was smarter to put the actuator handling this DoF at the beginning of the arm (because of its weight) and not at the end, but I assume these companies have more experience than the company behind the makerarm, so I'm probably wrong; I would like to understand why :)
2/ I would also like to understand what kind of actuators are used in these arms. The flx.arm (also a Kickstarter project) seems to be using stepper motors, but they also say they are using closed-loop control, so they added an encoder to the stepper motors, right? Wouldn't it be better not to use steppers and, for instance, use brushless DC motors or servos instead?
3/ I also saw some of these arms using belts for the second Z-axis rotation. What is the advantage? Does it only allow putting the actuator at the beginning of the arm?
I'm trying to make a quadcopter with an Arduino. I already have the angles (roll, pitch and yaw) thanks to an IMU; they are in degrees and filtered with a complementary filter. I want to apply a PID algorithm to each axis, but I don't know if the inputs should be angles (degrees) or angular velocities (degrees per second) for computing the errors with respect to the references. What would be the difference, and which is the best way? Finally, another question about PID code: I have seen that many people don't include time in their code. For example, their derivative term is kd×(last error - current error) instead of kd×(last error - current error)/looptime, and something similar for the integral term. What is the difference? Thank you in advance.
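On the structure question, a common arrangement (not the only one) is a cascaded controller: an outer loop on the angle whose output is the desired angular rate, and an inner PID on the gyro rate. On the timing question, here is a Python sketch of a PID step with the loop time made explicit (class and variable names are illustrative). If the loop period is constant, dropping dt just rescales Ki and Kd, which is why many Arduino examples omit it; with a variable loop period you should keep it.

import time

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.prev_time = time.monotonic()

    def update(self, setpoint, measurement):
        now = time.monotonic()
        dt = now - self.prev_time                 # measured loop period in seconds
        self.prev_time = now
        error = setpoint - measurement
        self.integral += error * dt               # integral term scaled by dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative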
I am using an Arduino and an L298N motor driver IC to drive four 12 V DC motors (150 rpm), powered by an 11.1 V LiPo battery (3-cell, 3300 mAh, 20C). I have tied the two PWM (enable) pins of the L298N to digital HIGH from the Arduino. The battery's positive terminal is connected to the 12 V input of the IC, and the battery's negative terminal and the Arduino ground are connected to the IC's ground input. A 5 V supply is also provided from the Arduino to the IC, and the Arduino ground is connected to the other GND pin adjacent to the INT3 pin. The Motor1 outputs of the L298N are connected to two motors (in parallel, on the right side of the bot) and the Motor2 outputs to the other two motors (in parallel, on the left side). Appropriate inputs are given to INT1, INT2, INT3 and INT4 to drive the bot forward, but the bot moves too slowly: the voltage measured across the Motor1 pins is only 5 V. When I connect the battery directly to the motors, they run very fast. How can I make it run fast? Please help.
I am participating in a robotics competition in which I am supposed to design and build two robots. One of them cannot have a driving actuator (it can have a steering actuator, fed by a line-following circuit); the other is supposed to drive the non-driving robot through an obstacle course without touching it. This is driving me a bit crazy, since at one point the separation between the two robots is 60 cm (23 inches). Ways I've considered: wind energy (won't work, I would need huge sails) and magnetic repulsion of some sort. I've spent a lot of time studying repulsion; my solution was to use strong permanent magnets (neodymium, N52) on the non-driving robot and electromagnets on the driving robot, but after a huge load of calculations I came to the conclusion that not enough force can be transmitted over that distance, as magnetic fields fall off too quickly. Rulebook: http://ultimatist.com/video/Rulebook2016_Final_website_1_Sep_15.zip I am really looking for even a pointer here. Is there a trick somewhere that I am missing?
I have a 6 DOF arm whose velocities I'm controlling as a function of force applied to the end effector. The software for the robot allows me to input either the desired end effector velocity or the desired joint angular velocities, which I know can be found using the inverse Jacobian. Are there any benefits of using one scheme over the other? Would, for example, one help avoid singularities better? Does one lead to more accurate control than the other does?
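One practical difference worth noting: if you compute the joint velocities yourself, you also choose how the Jacobian is inverted, and a damped least-squares (pseudo)inverse keeps the joint speeds bounded near singularities, which a black-box Cartesian-velocity interface may or may not do for you. A minimal Python sketch, assuming J is the 6x6 geometric Jacobian at the current configuration and xdot the desired end-effector twist:

import numpy as np

def dls_joint_velocities(J, xdot, damping=0.05):
    # damped least-squares: qdot = J^T (J J^T + lambda^2 I)^-1 xdot
    # the damping factor trades tracking accuracy for bounded joint speeds near singularities
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), xdot)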
This might be a dumb question. I started playing with this robot and a Raspberry Pi two days ago. I did some simple stuff, like driving around and reading sensors. But since last night it seems like I cannot send any commands. The built-in Clean and Dock functions work perfectly, but I cannot do anything using the same Python code I used before. It's behaving as if nothing is going through the RX line. Can you suggest what might have gone wrong? Thanks.