I am working on estimating a robot's pose using odometry and GPS. My first problem is that every kinematic model I have seen for a differential drive robot uses the displacement of the left and right wheels to evaluate the robot's next pose. However, the robot I have only outputs its current X and Y position relative to the starting point of the movement. Can I use this as my state estimate, P = [x, y]^T, with P = [x0, y0] + [dx, dy], where dx and dy are the changes in the respective coordinates obtained from the robot's odometry? If the above is possible, how do I calculate the state covariance Q of the filter? For GPS, how do I evaluate the covariance R? I have tried to collect multiple readings of latitude and longitude from a fixed point, but I don't know if this is right, and I can't work out how to evaluate the covariance from these data. Thank you in anticipation.
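For the R question above, the usual first estimate is simply the sample covariance of the repeated fixes. A minimal sketch in Python, assuming the latitude/longitude readings have already been converted to local metres (e.g. with an equirectangular approximation); the numbers are made up:

    import numpy as np

    # repeated GPS fixes of a fixed point, converted to local x/y in metres (hypothetical data)
    fixes = np.array([[ 0.12, -0.43],
                      [ 0.30,  0.05],
                      [-0.25,  0.61],
                      [ 0.08, -0.10]])

    mean = fixes.mean(axis=0)          # should sit close to the true fixed point
    R = np.cov(fixes, rowvar=False)    # 2x2 sample covariance, usable as the GPS noise R
    print(mean)
    print(R)

A similar logging approach (odometry increments compared against a ground truth or a long static run) can give a rough starting value for Q, which is then usually tuned by hand.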
I'm a bit at my wits' end here - I'm trying to build a tilt compensated compass for my autonomous sailboat (ardusailor!). I'm using an InvenSense MPU9150. Originally, I used the built-in fusion support on the sensor to get a quaternion, pull the yaw/pitch/roll angles from that, and then use this formula to do the tilt compensation: float heading = atan2(-(mz * s_phi - my * c_phi), mx * c_theta + my * s_theta * s_phi + mz * s_theta * c_phi); where the various s_angle is sin(angle) and c_angle is cos(angle). That didn't work. I tried using a vector-based approach stolen from here. That didn't work. Then, I took away the tilt compensation, and just did an uncompensated atan2(Yh,Xh), and that produced very strange result as well. Basically, as I rotate the sensor about the z axis, the value rotates between 70 and -10 degrees, completing a full circle (i.e. as i make a 360 degree rotation, it starts at 70, gets to -10, and then back up to 70). 70 is at about 0* magnetic, 10 is at about 180, 0 is at about 70-80. I see the same behavior from an HMC5883L magnetometer chip as well. The thing is, looking at raw values, I get magnetic values that seem fine, and hard and soft iron offsets are in place: top row is corrected for offsets (using an ellipsoid fit method), bottom is raw. The numbers may look skewed, but they aren't - the scales aren't all the same. Graphs are, in order, x:y, y:z, x:z What could this be?
I'm facing a really weird problem with EKF localization. The filter gives me a wrong estimate every time the robot is parallel with a landmark. I've debugged the code many times but failed to solve the problem; however, I found out where exactly the problem occurs. The following picture shows the scenario. The robot moves in a circular motion. There are four landmarks. I have indicated in the picture where the filter gives me a wrong angle for the estimated state. As you can see, when the robot is parallel with all landmarks, I get a wrong angle for the estimated robot's pose. Another picture shows how the estimated angle is wrong, where the red circle is the estimated robot's pose and the blue one is the actual robot's pose. I also tracked the problem numerically. What I found is that the estimated measurement of landmark # 4 is in the opposite direction of the actual measurement of landmark # 4.

i = 1   <---- landmark 1 <200,0>
est_robot  =   6.4545   21.1119    0.1246
Zobs       = 194.9271   -0.2208    1.0000
Zpre       = 194.6936   -0.2333    1.0000
real_robot =   6.2069   20.9946    0.1188
Mubar      =   6.2844   21.7029    0.1201

i = 2   <---- landmark 2 <200,200>
est_robot  =   6.2844   21.7029    0.1201
Zobs       = 263.8102    0.5982    2.0000
Zpre       = 263.2785    0.6239    2.0000
real_robot =   6.2069   20.9946    0.1188
est_robot  =   6.2901   21.0100    0.0155

i = 3   <---- landmark 3 <-200,200>
est_robot  =   6.2901   21.0100    0.0155
Zobs       = 273.0734    2.2991    3.0000
Zpre       = 273.1173    2.4114    3.0000
real_robot =   6.2069   20.9946    0.1188
est_robot  =   6.2840   21.0462    0.0259

i = 4   <---- landmark 4 <-200,0>
est_robot  =   6.2840   21.0462    0.0259
Zobs       = 207.2696    3.1272   <--- the actual measurement of landmark 4
               4.0000
Zpre       = 207.3548   -3.0658   <--- this is the problem (it should be 3.0658)
               4.0000
real_robot =   6.2069   20.9946    0.1188
est_robot  =   6.0210   20.8238   -0.5621

And this is how I compute the angles. For the actual measurements:

    Zobs = [ sqrt((map(i,1) - real_robot(1))^2 + (map(i,2) - real_robot(2))^2);
             atan2(map(i,2) - real_robot(2), map(i,1) - real_robot(1)) - real_robot(3);
             i];
    % add Gaussian noise
    Zobs(1) = Zobs(1) + sigma_r*randn();
    Zobs(2) = Zobs(2) + sigma_phi*randn();
    Zobs(3) = i;
    Zobs(2) = mod(Zobs(2), 2*pi);
    if (Zobs(2) > pi)          % was positive
        Zobs(2) = Zobs(2) - 2*pi;
    elseif (Zobs(2) <= -pi)    % was negative
        Zobs(2) = Zobs(2) + 2*pi;
    end

For the predicted measurements:

    q = (map(i,1) - est_robot(1))^2 + (map(i, 2) - est_robot(2))^2;
    Zpre = [ sqrt(q);
             atan2(map(i,2) - est_robot(2), map(i,1) - est_robot(1)) - est_robot(3);
             i];
    if (Zpre(2) > pi)          % was positive
        Zpre(2) = Zpre(2) - 2*pi;
    elseif (Zpre(2) <= -pi)    % was negative
        Zpre(2) = Zpre(2) + 2*pi;
    end
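A note on the plus/minus pi behaviour shown above: a common convention is to wrap the bearing innovation, i.e. the difference Zobs(2) - Zpre(2), into (-pi, pi], rather than wrapping each bearing separately, so that values like 3.13 and -3.07 rad are treated as nearly equal. A rough Python illustration of that idea, not the book's code:

    import math

    def wrap_to_pi(angle):
        # map any angle into (-pi, pi]
        return math.atan2(math.sin(angle), math.cos(angle))

    z_obs_bearing = 3.1272     # numbers taken from the landmark-4 trace above
    z_pre_bearing = -3.0658
    innovation = wrap_to_pi(z_obs_bearing - z_pre_bearing)
    print(innovation)          # about -0.09 rad, instead of a jump of ~6.19 rad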
Firstly I'm unsure whether this question belongs here or on another SE site (but I'll wing it for now). I've recently been given the job of connecting up a 'smart camera' to a setup where a robotic arm will pick and place objects from point A to point B. The real application for the camera is to check if the objects are out of alignment to their supposed positions. However I am curious to see if there is any way I can calculate the distance of an object given that I already know the objects actual size. Naturally the camera will see the object as bigger when closer and smaller when farther away but how can I turn this information into depth/distance from the camera? I have not yet started using the camera. For now it is just an idea. I will assume that I can calculate what percentage of the view frame is taken up by the object. For example if I have an object of uniform shape, I know that from dist1 it takes up 75% of the view frame and from dist2 it takes up 45% of the view frame. Should this prove to be possible I imagine that it could have a number of different applications. /Anyway any feedback is appreciated. Thanks! ( :
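On the distance-from-size idea: under a simple pinhole camera model the apparent linear size of an object is inversely proportional to its distance, so one calibration measurement is enough. A small sketch with made-up names; it assumes the percentage refers to the frame's linear dimension (not its area) and that the object stays roughly parallel to the image plane:

    def distance_from_width(f_px, real_width_m, width_px):
        # pinhole model: width_px = f_px * real_width / distance
        return f_px * real_width_m / width_px

    def distance_from_reference(ref_distance, ref_fraction, new_fraction):
        # no focal length needed if you have one reference measurement
        return ref_distance * ref_fraction / new_fraction

    # e.g. object filled 75% of the frame width at 1.0 m; now it fills 45%
    print(distance_from_reference(1.0, 0.75, 0.45))   # ~1.67 m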
I am very new two robotics, however I have a working stereo algorithm which I want to combine with a SLAM algorithm. I am developing this system for an other application but I decided integrating it on a robot first and testing it might be a good way to get used to the system and test its behaviour in a realistic environment. (rather than testing it in some kind of software simulator only) However, I want the system to be autonomous and running on-board of the rover. The system I am talking about will consist of: a stereo camera a rover with wheels with one motor each possibly some kind of sensor that "measures" the movement, e.g. how much the wheels turned maybe some distance sensor Appart from this it's only software the stereo software is already developed, the SLAM algorithm not. Therefore it is currently impossible to say how much RAM it needs. I am currently running the stereo-vision only on an i7 in approx. 1s. Now my question: as mentioned I have no idea about robotics, and also my electronics knowledge is limited, so I have no idea what I need for this robot when it comes to the processor and the electronics. I read some stuff about the Raspberry Pi and Arduino boards but I have no idea what to make from this. I am afraid that a Arduino will not be able to handle the computational load of the stereo vision and the SLAM algorithm but I read that Raspberry Pis are not the first choice when interfacing with sensors is needed (in this case my stereo cameras). Also I found the Leika kit which is a robotics kit for the Raspberry Pi. Maybe this would be a good option for me? Maybe an entirely different system would be even more advisable? Possibly someone else build an equally complex system before and can give me some advise form his/her experience?
I am using the L298N motor driver to drive two HAD1 linear actuators (12V each and a no-load drive current of ~950mA each) Linear Actuator: http://www.alibaba.com/showroom/mini-linear-actuator-had1.html Motor Driver: is a L298N dual-h-bridge motor-driver controller board-module for arduino robot I am also using a current sensor per motor to get feedback of what the motor is doing (only sensors I have available, but I can detect of the motors are moving or stopped). I am using two ACS714 current sensors. The supply voltage for each is 4.5V to 5.5V and Supply Current is 10mA to 13ma: Current Sensor: is an ACS714 current sensor. And Here is the circuit diagram that I made for my actual setup (an Arduino UNO, two current sensors, to linear actuators, and one motor drive): Circuit Diagram: Will this setup work? Will I have enough current/power coming out of the 5V of the arduino to power both the L298N logic and the two ACS714 sensors?
Is it possible to build a quadcopter which can detect obstacles and thereby avoid them in order to reach its destination? If so, how could it avoid the obstacles, and how can the destination be set?
I always wanted to have a CNC to make PCB quickly at home. Finally, I got a 7x7 kit from zentools recently and put it together. I attached a battery powered screw driver to 2nd shaft of the stepper and moved the each axis all the way back and forward before wiring. All 3 axis moves smoothly, I can turn the steppers even by hand. Every piece works smoothly, no mechanical jam. I decided to use GRBL as controller software. Tested the software without the shield or stepper (qv: Testing GRBL in Arduino Board without the steppers) I use Universal Gcode Sender to communicate with GRBL. I got an Arduino CNC Shield for Arduino UNO, put it together, attached to Arduino UNO, re-tested GRBL, it worked. I used Reprep's Stepper wiring article to connect stepper to the driver, wired 1 stepper to the stepper driver (X axis). Powered the shield with 20V 17.5Amp (350W) DC Regulated Power supply. (It was the power adaptor for an old 17" notebook. Notebook died, I kept the adaptor) When the move 5 steps command (G1 X5) was sent, stepper makes a small move in the direction and then makes a grinding noise. (Can be seen on Youtube) I tried switching 1st pair's cables, using another stepper driver (3 drivers), turning the potentiometer to increase the current, but still no luck. I attached 2 photos of the cnc and the controller and controller unit. I tried everything I can think of, any suggestions?
While looking at Mecanum wheels, I noticed that there are two different designs that are popular. One type holds the rollers in between the wheels frame, and the other holds the rollers from the center. Is there a significant advantage to using one over the other?
I am looking for a cheapest possible GPS setup with a centimeter precision without much HW hacking. I am not able to produce my PCB or do any soldering (though I would do that if there is no other way) so a kind of a easy-to-assemble setup would be welcome. I know about the $900 Piksi thing but that is still too expensive for me. It seems like cm precision should be possible for much less - like employing a 50 USD raw GPS sensor with an antenna and ordinary PC with RTKLIB software. I am not sure if it is better to use two GPS sensor setup for RTK (one base station and one for rover) or whether I can get the corrective DGPS data elsewhere (my region is Czech Republic - there seems to be national grid here allowing to stream correction data for reasonable cost). My application will be in a passenger car so I will not be limited with power source - no low power needed although that would be nice. I will be using the position readings within OpenCV - so I need to get the data into C/C++ code. The application is data collection so I can use raw GPS post-processing.
Is integration over a non-constant dt (∆time) possible? Let's say you have a PID loop running at a varying frequency: can the integral part of it still work (assuming you know the dt since the last iteration)? Could I just use a variable dt (∆time) in my calculation and would the PID principle still function correctly?
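For what it's worth, a variable-dt PID update is usually written like the minimal Python sketch below, which is my own illustration rather than any particular library's API: the integral accumulates error*dt and the derivative divides by dt, so a changing loop period is handled naturally as long as it is measured.

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            self.integral += error * dt                    # rectangle-rule integration over a variable dt
            derivative = 0.0
            if self.prev_error is not None and dt > 0.0:
                derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative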
I want to build a small cylindrical arm with a main 360º angular servo on the longitudinal axis, and a secondary variable-speed angular servo on a transversal axis that rotates with the main one. The secondary servo needs to receive data and power through a slip ring across the main servo, since it must be able to rotate freely and must not be bound by wiring. The width of the cylindrical arm must be below 0.4 cm. I've reviewed the market for off-the-shelf servos and there are a few that could fit the bill for the main servo, and I know where to obtain the required slip ring, but it beats me where to obtain the secondary servo, since the space limitations demand that it be really small (< 0.2 cm) and the smallest I've been able to find on the internet are 0.5 cm. Any suggestions are greatly welcome!
I'm trying to build a robot with a differential drive powered by two DC motors. First I implemented a PID controller to control the velocity of each motor independently. I estimated the transfer function of the open-loop system with MATLAB's System Identification Toolbox, by acquiring the velocity of each wheel's encoder as a function of the PWM signal applied by an Arduino microcontroller. All went well and I successfully dimensioned the PID gains for this controller. What I'm trying to accomplish now is to control the exact (angular) position of the DC motor. I thought of cascading a PID controller at the input of the one already implemented. This way, I can give a position to the first controller, which will generate an output reference for the second (velocity) controller, which in turn generates the appropriate PWM signal to drive the DC motor accordingly. Will it work? Is that a good approach? Or should I try to implement a different controller which outputs the PWM signal in response to a position reference signal? Many thanks for your attention and I hope somebody can help me with these doubts.
I am currently working on an exoskeleton. The exoskeleton is going to help kids with cerebral palsy learn to walk 4 years sooner than traditional therapy. Currently we are using 2 Ame 226-3003 with the roboclaw 2x60A motor controller controlled by an Arduino mega. The Ame 226-3003 motors are not powerful enough. In addition the Ame 226-3003 has a worm gear thus the motor cannot be moved when the motor is turned off. Our position feedback system is a gear attached to the shaft of the motor which spins a gear on a potentiometer. The two gears have a 1:1 ratio. In order to better understand the project, please see the video: https://www.youtube.com/watch?v=NL_aCwJSRiE&feature=youtu.be The Ame 226-3003 catalog page: http://www.amequipment.com/wp-content/uploads/2013/02/801-1071-web.pdf We need a new drive system: more powerful than the Ame 226-3003 motor. We do not have an exact torque spec but we believe any drive system that is 70-100% more powerful than the Ame 226 - 3003. We like the rpm range of the Ame 226-3003. The drive system must be able to spin freely when the motor is not in use. We need a way to get position feedback, the potentiometer system we are using seems to work, however it adds to much extra hardware(more stuff to break), (ie) the gear on the potentiometer and the gear on the shaft have to mesh constantly and we have to zero the potentiometer every time we put the leg together so the potentiometer doesn't over spin. * We would prefer to have an optical encoder inside the motor. We need to have the drive system be at a right angle. I need help designing a drive system that will meet the requirements. I think I might have found a motor that will work: The amp flow G43-500 http://www.ampflow.com/standard_motors.htm I like the G43-500 because it can run at 24 v, thus it will take less amps than 12v. Will that motor get the job done? I need to gear this down to around 80rpm. What type of gear box would work best?
When installing a servo or other actuator, I measure the force needed to perform whatever action is needed and find an actuator that can generate more than the necessary force. However, I recently wondered if there's a rule of thumb or guideline for how much overhead is useful, before it becomes a waste. For a (perhaps oversimplified) example, say I have a lever to lift something, and the force needed is 100 Newtons. An actuator that can manage 100 N maximum will have problems with this and stall, with any sort of friction or other imperfections. I would use an actuator that can produce 150 or 200 N - whatever is available and fits the design and budget. After testing, it may become apparent that 200 is overkill, 120 is sluggish, but 150 is good. Other than trial and error, is there a way to measure this, or a rule of approximation? I realize that variables in the mechanics and construction can significantly alter what force might is needed to be considered ideal, but is there a commonly accepted value for simple applications? Something like "If you need x force, install an actuator with x + 20% force."
I want to build a closet with ejectable drawers. On the top should be 4 buttons, each eject opening one of the four drawers of the closet. I am looking for ideas on how to accomplish this. What kind of springs, slider mechanisms perhaps, and other materials to use? Any examples?
For my robotic projects I need some aluminium parts. Currently I am looking for a way to build a chassis including simple gear box. So I need relatively high precision. Which options do I have to machine aluminium without investing in expensive tools? This is what I could think of so far. Design parts in CAD and send them to a third party company for fabrication. The problem with this is that hobby projects almost never need large quantities and piece production can be still expensive. Buy cheap tools to work aluminium by hand. I don't know which tools would fit this task best. Moreover, the results might be inaccurate, which is a problem for designs with moving parts. Find someone with a CNC who let's me machine my parts. This would most likely result in very slow prototyping cycles though. A method that I can do at home with not too expensive tools would be perfect, but I'm looking forward to every solution.
I want to create a robot that will navigate on a desired path! That path can be a straight line or a circular path with a given radius. I will use a 3- or 4-omni-wheel drive platform, and for positioning I am using this research paper, which performs dead reckoning using mouse sensors: Dead-Reckoning using Mouse Sensors. I understand that I will get x, y and θ positions, which are the actual positions of the robot. These can be used to calculate the error and then a PID to compensate for it. But to find the error, I must have the desired position of the robot at that moment! For example, the robot is at (0,0) and it needs to move in a circular path with the equation $$ x^2 + y^2 - 10y = 0 $$ Now I want to calculate the desired position at t = 2 s. How do I do that? If someone has already done similar stuff, please post a link. I am not able to find any resources on the web!
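One way to get the desired position at time t is to note that x^2 + y^2 - 10y = 0 is a circle of radius 5 centred at (0, 5) that passes through (0, 0), and to parametrise it by arc length with a chosen path speed. A small Python sketch; the speed v and the counter-clockwise direction are my assumptions:

    import math

    R, cx, cy = 5.0, 0.0, 5.0     # x^2 + y^2 - 10y = 0  <=>  x^2 + (y - 5)^2 = 25
    v = 0.5                       # desired path speed (units per second), to be chosen

    def desired_pose(t):
        phi = v * t / R                      # angle swept after travelling v*t along the arc
        x_d = cx + R * math.sin(phi)         # equals (0, 0) at t = 0
        y_d = cy - R * math.cos(phi)
        theta_d = phi                        # heading tangent to the circle (CCW)
        return x_d, y_d, theta_d

    print(desired_pose(2.0))                 # reference to compare with the measured (x, y, theta)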
the matlab code is used to detect red colored object, but i want to control a bot to move towards the detected object. just need a simple algorithm or idea, controlling the servo i will be able to do it. %get snapshot data = imread('image.jpg'); % Now to track red objects in real time % we have to subtract the red component % from the grayscale image to extract the red components in the image. diff_im = imsubtract(data(:,:,1), rgb2gray(data)); %Use a median filter to filter out noise diff_im = medfilt2(diff_im, [3 3]); % Convert the resulting grayscale image into a binary image. diff_im = im2bw(diff_im,0.18); % Remove all those pixels less than 300px diff_im = bwareaopen(diff_im,300); % Label all the connected components in the image. bw = bwlabel(diff_im, 8); % Here we do the image blob analysis. % We get a set of properties for each labeled region. stats = regionprops(bw, 'BoundingBox', 'Centroid'); % Display the image imshow(data) hold on %This is a loop to bound the red objects in a rectangular box. for object = 1:length(stats) bb = stats(object).BoundingBox; bc = stats(object).Centroid; rectangle('Position',bb,'EdgeColor','r','LineWidth',2) plot(bc(1),bc(2), '-m+') a=text(bc(1)+15,bc(2), strcat('X: ', num2str(round(bc(1))), ' Y: ', num2str(round(bc(2))))); set(a, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'yellow'); end hold off
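A common next step after the blob analysis above is proportional steering on the centroid's horizontal offset from the image centre. A rough sketch of the idea in Python; the frame width, gains and motor interface are placeholders of mine, not part of the original code:

    FRAME_WIDTH = 640        # pixels, depends on the camera
    KP = 0.5                 # steering gain, to be tuned
    BASE_SPEED = 0.4         # normalised forward speed

    def steer_towards(centroid_x):
        error = centroid_x - FRAME_WIDTH / 2.0        # positive: object is to the right
        turn = KP * error / (FRAME_WIDTH / 2.0)       # normalised turn command
        left_cmd = BASE_SPEED + turn                  # differential mixing; map to servo pulses as needed
        right_cmd = BASE_SPEED - turn
        return left_cmd, right_cmd

If the bounding box grows large enough (object close), stop; otherwise drive forward while steering the centroid back toward the image centre.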
I am able to locate centroids of each blocks, but i am unable to join two blocks with a line segment by avoiding the obstacle as shown in the figure. Please need help how do i achieve this using matlab.
There seems to be consensus here that the BeagleBone Black has 1ms+ latency while toggling gpio pins due to the fact that gpio is handled outside of the cpu. Are the uart/i2c/spi lines equaly slow, or are they significantly faster? I've seen references to people talking to the gpio more directly. Could this decrease uart/i2c/spi latencies as well?
I'm looking for some direction on how to create a device that does the following: imagine you have a yoga mat; I want a device that can roll it up and unroll it without a human intervening in the rolling process. I realize this is a robotics forum, but there doesn't appear to be a section for mechanical engineering, so I'm posting my question here.
I need help choosing a desktop, low-cost, DIY/high-school-grade laser cutter for making base plates for DIY robots, up to about A4 paper size, as in this photo. Ideas, comments and advice are welcome, even if they only partially cover some of the questions. What power is needed to cut acrylic 3 to 5 mm thick? Many sellers are in the 40 to 60 watt range; what can these do? How does the cut thickness depend on cut speed? To what extent can I choose a slower speed to cut a thicker sheet? Does the cut thickness depend on coloured versus clear acrylic? It is a CO2 laser. Some units have options, like an air blower and a honeycomb bottom plate. What are their functions, and which options are useful for this case? Which 2D CAD drawing software is best supported by this range of products? Apart from the main function of cutting flat acrylic plate, some have an additional Z-axis motor to raise/lower the workpiece for engraving/photo/line-and-letter marking on 3D objects. What software is needed to support these 3D operations?
I am currently trying to parametrize the low-level gains of a robotic arm. This arm uses a classical PID for each joint. I am trying to use a method based on computation rather than a trial-and-error/tweaking approach. The method I use considers each joint independently and assumes the system driven by the PID is linear. Hence I infer a transfer function, a characteristic polynomial, poles and this gives me gains $K_p$, $K_i$, and $K_d$ for each joint. Now, computed as I did, these gains depend on the natural angular-frequency. For example: $$ K_p = 3 a w^2 $$ where $a$ is the inertia and $w$ is the natural angular-frequency. Hence my question: how shall I compute $w$, the natural angular-frequency for my system? Is this an enormous computation involving the geometry and other complex characteristics of the robot, and that only a computer can do or are there simple assumptions to be made which can already give a rough result for $w$? I guess this is a complex computation and this is one of the reasons why PID gains are most often found by trial-and-error rather than from computation. Though I am looking for some more details on the subject to help me understand what is possible and what is not. Kind regards, Antoine
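If the $K_p = 3 a w^2$ above comes from placing a triple closed-loop pole at $-\omega$ for a pure-inertia joint model $a\ddot{q} = u$ (which is the usual derivation, reconstructed here as an assumption), then the full set is

$$ a s^3 + K_d s^2 + K_p s + K_i = a\,(s+\omega)^3 \;\Rightarrow\; K_d = 3a\omega,\quad K_p = 3a\omega^2,\quad K_i = a\omega^3 $$

and $\omega$ is a design choice (the desired closed-loop bandwidth) rather than something computed from the robot's geometry: it is typically picked from a desired settling time (very roughly $\omega \approx 6/T_s$ for a triple pole) and kept safely below the actuator bandwidth, the sampling rate and the first structural resonance.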
I have decided to pursue a career in automation and robotic. At the moment, I am being torn between Mechanical and Electrical Engineering. I know that both of them relate to my choices of career, and at the moment, I think that I like them equally. I hope you guys can help me solve my dilemma by using your insights/experiences to assist me with the following questions: 1/ From your experiences and opinions, which of the two engineering fields is generally more crucial and challenging, especially in an automation/robotics project? 2/ Which will see an increase in demand and importance in the near future? Which of them might become outdated/obsolete or at least develop at a slower rate compare to the other?(I have a feeling that EE has a slight edge over this matter; however, I am not so sure) 3/ Which of the fields is more versatile? Which is more physical demanding (I am actually quite frail) 4/ Which is generally easier to self-study? Robotics is obviously an incredibly broad and complex field and I have prepared to step outside of my comfort zone and do lots of studying by myself to achieve my goals and passion. I could probably come up with a few more questions; however, I am sure that you guys got the gist of my puzzle. Thank you very much and I apologize if there is any grammatical error.
I'm going through the textbook Robot Modeling and Control, learning about the DH convention and working through some examples. I have an issue with the following example. Given below is an image of the problem, and the link parameter table which I tried filling out myself. I got the same answers, except I believe there should be a parameter d1 representing the link offset between frames 1 and 2. This would be analogous to the d4 parameter. If anyone could explain why I might be wrong, or confirm that I have it right, that would be great. I hate it when it's me against the textbook lol. Cheers.
I've seen in a lot of places methods for tuning a PID controller. Most of them say that one should apply a step input to the system, and based on that response you can tune the PID parameters following some rule of thumb. But what about a system that has a pole at the origin? In other words, the step response of such a system will (theoretically) be an endlessly increasing ramp. An example: let's say we have a spinning wheel (fixed at its center) and all we can control is the amount of torque applied to make it spin. We can read its position (angle) and we want to design a PID controller to set its position (more or less like a stepper motor). How can that be done? Note that a step input in this case is a constant torque, and this will make the wheel spin faster and faster. How should one proceed?
I've been working on my two-wheeled mobile robot, trying to perfect my obstacle avoidance algorithm, which is the Artificial Potential Field method. I also use an Arduino Uno kit. The basic concept of the potential field approach is to compute an artificial potential field in which the robot is attracted to the target and repulsed from the obstacles. The artificial potential field is used because of its computational simplicity: the mobile robot applies a force generated by the artificial potential field as the control input to its drive system. In its computations the Artificial Potential Field method depends on the distance between the robot and the goal (target) and the distances between the robot and the obstacles affecting it (which can easily be obtained from ultrasonic sensors). I applied the Artificial Potential Field method in a MATLAB simulation environment and it worked successfully. All I need in the simulation is the current position of the mobile robot and the position of the goal as x, y coordinates (to compute the distance between robot and goal), plus the obstacle positions. The output of the Artificial Potential Field method is the desired angle to avoid the obstacles and reach the goal: the method gives the robot the angle that points to the goal, the robot heads toward that angle, and if the robot faces an obstacle on its way (detected from the sensor readings) the Artificial Potential Field updates the angle to avoid the obstacle and then gives the robot the goal-pointing angle again, and so on. The question is: how could I apply the Artificial Potential Field method in the real world? What do I need? Is it easy to do, or is it impossible? I have a Rover 5 with two normal DC motors and two wheel encoders (incremental rotary encoders). Any help or suggestion on the topic will be highly appreciated. Edit: Based on the response from Shahbaz. The case is very simple, but first there are some constraints that I cannot overstep. One of them is that the real world should be exactly like the simulation: for example, in the simulation I assumed that the robot starts at (0,0) in x, y coordinates and I set the goal point at, say, (10,20), feed this point into the Artificial Potential Field method, and then compute the distance between robot and goal (so I don't need any technique to determine the position of the goal), and I don't know whether I can apply that. The second constraint is that I should use the wheel encoders to determine the current position of the mobile robot and its orientation using a calculation formula (something like this here), even if that will be inaccurate. I have a Rover 5 with two normal DC motors and two wheel encoders (incremental rotary encoders); each encoder has four wires which I don't know how to deal with yet, nor how to translate the encoder pulses or work out the x, y position of the robot based on the shaft encoder data. I am still searching for …
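For reference, the real-world version of the computation is essentially the same as in simulation: feed in the robot pose from odometry, the fixed goal coordinates, and obstacle positions derived from the ultrasonic ranges. A rough Python-style sketch of the force-to-heading step, with gains and names that are mine:

    import math

    K_ATT = 1.0       # attractive gain (tune)
    K_REP = 50.0      # repulsive gain (tune)
    D0 = 0.5          # obstacle influence distance in metres (tune)

    def desired_heading(robot_xy, goal_xy, obstacles):
        # obstacles: list of (range, world_bearing) pairs built from the ultrasonic readings,
        # where world_bearing = robot_heading + sensor_mounting_angle
        fx = K_ATT * (goal_xy[0] - robot_xy[0])
        fy = K_ATT * (goal_xy[1] - robot_xy[1])
        for r, bearing in obstacles:
            if 0.0 < r < D0:
                mag = K_REP * (1.0 / r - 1.0 / D0) / (r * r)   # grows quickly as r -> 0
                fx -= mag * math.cos(bearing)                  # push away from the obstacle
                fy -= mag * math.sin(bearing)
        return math.atan2(fy, fx)        # angle handed to the heading/wheel-speed controller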
I've been going through Syskit tutorials at rock-robotics.org. In the tutorials e.g. First composition, there are two different components declared with: add Controldev::JoystickTask, :as => "cmd" add RockTutorial::RockTutorialControl, :as => "rock" I was wondering how could I add an additional RockTutorialControl into the composition, so that the instantiation would then create two separate instances of the same component? I've tried something like add RockTutorial::RockTutorialControl, :as => "foo" but this apparently isn't the way to go. syskit instanciate command shows only one instance of RockTutorialControl, but gives two roles to it (rock and foo). What is the meaning of "role" in this context? I've noticed that the tutorial explains how to make multiple instances of the same component when we're declaring our components as Devices. But how to do this with components that should not be concerned as devices? BR, Mathias EDIT: This was my first question to StackExchange, and I don't know what's the policy for adding additional information to the original question, but here we go: It seems that both the deployment and configuration need to be different when there are two instances of the same component. I did a small scale testing with two components: using_task_library 'foobar_user' using_task_library 'foobar_proxy' module FooModule class FooControl < Syskit::Composition add FoobarUser::Task, :as => "producer" add FoobarProxy::Task, :as => "proxy" add FoobarUser::Task, :as => "consumer" producer_child.connect_to proxy_child proxy_child.connect_to consumer_child end end where FoobarUser::Task has an input & output port of /std/string. FoobarProxy::Task has corresponding i&o ports. FoobarUser::Task has also two configurations called 'default' and 'other'. It also has two deployments 'foo_depl' and 'bar_depl'. In order to create a "pipeline" where data flows producer ==> proxy ==> consumer, I made define line: define 'conf_and_depl', FooModule::FooControl.use('producer' => FoobarUser::Task.use_conf('other').prefer_deployed_tasks(/foo_depl/), 'consumer' => FoobarUser::Task.use_conf('default').prefer_deployed_tasks(/bar_depl/)) and then instanciated the network with syskit instanciate scripts/03-nwtest.rb conf_and_depl_def! The component instanciation failed if either use_conf or prefer_deployed_tasks clause was left out. In both cases the produced error was "cannot deploy the following tasks... ...multiple possible deployments, choose one with #prefer_deployed_tasks".
I am willing to use a universal robot arm (UR10) in a path following mode. i.e. I have a desired trajectory for the robot's effector and I would like the effector to follow it as close as possible. The specs here give a repeatability of +-0.1mm. This is not written but I guess this is the static precision (after the robot had enough time to converge to the position). Now what about the dynamic precision (i.e. max position error while performing the desired trajectory)? Does anyone know more than me on this matter? Kind regards, Antoine.
I am trying to control a servo motor's operation via torque control by interfacing a torque sensor to an AVR, which will continuously monitor the torque value from the sensor and control the torque according to a given set point. Is it possible to make such a setup? If yes, how? Thanks.
I need to pop a needle-like object (toothpick, matchstick, etc.) out of a hole in a surface and push it back in automatically. I need to make an array of such needles in which each needle's position can be controlled individually. The objects aren't supposed to oscillate continuously; instead they are to be locked in one of two positions, either above the surface or inside it. I am trying to find a mechanism to achieve this. It could easily be done with a simple DC servo motor, but the problem is that I have to do this in a very limited space: about 6 such objects in a base area of 3 cm x 3 cm. Moreover, the power source would be DC +5 V. So far I have thought of creating small electromagnets with springs, but I'm still not sure about it. Any inputs will be appreciated.
I am designing a robot in the real world and I want to plot everything in X, Y (Cartesian) coordinates. I just want to use the wheel encoders to determine the current position of the mobile robot and its orientation, using a specific calculation formula (like this: http://rossum.sourceforge.net/papers/DiffSteer/ ), even if that leads to inaccurate calculations. I found the formula below to compute x, y coordinates from encoder data, but I am still confused about some parts of it. I have a Rover 5 chassis from Dagu with two normal DC motors and two wheel encoders (incremental rotary encoders), and I would like to know how to translate the encoder pulses, i.e. how to work out the x, y position of the robot based on the shaft encoder data. I deduced some of the values from the Rover 5 chassis:

cm = conversion factor that translates encoder pulses into linear wheel displacement
Dn = nominal wheel diameter (in mm): about 20 cm
Ce = encoder resolution (in pulses per revolution): 1000 state changes per 3 wheel rotations
n = gear ratio of the reduction gear between the motor (where the encoder is attached) and the drive wheel: gearbox ratio 86.8:1

In the Rover 5 chassis each encoder has 4 small wires with female headers: RED is +5V, BLACK is 0V (ground), WHITE is signal A, YELLOW is signal B. The important wires in each encoder are signal A and signal B, so how do I get the values of NL and NR in the formula above from signal A and signal B? Is the value of NL taken directly from wire signal A or signal B? The same question for NR. Thanks a lot
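Since the formula image did not come through, here are the standard differential-drive dead-reckoning relations in the notation above, reconstructed as I understand them from Borenstein-style derivations:

$$ c_m = \frac{\pi D_n}{n\,C_e}, \qquad \Delta U_{L/R} = c_m\,N_{L/R} $$

$$ \Delta s = \frac{\Delta U_R + \Delta U_L}{2}, \qquad \Delta\theta = \frac{\Delta U_R - \Delta U_L}{b} $$

$$ x' = x + \Delta s \cos\theta, \qquad y' = y + \Delta s \sin\theta, \qquad \theta' = \theta + \Delta\theta $$

where $b$ is the wheelbase; some versions use $\theta + \Delta\theta/2$ inside the cosine/sine as a midpoint correction. Note that $N_L$ and $N_R$ are not the raw A or B signals themselves: they are the signed counts accumulated by decoding the A/B quadrature edges, where the relative phase of A and B gives the direction of rotation.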
I wonder if this would be a competitive line-following robot compared with one made with the traditional approach of a microcontroller and infrared sensors. I suppose the Raspberry Pi could perform edge detection to tell the course of the line far ahead, much more than an infrared sensor can, but how fast can the Raspberry Pi do this processing? It should be a relatively simple process in terms of computational requirements: edge detection in a high-contrast arena. Probably the bigger issue would be getting the relative position of the robot with respect to the line; maybe a combination of the camera with some infrared sensors would work better. And what about the size? The robot will be significantly bigger when a camera and a Raspberry Pi are used for control.
So far I have done EKF Localization (known and unknown correspondences) and EKF SLAM for only known correspondences that are stated in Probabilistic Robotics. Now I moved to EKF SLAM with unknown correspondences. In the algorithm in page 322, 16.     $\Psi_{k} = H^{k} \bar{\Sigma}[H^{k}]^{T} + Q$ 17.     $\pi_{k} = (z^{i} - \hat{z}^{k})^{T} \Psi^{-1}_{k}(z^{i} - \hat{z}^{k})$ 18.     $endfor$ 19.     $\pi_{N_{t+1}} = \alpha$ 20.     $j(i) = \underset{k}{argmin} \ \ \pi_{k}$ 21.     $N_{t} = max\{N_{t}, j(i)\}$ I don't understand the line 19. In the book page 323, The authors state Line 19 sets the threshold for the creation of a new landmark: A new landmark is created if the Mahalanobis distance to all existing landmarks in the map exceeds the value $\alpha$. The ML correspondence is then selected in line 20. what is $\alpha$ in line 19 and how is it computed? Also, what is the Mahalanobis distance? I did research about Mahalanobis distance but still I can't understand its role in EKF SLAM. Edit: I found another book in my university's library Robotic Navigation and Mapping with Radar The authors state The Mahalanobis distance measure in SLAM is define as $d^{2}_{M}(z^{j}_{k}, \hat{z}^{i}_{k})$, which provides a measure on the spatial difference between measurement $z^{j}_{k}$ and predicted feature measurement $\hat{z}^{i}_{k}$, given by $$ d^{2}_{M}(z^{j}_{k}, \hat{z}^{i}_{k}) = (z^{j}_{k} - \hat{z}^{i}_{k})^{T} S^{-1}_{k}(z^{j}_{k}, \hat{z}^{i}_{k}) $$ This value has to be calculated for all possible $(z^{j}_{k}, \hat{z}^{i}_{k})$ combinations, for which $$ d_{M}(z^{j}_{k},\hat{z}^{i}_{k}) \leq \alpha $$ Often referred to as a validation gate. Leave me to the same question what is $\alpha$?
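In case a concrete example helps: the Mahalanobis distance is the innovation $z - \hat{z}$ weighted by the inverse of its covariance, and $\alpha$ is simply a gating threshold on that (squared) distance; it is not computed inside the filter but chosen by the designer, commonly as a chi-square quantile matching the innovation's dimension (2 for a range-bearing measurement). A small Python illustration of the gate, my own sketch rather than the book's:

    import numpy as np
    from scipy.stats import chi2

    def mahalanobis_sq(z, z_hat, S):
        nu = z - z_hat                               # innovation
        return float(nu.T @ np.linalg.inv(S) @ nu)

    alpha = chi2.ppf(0.95, df=2)                     # ~5.99, a 95% gate for a 2-D innovation

    z     = np.array([10.1, 0.52])                   # observed (range, bearing)
    z_hat = np.array([10.0, 0.50])                   # predicted for some existing landmark
    S     = np.array([[0.04, 0.0], [0.0, 0.01]])     # innovation covariance (Psi_k in the book)
    d2 = mahalanobis_sq(z, z_hat, S)
    new_landmark = d2 > alpha                        # True only if every existing landmark fails the gate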
Regarding my project work, I have to write an algorithm for mobile robot path planning. For that I have chosen a genetic algorithm. Is it a good choice for mobile robot path planning? If it is, where can I start, and where can I get some guidelines?
I am trying to make a line follower robot and I need help regarding the type of dc motor to use. So we have a single shaft BO Motor and a double shaft BO Motor. Can anyone help me understand what is the difference between the two? Here's the link for Single Shaft BO Motor: http://www.evelta.com/industrial-control/motors-and-accessories/100-rpm-l-type-single-shaft-bo-motor Double Shaft BO Motor: http://www.evelta.com/industrial-control/motors-and-accessories/100-rpm-l-type-double-shaft-bo-motor
Simply, I had Rover 5 with 2 DC motors and 2 quadrature encoders, I just want to use encoders to measure the distance of travelling for each wheel. To start with, I just want to determine the total counts per revolution. I read the article about quadratic encoder from this broken link. In Rover 5, each encoder has four wires: red (5V or 3.3V), black(Ground), yellow (Signal 1) and white (Signal 2). I connected each wire in its right place on Arduino Uno board, using the circuit: rotary encoder ChannelA attached to pin 2 rotary encoder ChannelB attached to pin 3 rotary encoder 5V attached to 5V rotary encoder ground attached to ground For one encoder, I test the code below to determine the total counts or ticks per revolution, the first program by using loop and second by using an interrupt. Unfortunately while I run each program separately, rotating the wheel 360 degree by hand, the outputs of these two programs was just "gibberish" and I don't know where is the problem . Could anyone help? Arduino programs posted below. First program: // Constants const int ChanAPin = 2; // pin for encoder ChannelA const int ChanBPin = 3; // pin for encoder ChannelB // Variables int encoderCounter = -1; // counter for the number of state changes int ChanAState = 0; // current state of ChanA int ChanBState = 0; // current state of ChanB int lastChanAState = 0; // previous state of ChanA int lastChanBState = 0; // previous state of ChanB void setup() { // initialize the encoder pins as inputs: pinMode(ChanAPin, INPUT); pinMode(ChanBPin, INPUT); // Set the pullup resistors digitalWrite(ChanAPin, HIGH); digitalWrite(ChanBPin, HIGH); // initialize serial communication: Serial.begin(19200); Serial.println("Rotary Encoder Counter"); } void loop() { // read the encoder input pins: ChanAState = digitalRead(ChanAPin); ChanBState = digitalRead(ChanBPin); // compare the both channel states to previous states if (ChanAState != lastChanAState || ChanBState != lastChanBState) { // if the state has changed, increment the counter encoderCounter++; Serial.print("Channel A State = "); Serial.println(ChanAState); Serial.print("Channel B State = "); Serial.println(ChanBState); Serial.print("State Changes = "); Serial.println(encoderCounter, DEC); // save the current state as the last state, //for next time through the loop lastChanAState = ChanAState; lastChanBState = ChanBState; } } The second program (with interrupt) static long s1_counter=0; static long s2_counter=0; void setup() { Serial.begin(115200); attachInterrupt(0, write_s1, CHANGE); /* attach interrupt to pin 2*/ attachInterrupt(1, write_s2, CHANGE); /* attach interrupt to pin 3*/ Serial.println("Begin test"); } void loop() { } void write_s1() { s1_counter++; Serial.print("S1 change:"); Serial.println(s1_counter); } void write_s2() { s2_counter++; Serial.print("S2 change:"); Serial.println(s2_counter); }
I am migrating from a differential drive design to a skid steering design for my robot, and I want to know how easy would it be to use the NavStack with skid steering. Would there be any problems in terms of localization and things like that? If I let two wheels on the same side of my robot (two on left side and two on the right side) maintain same velocity and acceleration, would the unicycle model of a differential drive robot still apply for skid steering?
I'm trying to understand the core differences between the two topics. Is one simply a newer term? Connotations of automobile vs automation? Something with a screen vs without? I've only ever heard the term computer vision (tagged).
I need to buy a DIY/high-school-grade laser cutter/engraver. How much laser power is needed for cutting and decorative engraving of wood and acrylic (3 to 6 mm thick)? What parameters do I need to consider when selecting a suitable machine?
While reading and collecting information about rotary encoders, I ran into trouble with the meaning of some expressions used for encoders, which left me confused. These expressions or words are:

- Counts per revolution (rotation)
- Pulses per revolution
- Ticks per revolution
- Transitions per revolution
- Number of transitions
- Number of state changes

I thought a transition is the same as a state change, which means a change from high to low or low to high, but what about the others? What is the difference among them (count, tick, pulse, transition, etc.), and what is the relationship between transitions and pulses? Could anyone clarify that, please?
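A small worked example of how these quantities usually relate; terminology varies between vendors, so treat this as the common convention rather than a standard:

    PPR = 1000                      # pulses per revolution on one channel (what many datasheets quote)
    edges_per_cycle = 4             # each quadrature cycle has 4 edges: A rise, B rise, A fall, B fall
    CPR_x4 = PPR * edges_per_cycle  # 4000 counts/ticks/state changes per revolution with x4 decoding

    # Rover 5 style spec, "1000 state changes per 3 wheel rotations":
    state_changes_per_wheel_rev = 1000 / 3.0   # ~333.3

In other words, "transition" and "state change" almost always mean an edge on either channel, "pulses per revolution" normally means full cycles on a single channel, and "count" or "tick" is informal and can mean either, depending on the author.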
I recently spent some work on my quadcopter firmware. The model is stabilizing its attitude relatively well now. However I noticed, that it is changing its altitude sometimes (maybe pressure changes, wind or turbulence). Now I want to get rid of these altitude drops and found not much literature. My approach is using the accelerometer: Calculates the current g-force of the z-axis if the g-force is > 0.25 g and longer than 25 ms, then I feed the accelerometer term (cm per s²) into the pid the output is sent to the motors The model now reacts when it is falling down with an up-regulation of the motors. However, I am not sure, whether it is smart to feed the current acceleration into the regulator and I currently wonder, whether there is a smarter method to deal with sudden and smaller changes in altitude. Current code: # define HLD_ALTITUDE_ZGBIAS 0.25f # define HLD_ALTITUDE_ZTBIAS 25 const float fScaleF_g2cmss = 100.f * INERT_G_CONST; int_fast16_t iAccZOutput = 0; // Accelerometer // Calc current g-force bool bOK_G; float fAccel_g = Device::get_accel_z_g(m_pHalBoard, bOK_G); // Get the acceleration in g // Small & fast stabilization using the accelerometer static short iLAccSign = 0; if(fabs(fAccel_g) >= HLD_ALTITUDE_ZGBIAS) { if(iLAccSign == 0) { iLAccSign = sign_f(fAccel_g); } // The g-force must act for a minimum time interval before the PID can be used uint_fast32_t iAccZTime = m_pHalBoard->m_pHAL->scheduler->millis() - m_iAccZTimer; if(iAccZTime < HLD_ALTITUDE_ZTBIAS) { return; } // Check whether the direction of acceleration changed suddenly // If so: reset the timer short iCAccSign = sign_f(fAccel_g); if(iCAccSign != iLAccSign) { // Reset the switch if acceleration becomes normal again m_iAccZTimer = m_pHalBoard->m_pHAL->scheduler->millis(); // Reset the PID integrator m_pHalBoard->get_pid(PID_ACC_RATE).reset_I(); // Save last sign iLAccSign = iCAccSign; return; } // Feed the current acceleration into the PID regulator float fAccZ_cmss = sign_f(fAccel_g) * (fabs(fAccel_g) - HLD_ALTITUDE_ZGBIAS) * fScaleF_g2cmss; iAccZOutput = static_cast<int_fast16_t>(constrain_float(m_pHalBoard->get_pid(PID_ACC_RATE).get_pid(-fAccZ_cmss, 1), -250, 250) ); } else { // Reset the switch if acceleration becomes normal again m_iAccZTimer = m_pHalBoard->m_pHAL->scheduler->millis(); // Reset the PID integrator m_pHalBoard->get_pid(PID_ACC_RATE).reset_I(); }
I have started in the programming stage of my project , and my first step is to made and test the odometry of my Rover 5 robot on Arduino Uno by using encoders to determine position and orientation . I wrote this code and I don’t know if that code right or there are some mistakes, because I am novice to Arduino and Robotic field so I need for some suggestions and corrections if there were . thanks a lot Arduino codes posted below. #define encoder1A 0 //signal A of left encoder (white wire) #define encoder1B 1 //signal B of left encoder (yellow wire) #define encoder2A 2 //signal A of right encoder (white wire) #define encoder2B 3 //signal B of right encoder (yellow wire) volatile int encoderLeftPosition = 0; // counts of left encoder volatile int encoderRightPosition = 0; // counts of right encoder float DIAMETER = 61 ; // wheel diameter (in mm) float distanceLeftWheel, distanceRightWheel, Dc, Orientation_change; float ENCODER_RESOLUTION = 333.3; //encoder resolution (in pulses per revolution) where in Rover 5, 1000 state changes per 3 wheel rotations int x = 0; // x initial coordinate of mobile robot int y = 0; // y initial coordinate of mobile robot float Orientation = 0; // The initial orientation of mobile robot float WHEELBASE=183 ; // the wheelbase of the mobile robot in mm float CIRCUMSTANCE =PI * DIAMETER ; void setup() { pinMode(encoder1A, INPUT); digitalWrite(encoder1A, HIGH); // turn on pullup resistor pinMode(encoder1B, INPUT); digitalWrite(encoder1B, HIGH); // turn on pullup resistor pinMode(encoder2A, INPUT); digitalWrite(encoder2A, HIGH); // turn on pullup resistor pinMode(encoder2B, INPUT); digitalWrite(encoder2B, HIGH); // turn on pullup resistor attachInterrupt(0, doEncoder, CHANGE); // encoder pin on interrupt 0 - pin 3 Serial.begin (9600); } void loop() { distanceLeftWheel = CIRCUMSTANCE * (encoderLeftPosition / ENCODER_RESOLUTION); // travel distance for the left and right wheel respectively distanceRightWheel = CIRCUMSTANCE * (encoderRightPosition / ENCODER_RESOLUTION); // which equal to pi * diameter of wheel * (encoder counts / encoder resolution ) Dc=(distanceLeftWheel + distanceRightWheel) /2 ; // incremental linear displacement of the robot's centerpoint C Orientation_change =(distanceRightWheel - distanceLeftWheel)/WHEELBASE; // the robot's incremental change of orientation , where b is the wheelbase of the mobile robot , Orientation = Orientation + Orientation_change ; // The robot's new relative orientation x = x + Dc * cos(Orientation); // the relative position of the centerpoint for mobile robot y = y + Dc * sin(Orientation); } void doEncoder(){ // ---------- For Encoder 1 (Left) ----------- if (digitalRead(encoder1A) == HIGH) { // found a low-to-high on channel A if (digitalRead(encoder1B) == LOW) { // check channel B to see which way // encoder is turning encoderLeftPosition = encoderLeftPosition - 1; // CCW } else { encoderLeftPosition = encoderLeftPosition + 1; // CW } } else // found a high-to-low on channel A { if (digitalRead(encoder1B) == LOW) { // check channel B to see which way // encoder is turning encoderLeftPosition = encoderLeftPosition + 1; // CW } else { encoderLeftPosition = encoderLeftPosition - 1; // CCW } } // ------------ For Encoder 2 (Right)------------- if (digitalRead(encoder2A) == HIGH) { // found a low-to-high on channel A if (digitalRead(encoder2B) == LOW) { // check channel B to see which way encoder is turning encoderRightPosition = encoderRightPosition - 1; // CCW } else { encoderRightPosition = encoderRightPosition + 1; // CW 
} } else // found a high-to-low on channel A { if (digitalRead(encoder2B) == LOW) { // check channel B to see which way encoder is turning encoderRightPosition = encoderRightPosition + 1; // CW } else { encoderRightPosition = encoderRightPosition - 1; // CCW } } }
I want to power my Arduino Uno, and I know I can do that either by connecting it via USB to a PC or with a DC power supply. But I want to connect it to a battery source (kindly see the image below), and I know it's a silly question, but how do I do it? The battery connector is not the regular DC jack but the kind found in RC toys. So how do I power my Arduino with that battery? And also, how do I connect it to a DC power supply adapter to charge it once it is discharged? Please also mention the specifications of the DC power supply adapter to be used for charging this battery.
I'm new to robotics. I would like to know whether 56 output lines can be taken from an Arduino or a Raspberry Pi.
Does anyone know of a robotics developer environment ideal for testing AI programs for drones (e.g. quadrocopters, planes, helicopters, etc.)? I would like something like Microsoft Robotics Developer Studio that includes a virtual environment (such as an outdoor environment with gravity, wind, etc.) to test out flight dynamics. I would like the options to add sensors to the virtual drone, such as gps, altimeter, gyros, etc. that the AI program can then use to steer the drone.
Sebastian Thrun says in his paper on particle filters that no model, however detailed, fails to represent the complexity of even the simplest of robotic environments. What does he mean by this? Can someone please elaborate?
I'm looking for a research paper or series of papers that compare the performance of various simultaneous localization and mapping algorithms for rovers in a variety of real world environments. In particular, i'm looking for computational speed, accuracy (compared to the real world environment) and memory & power efficiency metrics. Is there a journal that regularly publishes experimental performance comparisons?
I'm working on a quadcopter. I'm reading the accelerometer and gyro data from the MPU6050 and using a complementary filter to calculate the roll and pitch values. When the quad is on the floor and the motors are turned on, the roll values are:

-4.88675227698
-5.07656137566
7.57363774442
-3.53006785613
4.44833961261
-2.64380479638
-3.70460025582

It is very messy: after minus five there is plus seven. I would like to filter out these too-high/too-low values programmatically, but I have no idea how to do it. EDIT: At this moment I think the solution is a low-pass filter. I'll let you know whether it is successful or not.
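Since the EDIT mentions a low-pass filter: a first-order exponential low-pass is usually enough for this kind of spiky signal. A minimal sketch, with the smoothing factor picked arbitrarily:

    class LowPass:
        def __init__(self, alpha):
            self.alpha = alpha        # 0 < alpha <= 1; smaller = heavier smoothing, more lag
            self.y = None

        def update(self, x):
            self.y = x if self.y is None else self.y + self.alpha * (x - self.y)
            return self.y

    lp = LowPass(alpha=0.1)
    for roll in [-4.89, -5.08, 7.57, -3.53, 4.45, -2.64, -3.70]:   # rounded values from above
        print(lp.update(roll))

A 3- or 5-sample median filter is another option if the problem is isolated spikes rather than broadband noise.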
What free-of-charge robotics magazines, journals, newsletters, or similar publications are available, either geared toward technical professionals or the general public?
The aim is to guide a bot from source S to goal G while passing through all the checkpoints @ (in any order).

########
#@....G#
##.##@##
#[email protected]#
#@.....#
########

One way to solve it would be to select one checkpoint as the goal from the current state and then guide the bot to it, then select the next checkpoint as the goal and the current checkpoint as the source and guide the bot to its new goal, and eventually guide it to the state G from the last checkpoint. But this technique relies heavily on the order in which the checkpoints are traversed. I would like to know whether a good heuristic can be found to decide which checkpoint to go to next.
I am currently studying the FREAK descriptor and I read the article published by its designers. It states that the aim was to mimic the retinal topology, and one of the advantages that could be gained is the fact that retinal receptive fields overlap, which increases the performance. I thought about it a lot, and the only explanation I was able to come up with is the fact that, looking at this problem from an implementation point of view, a receptive field is the ensemble of an image patch centred around a pixel, plus the standard deviation of the Gaussian filter applied to this patch. The size of the receptive field represents the value of the standard variation. The bigger the size is, the more pixels will be taken into consideration when Gaussian filtering, and so we "mix" more information in a single value. But this guess of mine is very amateurish, I would appreciate it if someone could give an explanation from what goes on in the field of image processing-computer vision-neuroscience.
I'm usig RPI and Servoblaster to control servos. I've set the --step-size to 2 us, but I'd like to decrease it to 1us. I've tried to set the step-size to 1us, but the Servoblaster displays: Invalid step-size specified. I've also tried to set the pulse width in micoseconds like echo 1=1140us > /dev/servolaster. It works, but it's unpredictabe (step size is set to 2us): echo 1=1140us > /dev/servoblaster - motor starts spinnig echo 1=1142us > /dev/servoblaster - motor **smoothly** speeds up echo 1=1144us > /dev/servoblaster - motor's speed has not changed echo 1=1146us > /dev/servoblaster - motor smoothly speeds up (OK, assume that it can be changed by +/- 4) BUT: echo 1=1150us > /dev/servoblaster - motor's speed has not changed - why?? echo 1=1152us > /dev/servoblaster - motor speeds up, but **fastly** echo 1=1156us > /dev/servoblaster - motor **smoothly** speeds up Motor: Turnigy aerodrive 2830-11, ESC: Turnigy Multistar 30A Any idea?
I have a known map of the environment (a 2D occupancy grid map). I am trying to find out whether anything has changed in the environment using a 2D laser while navigating, by using the maximum likelihood of the laser scan against the known map. My question is how to know which measurements correspond to changes. My environment is not static and has some changes that differ from the known map. I am trying to find which objects newly came into the environment, or moved out of it, using the laser.
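One common way to classify individual measurements is to compare each beam endpoint, projected into map coordinates with the estimated pose, against the known occupancy grid or a distance transform of it: endpoints that land far from any mapped obstacle suggest a new object, while mapped cells that beams now pass through freely suggest a removed one. A rough sketch of the first half in Python, using scipy's distance transform; the names and the threshold are my own:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def flag_new_points(occupancy, endpoint_cells, threshold_cells=3):
        # occupancy: 2-D bool array, True = occupied in the known map
        # endpoint_cells: N x 2 integer array of beam endpoints in map cell coordinates
        dist_to_obstacle = distance_transform_edt(~occupancy)   # distance (in cells) to nearest mapped obstacle
        d = dist_to_obstacle[endpoint_cells[:, 0], endpoint_cells[:, 1]]
        return d > threshold_cells                              # True = not explained by the known map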
I have a robotic simulator that enables a 6-wheel rover to perform a spot turn. To prepare the rover for a spot turn, I have to arrange/align the wheels in the following fashion:

<front side>
 //    \\
 ||    ||
 \\    //
<rear side>

What is the technical name for this? Circular wheel arrangement? Circular alignment?
I've seen several examples of SLAM algorithms (EKF SLAM, Graph SLAM, SEIF SLAM) written in terms of the velocity motion model. I have yet to see an example of any SLAM algorithm utilizing the odometry motion model. I wonder if there is an inherent advantage to using the velocity motion model over the odometry model for this problem. Does it have something to do with the fact that odometry sensor information comes after the motion has already taken place, whereas velocity control commands are executed before motion?
I'm trying to understand the role of landmarks in SLAM algorithms. I've glanced over a few books concerning landmark based SLAM algorithms and I've come up with a rudimentary understanding which I believe is flawed. How I think SLAM works: As I understand it, landmarks are a set of points in a map whose locations are known a priori. Furthermore, the number of landmarks in a map is fixed. The number of landmarks detected at any one time may change, but the number of landmarks that exist in the map remains static at all times. My understanding is that SLAM algorithms exploit the fact that these points are both uniquely identifiable and known a priori. That is, when a robot senses a landmark, it knows exactly which landmark it detected and thus knows the exact location of that landmark. Thus, a slam algorithm uses the (noisy) distance to the detected landmarks (with known location) to estimate its position and map. Why I think I'm wrong In my naive understanding, the usefulness of SLAM would be limited to controlled environments (i.e. with known landmarkds) and completely useless in unknown environments with no a priori known landmarks. I would presume that some sort of feature detection algorithm would have to dynamically add landmarks as they were detected. However, this fundamentally changes the assumption that the number of given landmarks must be static at all times. I know I'm wrong in my understanding of feature based SLAM, but I'm not sure which of my assumptions is wrong: Do feature based SLAM algorithms assume a static number of landmarks? Do the landmarks need to be known a priori? Can they be detected dynamically? And if so, does this fundamentally change the algorithm itself? Are there special kinds of SLAM algorithms to deal with unknown environments with an unknown total number of landmarks in it?
How are several channels multiplexed onto a single physical wire? If two channels are transmitting the same value in the same frame, won't there be an overlap of the pulses?
I'm a newbie in the RC field. I am planning to build my first tricopter. Can anyone help me find the power rating needed to select the motors for a tricopter? I am at the beginning of construction. Arm length of the frame: 50 cm each. I need a total thrust of about 2 kg -- nearly 666 g per motor.
I have a big misconception about yaw versus attitude. Don't both represent "how far the quad has rotated relative to the earth"? Also, could you explain how to calculate them from an IMU (gyro + accelerometer + magnetometer)?
It's been a while since I started reading about INS, orientation and so on for quadrotors. I keep running into the following terms: AHRS, attitude, yaw/pitch/roll, MARG sensors. I know, for example, how to calculate yaw, pitch and roll, but how does that relate to attitude? What is attitude anyway, and how is it calculated? Is an AHRS ("attitude and heading reference system") formed from yaw, pitch and roll? MARG (Magnetic, Angular Rate, and Gravity) -- how does it relate to the other terms? And what about INS (Inertial Navigation Systems)? My questions are about these concepts: what they mean, how they relate to each other, how they are calculated, and which sensors suit which purpose.
I read somewhere that in the case of photoshop for example, the size refers to the number of pixels an image contains, but resolution involves the pixel's size, I don't know whether this definition goes for all the other fields. In computer vision, what's the difference between image size and image resolution?
I am new to embedded development, starting with AVR programming in C. I am working on Mac OS 10.9.4; so far I am using avrdude and Xcode as the IDE. It works very well, and for now I am testing my code in Proteus. But now I want to burn my .hex to an AVR ATmega16 board. I have a USBasp, which I am able to connect, and it lights up the board. After searching the internet, I think the Mac is not detecting my board: I have checked the /dev directory, but no USB device shows up. So I am not sure what to do next -- how do I make the Mac detect my board and burn my .hex onto it? I've found this: http://www.fischl.de/usbasp/ but I have no idea how to use it, or whether it is required at all. So the question is: how do I make the Mac detect the AVR board using the USBasp and burn a program to it? FYI: I've installed CrossPack on the Mac.
I am trying to build a servo-controlled water valve. Max pressure 150 psi , valve size 1/2". Can anyone recommend a suitable 1/4-turn valve, either ceramic, ball valve, or anything else that is easy to turn, even under pressure? It must require very little torque to turn, so a standard servo can rotate it with a small lever attached.
I have just built my first quadcopter, and have run into a bit of a snag. When I plug in the power, I only get one beep and a red blink from the flight control board, and nothing else happens. When I turn on the controller, however, a red light turns on on the receiver. Otherwise, nothing else happens. From what I can tell, I have plugged everything in correctly, and I am not sure how to proceed. flight control board Flight Control Board manual (PDF) ESCs I do not have a connection from the power distribution board to the flight control board, because I am assuming that it gets its power from the ESCs. Here is the video I used to figure out how to build a quad. (Side note about the video: I have not cut the ESCs' cords as done in the guide, which seemed like a silly step; I have also seen other builds where they were not cut.) I have not updated the firmware on the board; I installed it out of the box. Here is the board's user manual (PDF)
How do I calculate attitude from an IMU? For example, what are the mathematical equations?
I am trying to create a simulation of a robot with Ackerman steering (the same as a car). For now I'm assuming that it's actually a 3-wheeled robot, with two wheels at the back, and one steering wheel at the front: Knowing the wheel velocity, and the steering angle a, I need to be able to update the robot's current position and velocity with the new values at time t+1. The obvious way to do this would be to calculate the position of the centre of rotation, where the axles of the wheels would meet, however, this leads to an undefined centre of rotation when a = 0. This means that the model doesn't work for the normal case of the robot just driving in a straight line. Is there some other model of Ackerman steering which works over a reasonable range of a?
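For what it's worth, a common way around the undefined centre of rotation is to write the bicycle-model update in terms of the angular velocity ω = v·tan(a)/L, which is simply zero when a = 0 and so degenerates cleanly to straight-line motion. A minimal sketch under those assumptions (L is the wheelbase; all names are illustrative):

```cpp
#include <cmath>

struct Pose { double x, y, theta; };

// Bicycle-model update: v is the forward speed, a the steering angle,
// L the wheelbase, dt the time step. When a == 0 the yaw rate is zero
// and the update reduces to driving in a straight line.
Pose step(Pose p, double v, double a, double L, double dt)
{
    const double omega = v * std::tan(a) / L;   // yaw rate
    if (std::fabs(omega) < 1e-9) {              // (near-)straight line
        p.x += v * dt * std::cos(p.theta);
        p.y += v * dt * std::sin(p.theta);
    } else {                                    // arc about the centre of rotation
        const double R = v / omega;             // turn radius
        p.x += R * (std::sin(p.theta + omega * dt) - std::sin(p.theta));
        p.y -= R * (std::cos(p.theta + omega * dt) - std::cos(p.theta));
        p.theta += omega * dt;
    }
    return p;
}
```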
I'm trying to develop a system that autonomously navigates a large outdoor space where accuracy is vital (GPS is too inaccurate). There are a number of options, but they have largely been used indoors. Has anyone tried these outdoors, or used anything else? WiFi triangulation, dead reckoning, RFID landmarks
I'm currently a web programmer and I'm very passionate about robotics, and especially about artificial intelligence. I have already written some C++ programs for Microchip and Arduino for little robots, and some Lisp code (for example, labyrinth path search), but I don't think that is really applicable to further projects. I have read a lot about artificial neural networks for creating an artificial mind, but it's very theoretical and I have no idea how to reproduce it in code. Does someone have an idea to help me -- a specific language, or just a C++ library? If you have links, articles, or other tutorials, I'll take them. Thanks a lot!
Simply put, how can I calibrate an IMU unit? I have read some papers about this topic and was wondering if there are any standard methods.
I am having some trouble understanding how to practically use the speed-torque curve of a DC motor. I understand that the gradient of the speed-torque curve is defined by the design of the motor, the exact position of the curve depending on the voltage applied. So if the voltage is changed the speed-torque curve is also changed but remains parallel to the initial curve before the voltage was changed. See figure below. So my intuitive guess is that when using the motor at a given desired operation point (desired speed and desired torque), the corresponding speed-torque curve Cd has a gradient specified in the data sheet of the motor and passes through the operation point. This curve Cd is obtained at a corresponding voltage Vd. See diagram below. So my next guess is that in order to have the motor operate at this desired operation point, you have to set the voltage applied to the motor to Vd, and apply a current Id (computed using the torque and the torque constant). Now from what I read this is not what is done in DC motor controllers. These seem to only drive the motor using current and some sort of PWM magic as is shown in the following diagram by maxon. Anyone knows why voltage is not used in DC motor control and only current is? I do not understand how you can set the speed if you do not modify the voltage? And what is PWM useful for? I have looked for hours over the internet and could not find anything relevant. Thanks, Antoine.
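For intuition only: PWM is how the driver synthesises an adjustable average voltage from a fixed supply, so the controller effectively does set the voltage -- just by chopping it rather than by varying a linear regulator. A rough sketch of that mapping, assuming a fixed bus voltage and ignoring the current-control inner loop:

```cpp
// Map a desired average motor voltage onto a PWM duty cycle.
// v_supply is the fixed bus voltage; the motor's inductance smooths the
// chopped waveform so it behaves (roughly) like V_avg = duty * v_supply.
double dutyForVoltage(double v_desired, double v_supply)
{
    double duty = v_desired / v_supply;
    if (duty < 0.0) duty = 0.0;   // clamp to the achievable range
    if (duty > 1.0) duty = 1.0;
    return duty;                  // e.g. 0.63 -> 63% duty cycle
}
```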
I have a 7 DOF arm that I am controlling with joint velocities computed from the Jacobian in the standard way. For example: $$ {\Large J} = \begin{bmatrix} J_P \\J_O \end{bmatrix} $$ $$ J^{\dagger} = J^T(JJ^T)^{-1} $$ $$ \dot{q}_{trans} = J^{\dagger}_P v_{e_{trans}} $$ $$ \dot{q}_{rot} = J^{\dagger}_O v_{e_{rot}} $$ $$ \dot{q} = \dot{q}_{trans} + \dot{q}_{rot} $$ However, when specifying only translational velocities, the end-end effector also rotates. I realized that I might be able to compute how much the end-effector would rotate from the instantaneous $\dot{q}$, then put this through the Jacobian and subtract out its joint velocities. So I would do this instead of using the passed in $v_{e_{rot}}$: $$ v_{e_{rot}} = R(q) - R(q+\dot{q}_{trans}) $$ Where $R(q)$ computes the end-effector rotation for those joint angles. Is this OK to do, or am I way off base? Is there a simpler way? I am aware that I could also just compute the IK for a point a small distance from the end-effector with no rotation, then pull the joint velocities from the delta joint angles. And that this will be more exact. However, I wanted to go the Jacobian route for now because I think it will fail more gracefully. A side question, how do I compute $R(q) - R(q+\dot{q}_{trans})$ to get global end-effector angular velocity? My attempts at converting a delta rotation matrix to Euler angles yield wrong results. I did some quick tests and implemented the above procedure to achieve pure end-effector rotation while maintaining global position. (This is easier because $T(q) - T(q+\dot{q}_{rot})$ is vector subtraction.) And it did kind of work.
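In case it helps to see the pieces laid out, here is a minimal Eigen sketch of the pseudoinverse split written above, using only the translational rows of the Jacobian when no rotational velocity is commanded. This is just one way to arrange the computation, with the 6x7 Jacobian assumed given (names are illustrative):

```cpp
#include <Eigen/Dense>

// Joint velocities for a purely translational task: use only the position
// rows J_P of the 6x7 Jacobian, so the command never asks for rotation.
Eigen::VectorXd translationalVelocity(const Eigen::Matrix<double, 6, 7>& J,
                                      const Eigen::Vector3d& v_trans)
{
    const Eigen::Matrix<double, 3, 7> Jp = J.topRows<3>();
    // Right pseudoinverse Jp^T (Jp Jp^T)^-1, matching the formula above.
    const Eigen::Matrix<double, 7, 3> Jp_pinv =
        Jp.transpose() * (Jp * Jp.transpose()).inverse();
    return Jp_pinv * v_trans;   // 7x1 joint velocity vector
}
```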
We are working on a project where we want to sound an alarm if somebody is messing around with our Robot (e.g., the Robot is being shaken abruptly or the cameras/LIDARs are blocked). I am using "loud speakers" (4.1 x 3 inch 10 Watts 8 ohm speakers), but they are not loud enough. Are there any small speakers or alarm systems small enough, but loud enough (closed to a car alarm) that you would recommend? Ideally something that I can just plug into the robots computer, or interface with through a microcontroller. Either one would be fine.
I am building a quadcopter using these components: Microcontroller: Tiva C LaunchPad (ARM Cortex-M4, 80 MHz), but running at 40 MHz in my code; MPU-9150 - TI SensorHub; ESC - Hobbywing Skywalker 40A. I use the sample project comp_dcm from TivaWare and use those angles for my PID, which runs at 100 Hz. I tested PID control on 2 motors, but the motors oscillate, as in this video I found on YouTube from another user: Quadcopter Unbalance
What parameters should be considered when choosing a camera for a lane detection system? What should be kept in mind (e.g. picture quality, frame rate, cost)? Which camera will suit my application best?
Using the Adafruit 9DoF module, I need to convert the accelerometer + magnetometer + gyro readings into Euler angles for a motion capture application. Any hints on where to start? I managed to get X, Y, Z when the IMU is facing upward, but when that orientation changes the axes don't behave normally -- that is because I am not using Euler angles. So any hints, or any reference on where to start? The Euler Compass app is an example of what I am trying to get to: pitch, yaw and roll for the IMU module irrespective of how it is oriented.
I am trying to implement a particle filter for a robot in Java. The robot has a range sensor. The world has 6 obstacles -- 3 at the top and 3 at the bottom. I calculate the distance of the robot from each obstacle's center and then do the same for each particle. Then I calculate the difference between the robot's measurements and each particle's. Particles for which the difference from the robot's measured distances is small get a higher probability in resampling. The problem with this approach, as my friend pointed out, is that I am assuming I already know the locations of the obstacles, which makes the whole process useless. How should I approach it instead, in the sense that I don't know the obstacles? How can the particle filter be implemented then? How will the particle filter work if I don't know the obstacles' locations? An example of the process would be a great help. Thanks
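For reference, a minimal sketch of the weight update described above, assuming a Gaussian range-sensor model (the sigma value and the two range vectors are placeholders supplied by the caller):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Weight of one particle: compare the ranges the robot actually measured to
// each obstacle with the ranges that particle *would* measure, and score the
// differences with a Gaussian sensor model.
double particleWeight(const std::vector<double>& robotRanges,
                      const std::vector<double>& particleRanges,
                      double sigma)
{
    double w = 1.0;
    for (std::size_t i = 0; i < robotRanges.size(); ++i) {
        const double d = robotRanges[i] - particleRanges[i];
        w *= std::exp(-(d * d) / (2.0 * sigma * sigma));  // unnormalised Gaussian
    }
    return w;  // normalise over all particles before resampling
}
```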
I saw this maze and tried to apply the Pledge algorithm to it, but I am not able to solve the maze using that algorithm. What am I missing? What am I doing wrong? PLEDGE ALGORITHM: in both cases we don't get to the exit. You can read about these algorithms at: http://en.wikipedia.org/wiki/Maze_solving_algorithm http://www.astrolog.org/labyrnth/algrithm.htm
What are the most basic skills and components needed to create a robot that takes "yes" or "no" inputs from two push buttons, walks down the defined flowchart, and plays the relevant audio file each time it gets an input? A flowchart like this:
question 1
- Yes -> question 2
  - Yes -> question 4
  - No -> question 5
- No -> question 3
  - Yes -> question 6
  - No -> question 7
...
I am trying to build a low-cost and precise outdoor positioning system. I explored NS-RAW with RTKLIB -- this would be doable, but it will probably need either a base station to generate correction data for the rover, or external correction data, which may be a hassle. The action radius with your own base station is quite limited too, and the solution is not really straightforward, since you have to deal with either in-house or streamed correction data. I am wondering whether one could substantially improve the accuracy of an ordinary (uncorrected) GPS+GLONASS device (maybe one found in a common smartphone) with stereo visual odometry. Today's consumer GNSS chips seem to have reasonably stable accuracy in the 5 m range, and the VISO2 library has a translation error of about 3% over a 500 m distance. The idea is to use the visual odometry for "smoothing" the rough GPS track. The question is how this can be done technically, in terms of software. The input would be two tracks: one from the GPS device and the other from the VISO2 library. I think I need a kind of filter that fuses the sensor data to get greater precision.
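One common starting point (before committing to a full EKF) is a constant-gain blend per axis: propagate the estimate with the visual-odometry increments and pull it toward each new GPS fix with a gain set by how much the two sources are trusted. A rough 1-D sketch of that idea; the gain value in the comment is made up for illustration, not a recommendation:

```cpp
// Complementary-filter style fusion for one coordinate (e.g. east or north).
// vo_delta: displacement since the last update from visual odometry (metres)
// gps:      latest absolute GPS coordinate (metres, same frame)
// k:        0..1, how strongly to trust GPS per update (e.g. ~0.02 for a
//           noisy 5 m receiver updated frequently)
double fuse(double estimate, double vo_delta, double gps, double k)
{
    estimate += vo_delta;              // smooth but drifting prediction
    estimate += k * (gps - estimate);  // slow correction toward GPS
    return estimate;
}
```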
I know that there is an extended kalman filter approach to simultaneous localization and mapping. I'm curious if there is a SLAM algorithm that exploits the ensemble kalman filter. A citation would be great, if at all possible.
I am tuning the PID for a quadcopter. The problem I have is that with different base throttle values, it seems I have to use different PID gains for the quadcopter to balance.
I am planning to buy an ESC for my tricopter setup. What is the purpose of programming an ESC? I am on a budget -- is it really necessary to buy a programming card to program the ESC for my model?
I am looking for a 12V Dual Motor Controller that can supply at least 5A per channel for two 12V motors, and that can be used with an arduino. Do you know of any product with those specs? Thanks
I'm building a control system with a Parrot AR 2.0 drone where I have access to thrust controls for up/down (z), left/right (y), forward/backward (x), and turn left and turn right (yaw) through a Ruby library on my computer. The goal of the system is to keep the drone a particular distance from, and parallel to, a wall while moving in the up/down and left/right directions. We have added two sonar distance sensors near the left and right forward props. The main problem I am having is figuring out how to turn the two distance readings into a yaw reading (ψ) that I can feed into the PID and then act on with left/right yaw thrust for correction. Just getting some help with the conversion from the two distances to the yaw angle would be a big help, but any thoughts on the PID are greatly appreciated too, since it is my first time working with one.
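For the geometry part: if the two sonars point at the wall and are separated by a known baseline, the yaw relative to the wall is (for a flat wall and modest angles) roughly the arctangent of the range difference over the baseline. A minimal sketch under those assumptions:

```cpp
#include <cmath>

// Yaw of the drone relative to the wall, from two forward-facing sonars
// mounted a distance `baseline` apart. Zero means parallel to the wall;
// the sign tells you which way to yaw to correct (depends on mounting).
double yawFromSonars(double range_left, double range_right, double baseline)
{
    return std::atan2(range_left - range_right, baseline);  // radians
}
```

The result can then be used directly as the error input to the yaw PID (setpoint 0 for "parallel"), while the average of the two ranges feeds the distance-keeping PID.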
I have an RSL line sensor, which is designed to distinguish black and white lines. It detects a white surface and gives me a digital 1 as output, and 0 for black, but the surface needs to be close to it. Since it uses infra-red sensors, I wanted to use it as a proximity sensor, to tell me if there is a white surface near it. Is that possible? I think the only problem is that I need to increase the range at which it outputs 1: currently it gives 1 only when a white surface is very close to the sensor, and I want a 1 even when the white surface is a bit further away. There is also an adjustable screw to tune something, labelled POT. I am working with an Arduino.
How do I compute angular and linear velocities from quaternions? I am new to this area, and although I have studied the algebra, I am unable to understand how to compute the velocities.
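For reference, one common convention (a unit quaternion $q$ mapping body to world, $\otimes$ the quaternion product, $\omega$ the body-frame angular velocity) gives

$$\dot{q} = \tfrac{1}{2}\, q \otimes \begin{bmatrix} 0 \\ \omega \end{bmatrix}, \qquad \begin{bmatrix} 0 \\ \omega \end{bmatrix} = 2\, q^{-1} \otimes \dot{q},$$

so the angular velocity comes from the quaternion's time derivative. Linear velocity is just the derivative of position; the quaternion only rotates it between frames, e.g. $v_{world} = q \otimes \begin{bmatrix} 0 \\ v_{body} \end{bmatrix} \otimes q^{-1}$. Sign and ordering conventions differ between texts, so treat this as one possible convention rather than the only one.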
I'm currently building a hexapod bot, built around an Arduino Mega board and a USB SSC-32 (from Lynxmotion). Now I want to add a PS3 wireless controller to drive my hexapod; I have done some searching but found nothing really useful. Maybe the ServoShock module, but it seems to work only with the ServoShock Shield, a kind of Arduino board with servo outputs. Can I use the ServoShock module alone? Can I connect it to the Rx/Tx port of the Arduino Mega board? Do you have other solutions for me -- boards with documentation and source code? Thank you all
I am designing a badminton robot, but I am very confused about the mechanisms needed for a badminton robot and the calculations needed for millisecond response times. I am also confused about how to calculate the required forces and the most effective angles for hitting the shuttlecock. Please suggest some ideas or pointers for the construction of a badminton robot.
I have to use a Kinect for an application. However, the final product must be mobile, meaning no computer. Consequently, I thought of using a microcontroller to handle the data from the Kinect -- but is that possible? My job is to measure some points on a body (X, Y, Z axes) and retrieve those coordinates. I don't know if that will be accurate enough.
I have made a RC robot from a wheelchair and I'm planning to attach a snow plow. I'm wondering if there is any mechanism that would be able to lift the plow when reversing. I have only 2 channel transmitter so I can't control the plow's movement through it so I was thinking of some mechanical lift that triggers when reversing. Do you guys know about something I could use for it? Thanks.
I have a big problem trying to stabilize a quadrotor with a PD controller. The model and the program has been written in C++ and the model dynamic has been taken from this source in internet: Well, in my code I wrote the model like in the eq. system ( see eq. 3.30 on page 21): /* Calculate the acceleration about all 6 axis */ body_pos_current_.x_dot_2 = ( thrust_.total / masse_ ) * ( sin( body_ang_current_.theta ) * cos( body_ang_current_.phi ) * cos( body_ang_current_.psi ) + sin( body_ang_current_.psi ) * cos( body_ang_current_.phi ) ); body_pos_current_.y_dot_2 = ( thrust_.total / masse_ ) * ( sin( body_ang_current_.theta ) * sin( body_ang_current_.psi ) * cos( body_ang_current_.phi ) - cos( body_ang_current_.psi ) * sin( body_ang_current_.phi ) * cos( body_ang_current_.psi ) ); body_pos_current_.z_dot_2 = ( thrust_.total / masse_ ) * ( cos( body_ang_current_.theta ) * cos( body_ang_current_.phi ) ) - 9.81; body_ang_current_.phi_dot_2 = ( torque_.phi / Jxx_ ); body_ang_current_.theta_dot_2 = ( torque_.theta / Jyy_ ); body_ang_current_.psi_dot_2 = ( torque_.psi / Jzz_ ); where body_ang_current.<angle> and body_pos_current_.<position> are structures defined in a class to store position, velocities and accelerations of the model given the 4 motor velocities about all 3 axis. $$ \large \cases{ \ddot X = ( \sin{\psi} \sin{\phi} + \cos{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr \ddot Y = (-\cos{\psi} \sin{\phi} + \sin{\psi} \sin{\theta} \cos{\phi}) \frac{U_1}{m} \cr \ddot Z = (-g + (\cos{\theta} \cos{\phi}) \frac{U_1}{m} \cr \dot p = \frac{I_{YY} - I_{ZZ}}{I_{XX}}qr - \frac{J_{TP}}{I_{XX}} q \Omega + \frac{U_2}{I_{XX}} \cr \dot q = \frac{I_{ZZ} - I_{XX}}{I_{YY}}pr - \frac{J_{TP}}{I_{YY}} p \Omega + \frac{U_3}{I_{YY}} \cr \dot r = \frac{I_{XX} - I_{YY}}{I_{ZZ}}pq - \frac{U_4}{I_{ZZ}} } $$ Once I get the accelerations above I m going to integrate them to get velocities and positions as well: /* Get position and velocities from accelerations */ body_pos_current_.x_dot = body_pos_current_.x_dot_2 * real_duration + body_pos_previous_.x_dot; body_pos_current_.y_dot = body_pos_current_.y_dot_2 * real_duration + body_pos_previous_.y_dot; body_pos_current_.z_dot = body_pos_current_.z_dot_2 * real_duration + body_pos_previous_.z_dot; body_ang_current_.phi_dot = body_ang_current_.phi_dot_2 * real_duration + body_ang_previous_.phi_dot; body_ang_current_.theta_dot = body_ang_current_.theta_dot_2 * real_duration + body_ang_previous_.theta_dot; body_ang_current_.psi_dot = body_ang_current_.psi_dot_2 * real_duration + body_ang_previous_.psi_dot; body_pos_current_.x = 0.5 * body_pos_current_.x_dot_2 * pow( real_duration, 2 ) + ( body_pos_previous_.x_dot * real_duration ) + body_pos_previous_.x; body_pos_current_.y = 0.5 * body_pos_current_.y_dot_2 * pow( real_duration, 2 ) + ( body_pos_previous_.y_dot * real_duration ) + body_pos_previous_.y; body_pos_current_.z = 0.5 * body_pos_current_.z_dot_2 * pow( real_duration, 2 ) + ( body_pos_previous_.z_dot * real_duration ) + body_pos_previous_.z; body_ang_current_.phi = 0.5 * body_ang_current_.phi_dot_2 * pow( real_duration, 2 ) + ( body_ang_previous_.phi_dot * real_duration ) + body_ang_previous_.phi; body_ang_current_.theta = 0.5 * body_ang_current_.theta_dot_2 * pow( real_duration, 2 ) + ( body_ang_previous_.theta_dot * real_duration ) + body_ang_previous_.theta; body_ang_current_.psi = 0.5 * body_ang_current_.psi_dot_2 * pow( real_duration, 2 ) + ( body_ang_previous_.psi_dot * real_duration ) + body_ang_previous_.psi; /* Copy the new value into the 
previous one (for the next loop) */ body_pos_previous_.x = body_pos_current_.x; body_pos_previous_.y = body_pos_current_.y; body_pos_previous_.z = body_pos_current_.z; body_pos_previous_.x_dot = body_pos_current_.x_dot; body_pos_previous_.y_dot = body_pos_current_.y_dot; body_pos_previous_.z_dot = body_pos_current_.z_dot; body_ang_previous_.phi = body_ang_current_.phi; body_ang_previous_.theta = body_ang_current_.theta; body_ang_previous_.psi = body_ang_current_.psi; body_ang_previous_.phi_dot = body_ang_current_.phi_dot; body_ang_previous_.theta_dot = body_ang_current_.theta_dot; body_ang_previous_.psi_dot = body_ang_current_.psi_dot; The model seems to work well but, as like reported in many papers, is very unstable and needs some controls. The first approach for me was to create a controller (PD) to keep the height constant without moving the quadcopter, but just putting a value (for example 3 meter) and see how it reacts. Here the small code I tried: /* PD Controller */ double e = ( 3.0 - body_pos_current_.z ); // 3.0 is just a try value!!! thrust_.esum = thrust_.esum + e; thrust_.total = 1.3 * e + 0.2 * real_duration * thrust_.esum; The problem, as you can see here in this video, is that the copter starts falling down into the ground and not reaching the desired altitude (3.0 meters). Then it comes back again again like a spring, which is not damped. I tried already many different value for the PD controller but it seems that it doesn't affect the dynamic of the model. Another strange thing is that it goes always to a negative point under the ground, even if I change the desired height (negative or positive). What s wrong in my code? Could you me please point to some documents or code which is understandable and well documented to start? Thanks EDIT: Many thanks to your suggestion. Hi was really surprise to know, that my code had lots of potential problems and was not very efficient. So I elaborate the code as your explanation and I implementers a RK4 for the integration. After I ve read those articles: here and here I got an idea about RK and its vantage to use it in simulations and graphics PC. As an example I rewrote again the whole code: /* Calculate the acceleration about all 6 axis */ pos_.dVel.x = ( ( thrust_.total / masse_ ) * ( -sin( body_position_.angle.theta ) * cos( body_position_.angle.phi ) * cos( body_position_.angle.psi ) - sin( body_position_.angle.phi ) * sin( body_position_.angle.psi ) ) ); pos_.dVel.y = ( ( thrust_.total / masse_ ) * ( sin( body_position_.angle.phi ) * cos( body_position_.angle.psi ) - cos( body_position_.angle.phi ) * sin( body_position_.angle.theta ) * sin( body_position_.angle.psi ) ) ); pos_.dVel.z = ( ( thrust_.total / masse_ ) * ( -cos( body_position_.angle.phi ) * cos( body_position_.angle.theta ) ) - 9.81 ); pos_.dOmega.phi = ( torque_.phi / Jxx_ ); pos_.dOmega.theta = ( torque_.theta / Jyy_ ); pos_.dOmega.psi = ( torque_.psi / Jzz_ ); /* Get position and velocities from accelerations */ body_position_ = RKIntegrate( body_position_, real_duration ); which is much more clear and easy to debug. 
Here some useful functions I implemented: QuadrotorController::State QuadrotorController::evaluate( const State &initial, const Derivative &d, double dt ) { State output; output.position.x = initial.position.x + d.dPos.x * dt; output.position.y = initial.position.y + d.dPos.y * dt; output.position.z = initial.position.z + d.dPos.z * dt; output.velocity.x = initial.velocity.x + d.dVel.x * dt; output.velocity.y = initial.velocity.y + d.dVel.y * dt; output.velocity.z = initial.velocity.z + d.dVel.z * dt; output.angle.phi = initial.angle.phi + d.dAngle.phi * dt; output.angle.theta = initial.angle.theta + d.dAngle.theta * dt; output.angle.psi = initial.angle.psi + d.dAngle.psi * dt; output.omega.phi = initial.omega.phi + d.dOmega.phi * dt; output.omega.theta = initial.omega.theta + d.dOmega.theta * dt; output.omega.psi = initial.omega.psi + d.dOmega.psi * dt; return output; }; QuadrotorController::Derivative QuadrotorController::sampleDerivative( double dt, const State &sampleState ) { Derivative output; output.dPos = sampleState.velocity; output.dVel.x = pos_.dVel.x; output.dVel.y = pos_.dVel.y; output.dVel.z = pos_.dVel.z; output.dAngle = sampleState.omega; output.dOmega.phi = pos_.dOmega.phi; output.dOmega.theta = pos_.dOmega.theta; output.dOmega.psi = pos_.dOmega.psi; return output; }; QuadrotorController::State QuadrotorController::RKIntegrate( const State &state, double dt ) { const double C1 = 0.0f; const double C2 = 0.5f, A21 = 0.5f; const double C3 = 0.5f, A31 = 0.0f, A32 = 0.5f; const double C4 = 1.0f, A41 = 0.0f, A42 = 0.0f, A43 = 1.0f; const double B1 = 1.0f/6.0f, B2 = 1.0f/3.0f, B3 = 1.0f/3.0f, B4 = 1.0f/6.0f; Derivative k1 = sampleDerivative( 0.0f, state ); Derivative k2 = sampleDerivative( C2 * dt, evaluate( state, k1 * A21, dt ) ); Derivative k3 = sampleDerivative( C3 * dt, evaluate( state, k1 * A31 + k2 * A32, dt ) ); Derivative k4 = sampleDerivative( C4 * dt, evaluate( state, k1 * A41 + k2 * A42 + k3 * A43, dt ) ); const Derivative derivativeSum = k1 * B1 + k2 * B2 + k3 * B3 + k4 * B4; return evaluate( state, derivativeSum, dt ); } Now I m really lost because...because the simulated qudrotor has the same behavior as before. Nevertheless I ve implemented the same PD algorithm as discussed in the paper, it stabilize on Z (height) but it get really crazy due to unstable behavior. So... I dunno what is wrong in my code and my implementation. And above all I cannot find any source in internet with a good self explaned dynamic model for a quadrotor. Regards
While reading the paper "Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor" by Mahony, Kumar and Corke, I stumbled across the following equations for a non-linear attitude observer, which I would like to implement, but I believe there is something wrong. $\dot{\hat{R}} := \hat{R} \left( \Omega_{IMU} - \hat{b} \right)_\times - \alpha \\ \dot{\hat{b}} := k_b \alpha \\ \alpha := \left( \frac{k_a}{g^2}((\hat{R}^T \vec z) \times a_{IMU}) + \frac{k_m}{|^Am|^2} ((\hat{R}^T {^Am}) \times m_{IMU}) \right)_\times + k_E \mathbb{P}_{so(3)} (\hat{R} R_E^T)$ Where $\hat{R}$ and $\hat{b}$ are estimates of orientation and gyroscope bias, $\Omega_{IMU}, a_{IMU}, m_{IMU}, R_E^T$ are measurements, and $k_X$ are scalar gains, which may be set to 0 for measurements that are not available. Now $\dot{\hat{R}}$ and $\alpha$ need to be matrices $\in \mathbb{R}^{3\times 3}$ due to their definitions, while $\hat{b}$ and thus $\dot{\hat{b}}$ need to be vectors $\in \mathbb{R}^3$. But then what is the correct version of the second equation, $\dot{\hat{b}} := k_b \alpha$?
I'm trying to build a quadcopter from scratch. I have a fair amount of experience with Arduinos, and I'm trying to understand how the necessary systems work, but I can't seem to figure out what PID means. Is it a method of regulating pitch and roll, like a stabilizer? From what I've read, I think it's a system that detects the orientation of the craft and tries to correct it.
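Roughly, yes: PID (proportional-integral-derivative) control is the feedback rule that turns a measured orientation error into a motor correction. A minimal sketch of one axis, with the gains and structure purely illustrative:

```cpp
// One PID axis (e.g. roll): error = desired angle - measured angle.
// kp reacts to the current error, ki removes steady offset over time,
// kd damps the response using the error's rate of change.
struct Pid {
    double kp, ki, kd;
    double integral = 0.0;
    double prev_error = 0.0;

    double update(double error, double dt) {
        integral += error * dt;
        const double derivative = (error - prev_error) / dt;
        prev_error = error;
        return kp * error + ki * integral + kd * derivative;  // motor correction
    }
};
```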
I'd like to slice and dice floor tile into pieces so I can arrange it in geometric patterns. I have CAD designs for the parts. Would any consumer grade CNC machine be capable of doing the job?
I am trying to calculate the likelihood of a laser scan ($Z$) at a given pose ($x$) with a known map ($m$) using the beam-based model $P\left(z_t|x_t,m \right)=\prod_{i=1}^{n}P'\left(z_i|x_t,m \right)$. My scan has 360 rays, i.e. $n=360$. When I calculate $P\left(z_t|x_t,m \right)$ it becomes zero, since it is a product of probabilities that are all $<1$. In ROS amcl they use an ad-hoc formula that works better, something like $P\left(z_t|x_t,m \right)+=\sum_{i=1}^{n}P'\left(z_i|x_t,m \right)*P'\left(z_i|x_t,m \right)*P'\left(z_i|x_t,m \right)$, and they later normalise over the particles to get the weight of each particle. My question is how to keep the probability normalised and non-zero in a single calculation (i.e. imagine the case of a single particle). Thanks.
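A common fix for this kind of underflow is to sum log-probabilities instead of multiplying probabilities, and only exponentiate (after subtracting the maximum across particles) when normalised weights are needed. A minimal sketch of that idea, with names chosen here for illustration:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Log-likelihood of a whole scan: sum of per-ray log-probabilities.
// Unlike the raw product of 360 terms < 1, this never underflows to zero.
double scanLogLikelihood(const std::vector<double>& rayProb)
{
    double logp = 0.0;
    for (double p : rayProb)
        logp += std::log(std::max(p, 1e-300));  // guard against log(0)
    return logp;
}

// To turn per-particle log-likelihoods into normalised weights, subtract the
// maximum log value before exponentiating (the "log-sum-exp" trick), then
// divide each exponentiated value by their sum.
```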
I would like to create an Infinite-horizon, continuous-time LQR with a cost functional defined as $$J = \int_{0}^\infty \left( e^T Q e + u^T R u \right) dt$$ where e is the states' error $x-x_d$, but I have trouble concluding to the appropriate Ricatti equation since $x_d$ is a function of time therefore leading to a term of $\dot x_d$ . Is this problem solvable? Any ideas?
I purchased a Pololu Dual MC33926 Motor Driver Shield for Arduino, and for some reason I cannot read current from the motor controller. On the Serial.println() it just prints weird data (garbage), and when I use ROS (Robot Operating System) I only see -0.0 (minus zero) value for both motors. All I've done is plug the shield on my Arduino UNO R3 model, and run the demo that comes with the sample library -- http://github.com/pololu/dual-mc33926-motor-shield . How can I fix this issue?
Some friends and I are interested in working on a robot. I know little to nothing about robotics, but do have a lot of experience with programming. Before we start, I am hoping that I can find some development kits or libraries that will help aid the goals of the robot. Which are: Robot needs to move from point A to point B. While moving, it needs to detect rocks (approx. 1 foot diameter) on ground. It needs to detect rocks that are big enough to stop it, turn away from them, and proceed. In theory, we will want to detect the kinect's angle via the accelerometers, and use that data to obtain Cartesian coordinates of the ground from the kinect's sensors. Later, we will want a way to assemble a 'map' in the robot's memory so that it can find better paths from A to B. Right now we aren't concerned with the motors on the robot - only the vision element. Ie, I am not really interested in software that interfaces with the motors of the robot, only only something that interfaces with the kinect.