Over the last month, I have seen many robots that don't seem to have any real purpose, which made me ask myself: "Does this have any value?" I saw dancing robots at CES, advanced Lego-based robots, and robots put together for very limited purposes. I saw ten-year-old children playing with robots, and competitions for them. Someone told me that this is just for education and for spreading logical thinking. In other cases, there were arguments like, "this is to inform people that everything is moving forward". I know that people will buy robotic vacuum cleaners because they think they'll save some time, but these robotic cleaners are not very reliable and I see them mainly as marketing. Do these things (children's education, dancing robots, and other instances of selling a pig in a poke) have any value in terms of robotics, and are they really advancing the field as manufacturers say?
Microhard Systems currently sells several types of 900MHz modems, which are mostly used in robotics and SCADA applications. One of their product lines, the 910 series (MHX910, n910, Spectra 910), is obsolete and no longer sold. However, some older equipment is built with OEM versions of the 910 series soldered in place. Microhard currently sells a 920 series (MHX920, n920, Spectra 920) that shares many of the specs with the 910 series, but cannot establish a radio link with a 910 series modem due to differences in encryption and hopping pattern. Therefore, if you want to make new equipment communicate with equipment using the 910 series, your options are:

1. De-solder the old 910 modem and replace it with the footprint-compatible 920 modem, or
2. Reconfigure a 920 series modem to communicate with the 910 series modem.

Option 1 is undesirable, since I don't have access to the firmware on the older equipment in question. Does anyone know how to accomplish option 2?
I also asked this question on ROS Answers, but it's not getting much interest there. Currently the EtherCAT package in ROS uses the slaves' Product IDs to identify the devices, and load the correct drivers. This works great when all of the devices are manufactured by a single vendor, but are there any plans to prevent Product ID collisions when multiple vendors make ROS compatible EtherCAT devices? We manufacture our own EtherCAT devices, and are just using some large values for Product ID, just hoping that these don't collide with anyone else's. Ideally, ROS would concatenate the vendor and product IDs into a single 64-bit value, and use that to identify the correct driver.
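To make the last idea concrete, here is a minimal sketch of what a combined 64-bit key could look like (the helper name, the registry, and the ID values are all made up for illustration; none of this comes from the ROS EtherCAT package):

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical helper: pack the EtherCAT vendor ID into the high 32 bits and
// the product ID into the low 32 bits, giving a collision-free 64-bit key.
constexpr uint64_t makeDeviceKey(uint32_t vendorId, uint32_t productId) {
    return (static_cast<uint64_t>(vendorId) << 32) | productId;
}

// Illustrative driver registry keyed on the combined value (IDs are placeholders).
const std::map<uint64_t, std::string> driverRegistry = {
    { makeDeviceKey(0x00000ABC, 0x00000001), "vendor_abc_io_driver" },
    { makeDeviceKey(0x00000DEF, 0x00000001), "vendor_def_io_driver" },  // same product ID, no collision
};
```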
I'm building a quadrupedal robot that will learn how to walk. From the responses I got from asking whether it's possible to run a NN on a microcontroller, I realised I needed to think of a clever system that wouldn't take 1000 years to be effective and would still be able to demonstrate onboard learning. I've designed a system but I'm not sure how effective it will be. Firstly, I hardcode 5-20 positions for the legs. I set up a (simple) neural network where each node is a different set of positions for the legs, which I will write. The robot moves from one node to another, and the weight of the connection is determined by how far forward the robot moves. Eventually there will be strong connections between the best nodes/positions and the robot will have found a pattern of moves that are most successful in walking. How effective would this be in learning to walk? Note: instead of positions I could write short gaits and the process would work out which sets work best when combined.
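A minimal sketch of the update scheme described above, with everything (pose count, exploration rate, learning rate) assumed for illustration rather than taken from a tested design:

```cpp
#include <cstdlib>

const int NUM_POSES = 8;                  // the hard-coded leg position sets
float weight[NUM_POSES][NUM_POSES] = {};  // strength of transition pose i -> pose j

// Pick the next pose: mostly follow the strongest outgoing edge, sometimes explore.
int pickNext(int current) {
    if ((std::rand() % 100) < 20)                 // 20% random exploration
        return std::rand() % NUM_POSES;
    int best = 0;
    for (int j = 1; j < NUM_POSES; ++j)
        if (weight[current][j] > weight[current][best]) best = j;
    return best;
}

// After executing the transition, reinforce it by the measured forward progress;
// negative progress (moving backwards) weakens the edge.
void reinforce(int from, int to, float forwardProgressCm) {
    const float learningRate = 0.1f;
    weight[from][to] += learningRate * forwardProgressCm;
}
```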
We are currently designing a mobile robot + mounted arm with multiple controlled degrees of freedom and sensors. I am considering an architecture in two parts:

A set of realtime controllers (either Raspberry Pis running an RTOS such as Xenomai, or bare-metal microcontrollers) to control the arm motors and encoders. Let us call these machines RTx, with x=1,2,3… depending on the number of microcontrollers. This control loop will run at 200Hz.

A powerful vanilla Linux machine running ROS to compute SLAM, mocap, and execute high-level logic (decide the robot's task and compute the motors' desired position and speed). This control loop will run at 30Hz.

I know my framework needs to be scalable to account for more motors, more sensors, more PCs (e.g. for external mocap). My main problem is to decide how to have the different RTx communicate with PC1. I have looked at papers related to robot architectures (e.g. HRP2); most often they describe the high-level control architecture, but I have yet to find information on how to have the low level communicate with the high level in a scalable way. Did I miss something? In order to connect the fast RT machines ensuring the motor control with PC1, I have considered TCP/IP, CAN and UART:

- TCP/IP: not deterministic, but easy to put in place. Is non-determinism a real issue (as it will only be used at a slow speed, 30Hz, anyway)?
- CAN: slow, very reliable, targeted at cars (I have seen there are some examples using CAN with robots, but it looked exotic).
- UART: if I only had one RT machine for motor control I would have considered UART, but I guess this port does not scale well with many RTx.

Is TCP/IP really a no-go because of its non-deterministic characteristics? It is so easy to use… At the moment no solution really seems obvious to me. And as I can find no serious robot example using a specific reliable and scalable solution, I do not feel confident making a choice. Does anyone have a clear view on this point or literature to point to? Are there typical or mainstream communication solutions used on robots?
I'm running out of digital ports, and have no sensors that fit the definition 'analog'. Would it be possible to run a touch sensor, a quadrature encoder, or an ultrasonic sensor on an analog port? I'm thinking not, but I didn't run across anything that said otherwise.
I have a small motorized vehicle with gears as wheels running up and down a track made of gear racks. How can this robot know when it has run half the track? And what's the best method to keep it from running off its track at the end, and then return it to the start? The robot is carrying water, not exactly the same amount each time, so it will not weigh the same. Therefore it might not take the same number of steps in the stepper motor each time. Here are some ideas that might work, though I am a beginner and don't know what's the best solution (a sketch of the switch-plus-step-counting idea follows below):

- GPS tracking (overkill on such a small scale?)
- Some kind of distance measurer
- A knob/switch it will hit at the middle of the track, telling the program to delay for a given time
- Tracking the number of steps the motor has performed (won't be as accurate?)
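Here is a minimal Arduino-style sketch of the last two ideas combined (the pin numbers, the driver's STEP input, and the switch wiring are assumptions for illustration): a switch at a known point on the track re-zeroes a step counter, and the counter is the dead-reckoned position in between.

```cpp
// Assumed wiring: a normally-open limit switch between pin 2 and GND
// (internal pull-up used), and a stepper driver whose STEP input is on pin 8.
const int SWITCH_PIN = 2;
const int STEP_PIN = 8;
long stepsFromReference = 0;   // dead-reckoned position, in steps

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  pinMode(STEP_PIN, OUTPUT);
}

void stepOnce() {
  digitalWrite(STEP_PIN, HIGH);
  delayMicroseconds(500);
  digitalWrite(STEP_PIN, LOW);
  delayMicroseconds(500);
  stepsFromReference++;
  if (digitalRead(SWITCH_PIN) == LOW) {
    stepsFromReference = 0;    // switch pressed: known reference point reached
  }
}

void loop() {
  stepOnce();                  // real code would decide direction and when to stop
}
```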
I'm working on a rather low-budget project, and need some way to control four or more motors using one Arduino. I've looked at motor shields a little, but I already have a shield on top of it. That shield does have female headers on top, though, so a motor shield might stack on it. Any suggestions?
We have an electric wheel chair, and are looking to add a rotary encoder to each wheel. We don't want to hack the motor itself, so want to add the encoder without harming the motor-to-wheel connection. We will be using an arduino to read the signal. Does anyone have any experience adding rotary encoders to already assembled wheel assemblies?
I'm working with a lifesize (~130cm) humanoid robot (Hubo+) and looking for a way to easily program new motions and gestures into him. Obviously, I could write my own tool, but I am looking for a solution that can leverage existing tools or standards for robot motion. My first thought was trying to use animation software like Blender or Maya, and writing a script to extract the joint angles at keyframes. However, few robotics researchers are probably proficient with Maya. (I know I'm not!) Is there already some kind of 3D posing tool for robotics that is a standard? The only things I have seen so far that come close are the Pose Utility in RoboPlus and Choregraphe for the Nao, but both programs seem limited to particular robots and don't appear to be extendable to Hubo. So my questions are: Are there standard file formats for robot motion? Not 2D wheeled robot motion; arm and leg motion! Something equivalent to the .bvh file format used in motion capture. Do you know of any WYSIWYG-ish tool for creating robot motion using keyframes and inverse kinematics?
I'm currently designing a robotic arm with 6-DOF, and my goal is to be able to give setpoints for 3D position, velocity and orientation ($x,y,z,\dot{x},\dot{y},\dot{z},\theta,\alpha,\gamma$). So far in college I have only had feedback control for SISO systems, so, taking the learning curve of multivariable control into consideration, should I approach this problem by modelling the system as one MIMO system or as multiple SISO loops? If possible, please mention the possible disadvantages and advantages of each strategy.
I have been trying to write code to connect a HiTechnic prototype board to my Lego brick. Although I am using MSRDS Studio, that isn't the issue; reading and writing to the serial port that the device is connected to works fine. Where I am lacking is that I don't understand the data that is being sent and received. It goes out and comes back in the form of a byte array. For example: [128] [15] [0] [2] [16] [2] [8]. Is this byte array converted from hex? What is this response telling me? Obviously I am a total newbie at this; I can program, but I don't really understand electronics, and I am trying to make the connection between what I have read about how an I2C controller works and what is happening when I send and receive data over a serial port.
I want to injection-mold several thousand of a part that fits in a 6" x 6" x 2" bed. I would like to be able to use only tooling that I can make myself, so I can rapidly iterate on the tooling as production problems are discovered. I know that typical injection-mold "hard tooling" is created using electrical discharge machining, which requires first CNCing a carbon positive and then using that as an electrode to spark-burn out a negative mold from hard steel. However, I do not have the equipment for EDM. Instead, I would prefer to directly CNC the negative mold. I know that a soft enough steel to be CNCed will not last very long as an injection mold, but like I said, my run size is tiny, and I am ok with making a new mold every 500 units or so if necessary. I am open to buying an endmill that is diamond-tipped, to work with harder steel, but then the limitation will probably be how much torque the CNC can produce on the endmill. What are some recommendations or links to helpful resources? In particular, what is a good CNC with enough torque, and what blend of steel should I use? Thanks!
I am working on building my own quadcopter from scratch. I noticed that many solutions available online use Arduino, but I am not a fan of Arduino. So my questions are: what microcontrollers should be used, what are the crucial features of those microcontrollers, etc.? I would like to build it from total scratch. I was thinking about PIC microcontrollers. Also, what should be used for the ESCs, since I would build those from scratch too? Summing it all up: 4 ESCs, a gyro, an accelerometer, a GPS and a transceiver, which is about 8 slaves and one master microcontroller.
For avoiding obstacles during 2D robot navigation, what is the best position/angle at which to place the sonar sensors? How many should there be? I would like to know if there is some theory, or examples, for this placement problem. I realize that it depends on the way that the robot moves and its geometry, but I am searching for general answers.
I am reading research papers about robotics and many of them follow the same pattern:

1. some construction is established,
2. kinematic formulas are read from the mechanical structure,
3. the state space is analysed (e.g. how far the robot can reach, what the maximum speed can be, what is left underspecified and how to handle such mathematically incorrect systems, and so on).

Is there some tool or software product that can receive (as input) the mechanical structure and then output the kinematic formulas? Preferably, it would provide some kind of plots, analysis, and suggestions for optimal design parameters (e.g. lengths and angles of the structure, optimum parameters of motors, and so on). Does this exist?
Using a depth-sensing camera like the Kinect, I would like to retrieve the position of a predetermined object (e.g. a cup, fork, etc.), so that I would ultimately be able to grab the object. What would be a way to achieve this?
I am learning about I2C on the Arduino. I was looking at a sample program to scan for I2C devices and saw this comment:

// This sketch tests the standard 7-bit addresses
// from 0 to 127. Devices with higher bit address
// might not be seen properly.

with the following code:

for(address = 0; address <= 127; address++ )
{
  // The i2c_scanner uses the return value of
  // the Write.endTransmisstion to see if
  // a device did acknowledge to the address.
  Wire.beginTransmission(address);
  error = Wire.endTransmission();

  if (error == 0)
  {
    Serial.print("I2C device found at address 0x");
    if (address<16)
      Serial.print("0");
    Serial.print(address,HEX);
    Serial.println(" !");
  }
}

As far as I understand it, a bit is just 1. So how do 7 bits loop from 0 to 127?
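For what it's worth, here is the arithmetic behind that comment (a general fact about binary numbers, not anything specific to this sketch): "7-bit" refers to the width of the whole address field, not a single bit. Seven bits can take $2^7 = 128$ distinct values, i.e. every address from $0000000_2 = 0$ up to $1111111_2 = 127$, which is exactly the range the for loop walks through. The eighth bit of the address byte actually sent on the bus is the read/write flag, which the Wire library adds for you.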
I'm studying various optimal control methods (and implementing them in Matlab), and as a test case I chose (for now) a simple pendulum (fixed to the ground), which I want to control to the upper position. I managed to control it using a "simple" feedback method (swing-up based on energy control + LQR stabilization for the upper position), and the state trajectory is shown in the figure (I forgot the axis description: x is theta, y is theta dot). Now I want to try a "full" optimal control method, starting with an iterative LQR method (which I found implemented here: http://homes.cs.washington.edu/~todorov/software/ilqg_det.m). The method requires one dynamics function and one cost function (x = [theta; theta_dot], u is the motor torque (one motor only)):

function [xdot, xdot_x, xdot_u] = ilqr_fnDyn(x, u)
    xdot = [x(2); -g/l * sin(x(1)) - d/(m*l^2) * x(2) + 1/(m*l^2) * u];
    if nargout > 1
        xdot_x = [0, 1; -g/l*cos(x(1)), -d/(m*l^2)];
        xdot_u = [0; 1/(m*l^2)];
    end
end

function [l, l_x, l_xx, l_u, l_uu, l_ux] = ilqr_fnCost(x, u, t)
    % trying J = x_f' Qf x_f + int(dt*[ u^2 ])
    Qf = 10000000 * eye(2);
    R = 1;
    wt = 1;
    x_diff = [wrapToPi(x(1) - reference(1)); x(2) - reference(2)];
    if isnan(t)
        l = x_diff' * Qf * x_diff;
    else
        l = u'*R*u;
    end
    if nargout > 1
        l_x = zeros(2,1);
        l_xx = zeros(2,2);
        l_u = 2*R*u;
        l_uu = 2*R;
        l_ux = zeros(1,2);
        if isnan(t)
            l_x = Qf * x_diff;
            l_xx = Qf;
        end
    end
end

Some info on the pendulum: the origin of my system is where the pendulum is fixed to the ground. The angle theta is zero in the stable position (and pi in the unstable/goal position). m is the bob mass, l is the rod length, d is a damping factor (for simplicity I put m=1, l=1, d=0.3). My cost is simple: penalize the control effort plus the final error. This is how I call the ilqr function:

tspan = [0 10];
dt = 0.01;
steps = floor(tspan(2)/dt);
x0 = [pi/4; 0];
umin = -3;
umax = 3;
[x_, u_, L, J_opt] = ilqg_det(@ilqr_fnDyn, @ilqr_fnCost, dt, steps, x0, 0, umin, umax);

This is the output:

Time From 0 to 10. Initial conditions: (0.785398,0.000000). Goal: (-3.141593,0.000000)
Length: 1.000000, mass: 1.000000, damping :0.300000
Using Iterative LQR control
Iterations = 5; Cost = 88230673.8003

The nominal trajectory (that is, the optimal trajectory the control finds) is:

The control is "off"; it doesn't even try to reach the goal. What am I doing wrong? (The algorithm from Todorov seems to work, at least with his examples.)
I have the following chassis, along with an Arduino and a motor shield. I'm in the process of developing a tracking mechanism for use with differential drive. Normally, a photo reflector can be placed adjacent to the wheel, reflecting when each spoke passes through, therefore allowing code to be written that accurately measures each wheel's position. The problem I have is that you cannot see the wheels from inside the chassis, only small holes for the driveshaft. Placing sensors on the outside would look ridiculous and a wall crash would cause havoc. Would I be able to use a photo reflector on the gears (as shown) if I accurately placed it to count each spoke on the gear itself? I'm a bit hesitant, though, because even a small bump could misalign the sensor, again causing havoc. So does anyone have an idea on how to track the wheel movements?
Is there a way of initializing a Kalman filter using a population of particles that belong to the same "cluster"? How can you determine a good estimate for the mean value (compute a weighted average?) and the covariance matrix? Each particle is represented as $[ x , y , θ , weight]$.
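For reference, a standard way to collapse a weighted particle set into a Gaussian (generic formulas, not tied to any particular filter implementation) is the weighted mean and weighted covariance, with the angle handled separately as a circular mean:

$$\bar{x} = \frac{\sum_i w_i\, x_i}{\sum_i w_i}, \qquad \Sigma = \frac{\sum_i w_i\,(x_i - \bar{x})(x_i - \bar{x})^\top}{\sum_i w_i}, \qquad \bar{\theta} = \operatorname{atan2}\!\left(\sum_i w_i \sin\theta_i,\; \sum_i w_i \cos\theta_i\right)$$

where $x_i$ stacks the particle states and the angular residuals used in $\Sigma$ should be wrapped to $(-\pi, \pi]$.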
I need to simulate a stream of vehicles, such as on an assembly line. Automatons perform operations on the vehicles when they come within reach. The automatons do not keep track of the individual vehicles; they simply collect data. We need to choose a method of matching the data gathered by each automaton with the vehicle it belongs to. For example, we could guess the identity of a vehicle using its timing when arriving in the operation range (sensors) of an automaton. I have to check the possible problems we will face, so I would like a little (hopefully simple) video/simulation tool that I could play with:

- vehicles could be symbolized as moving black squares,
- automatons/sensors could be static points or circles,
- it should be possible to change the time interval between two vehicles, and their speed, and add some random delays.

What kind of software should I search for, or where should I look? Should I consider developing it from scratch?
I am creating a CNC machine on a budget, using old motors out of printers/scanners/etc. I am limited to about 650mA for the whole system, so my fear is that when the cutting bit touches the material, the stepper might be moving too quickly and won't have enough torque. This would mean it will become one rotation behind, which could really mess up a CNC project. Detecting when the motor "misses" a step would allow me to readjust the motor speed until it reaches a balance between working quickly and having adequate torque. How can I achieve this?
I am very new to robotic design and I need to determine what parts I will need to assemble an arm joint. The joint will contain one timing belt pulley, which a remote motor will be turning, a forearm that the pulley will be rotating, and an upper-arm piece that will actually be two parallel arms that grip the pulley on top and bottom in order to brace the pulley against off-axis torque from the timing belt. I am kind of at a loss as to how to mount all of these together. I would like to mount the forearm directly to the pulley and then have the two parallel arms (comprising the upper arm) sandwich the top of the pulley and the lower part of the forearm. This would be attached using a turntable. Any ideas on how a shaft would mount to these? Or how to attach the pulley to the arms themselves? Any kind of direction or links would be greatly appreciated; I don't even know the names of the parts I would be looking for. In this ASCII art model the dashed lines (-) are the arms. The arm on the left is the forearm and the two arms on the right are the two parallel parts of the upper arm. The stars are the belt and the bars (||) are the pulleys at the elbow |E| and shoulder |S|.

-----------------
|E|***********|S|
-----------------
-----------------

I am thinking of mounting the pulley to the left arm directly (a bushing?) and then maybe using turntables to mount the pulley to the top arm and another turntable to mount the left arm to the bottom arm. Here is a picture of the design to help you visualize:
I'm working on a basic airplane flight stabilization system, as the precursor to a full autopilot system. I'm using a salvaged Wii Motion Plus and Nunchuk to create a 6DOF IMU. The first goal is to keep the wings level, then mix in the user's commands. Am I correct in saying that this would not require a gyro, just a 3 (2?) axis accelerometer, to detect pitch and roll, and then adjust the ailerons and elevator to compensate? Secondly, if we extend my design goal from "keeping the wings level" to "flying in a straight line" (obviously two different things, given wind and turbulence), does the gyro become necessary, insofar as this can be accomplished without GPS guidance? I've tried integrating over the gyro values to get roll, pitch and yaw from that; however (as evidenced by this question), I'm at a level in my knowledge on the topic where I'd prefer simpler mathematics in my code. Thanks for any help!
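For context, a common low-math way to blend the two sensors if a gyro is kept in the loop is a complementary filter. This is a generic single-axis sketch with assumed inputs (a calibrated gyro rate in deg/s and an accelerometer-derived angle in degrees); it is not code for the Wii hardware specifically:

```cpp
// Generic complementary filter for one axis (e.g. roll).
float rollEstimate = 0.0f;

void updateRoll(float gyroRateDegPerSec, float accelAngleDeg, float dtSeconds) {
    const float alpha = 0.98f;  // trust the gyro short-term, the accelerometer long-term
    rollEstimate = alpha * (rollEstimate + gyroRateDegPerSec * dtSeconds)  // integrate gyro
                 + (1.0f - alpha) * accelAngleDeg;                         // pull toward accel angle
}
```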
I have a quadcopter robot that has a Kinect on it, and I want to do 3D mapping with it. Is the Kinect reliable on a moving robot (i.e., can it give me stable images and maps with this movement)? Is there an SDK for producing 3D maps from Kinect data? Will SLAM algorithms work? Is the Arduino board on the copter (ATmega 2560) powerful enough to handle this?
Before I start asking for help, let me say that I am a newbie in the electronics field. All I want to know is the principle behind the left-right wheel movement on a remote-controlled car. I am not talking about changing the spin direction of the DC motor (the up/down buttons on the remote); I am asking about the left and right movement of the wheels. I know that the spin direction depends on the polarity of the DC motor, so changing the polarity changes the spin, but what is the principle behind changing the left and right positions of the front wheels?
I have 3D printers at my school, but unfortunately they are not super high quality. I want to try 3D printing a model I made in Google SketchUp, but I would like for it to be fairly accurate. What measures can I take to prevent error in the model? I understand that I need to export the file as an STL; is there anything I can do to the model beforehand to ensure accuracy? What can I do to calibrate a 3D printer for best results?
I recently started a project to measure the force on a bathroom grab bar. The force/load is applied by the person who needs the grab bar for assistance. What I want to measure is the load against the wall, and to do real-time monitoring of the load for further analysis to improve the design. I am not quite sure what kind of sensor would be suitable to do the measurement. I am looking at different load cells but cannot figure out how to mount commercial load cells to do the measurement. What I am trying right now is using a strain gauge to measure the strain near the end of the bar (wall side) and roughly calculate the load. I think (I might be wrong) there may exist some kind of force/load sensor that can clamp onto the bar to do the measurement. Any sensor types/models or suggestions are welcome. I also posted this question to the EE forum: https://electronics.stackexchange.com/questions/57197/how-to-measure-force-that-applied-on-grab-bar
I have been experimenting with different fitness functions for my Webots robot simulation (in short: I'm using a genetic algorithm to evolve interesting behaviour). The idea I have now is to reward/punish Aibo based on its speed of movement. The movement is performed by setting new joint positions, and currently it results in jerky random movements. I have been looking at the nodes available in Webots, but apart from the GPS node (which is not available in Aibo) I couldn't find anything relevant. What I want to achieve is to measure the distance from the previous location to the current location after each movement. How can I do this?
Is there a Matlab toolbox available to use Sick lasers in Windows? I found one toolbox for Matlab in GNU/Linux. Is there another way to use Sick laser via Matlab in Windows?
Does anyone know if this is possible? It's just an I2C device, right? I mean, you would have to cut the cable and make it so you could plug into the pins on the Arduino, but you should just be able to use the Wire library and say something like Wire.beginTransmission(0x10);. The NXT hardware developer kit tells you which pins are which: http://mindstorms.lego.com/en-us/support/files/default.aspx Thanks.

EDIT: Turns out this is very possible. The main problem was that HiTechnic says the address is 0x10 and it is actually 0x08, but here is a short sketch that reads and prints some info about the device, i.e. the manufacturer and version.

#include <Wire.h>
#define ADDRESS 0x08

void setup()
{
  Wire.begin();
  Serial.begin(9600);
}

void loop()
{
  readCharData(0, 7);
  Serial.println();
  readCharData(8, 8);
  Serial.println();
  readCharData(16, 8);
  Serial.println();
  Serial.println("-----------------------------");
  delay(1000);
}

void readCharData(int startAddress, int bytesToRead)
{
  Wire.beginTransmission(ADDRESS);
  Wire.write(startAddress);
  Wire.endTransmission();
  Wire.requestFrom(ADDRESS, bytesToRead);
  while(Wire.available())
  {
    char c = Wire.read();
    Serial.print(c);
  }
}
I'm a programmer by trade, and an amateur aerospace nut, with some degree-level training in both fields. I'm working on a UAV project, and while the good people over at DIY Drones have been very helpful, this question is a little less drone-related and a little more general robotics/electronics. Essentially, I'm looking at options for ground stations, and my current rough plan is something like this: I have a PC joystick with a broken sensor in the base, which I plan to dismantle, separate the handle from the base, insert an Arduino Nano into the (mostly hollow) handle and hook it up to all the buttons and the hat thumbstick. Then, where the hole is that used to accept the stem to the base, I fit a bracket that runs horizontally to hold a smallish touchscreen (think Razer's Project Fiona tablet with only one stick), behind which is mounted a Raspberry Pi. The Nano talks to the RPi over USB as a HID input. The RPi will be running some custom software to display telemetry and other data sent down from the UAV. My main question is whether that Nano would have enough power to run the XBee that provides the telemetry link without causing lag in the control inputs. It's worth mentioning that the UAV will be doing fly-by-wire moderation, so slight stutters won't result in wobbly flying, but serious interruptions will still be problematic - and annoying. It's also worth mentioning that this will only be used as a simplified "guiding hand" control; there will ALWAYS be a regular remote control available (not least because of EU flight regulations), so this is just for when I don't want to use that. If that Nano won't do, what are my options? My first thought is to get a second Nano and have that drive the XBee (the RPi has two USB ports after all), but there may well be a better way.
Is it possible to achieve arbitrary precision in the calibration of the extrinsic parameters of a camera, or is there a minimum error which cannot be compensated for (probably dictated by the camera's resolution)?
I am in the process of building a stereo vision system to be used on a UGV. The system is for a robot that will be used in a competition wherein the robot is teleoperated to find relatively small colored rocks in a large outdoor field. I understand how to calibrate such a system and process the data for a stereo vision system. I do not, however, know how to select cameras for such a system. What are the best practices for picking cameras for a stereo vision system?
I'm trying to use an Arduino to control a motor that needs more voltage and current than an Arduino pin can source. I am trying to hook it up to a transistor. The battery pack is not supposed to be 4.8V; it's 6V, 4 D batteries. Here is the setup: Here is the Arduino code I'm trying to run:

int motorpin = 2;

void setup()
{
  pinMode(motorpin, OUTPUT);
}

void loop()
{
  digitalWrite(motorpin, HIGH);
  delay(500);
  digitalWrite(motorpin, LOW);
  delay(500);
}

The code gives me no errors, but no motor movement happens. What would make this work? Thanks.
Given workspace constraints, load, and the task to be done, how do I select the best configuration of my robot? How do I select between a Cartesian or SCARA robot, for instance? How do I select a manipulator? How do I determine how many axes I need? Most of what I have seen is based on experience, rules of thumb and readily available standard devices, but I would like a more formal answer to quantify my choice. Is there some technique (a genetic algorithm?) which takes a description of the task, load, workspace, budget, speed, etc., and rates and selects an optimal robot configuration, or maybe even multiple configurations? How can I mathematically ensure that I ultimately chose the optimal solution? The only thing I found online was a thesis from 1999 titled Automated Synthesis and Optimization of Robot Configurations: An Evolutionary Approach (pdf, CMU-RI-TR-99-43), which presents a synthesis and optimization tool called Darwin2K, written by Chris Leger at CMU. I am surprised no one has updated it or created a similar tool. To provide some context for my question, we are developing a robot to assist the elderly with domestic tasks. In this instance, the robot identifies and picks food items from a previously stored and known location. The hand opens the package and places it in the oven. The pick and place locations are fixed and nearby, so the robot is stationary.
A 2d laser scanner is mounted on a rotary axis. I wish to determine the transformation matrix from the center of the axis to the center of the scanner, using only the input from the scanner and the angle of rotation. The 2d scanner itself is assumed to be calibrated, it will accurately measure the position of any object inside the plane of the laser, in regards to the scanner origin. The rotary axis is calibrated as well, it will accurately measure the angle of its own movement. The scanner is aligned and mounted close to the center of rotation, but the exact offset is unknown, and may drift over time. Assume it is impractical to measure the position and orientation of the scanner directly. I am looking for a way to determine the exact values for the 6 degrees of offset the scanner may have in relation to the axis, determined solely on the 2d information from the scanner and the rotation angle from the axis. I am mainly interested in the 4 offsets depicted here, since the other two do not matter in regard to generating a consistent 3d point cloud from the input data. By scanning a known calibration object, it should be possible to determine these offsets. What are the mathematical formulas for this? What sort of calibration information is required at a minimum? Is it for example possible to determine all parameters simply by scanning a flat surface, knowing nothing about the surface except that it is flat? (The transformation matrix from rotation axis to world is unknown as well, but that one is trivial to determine once the transformation from axis to camera is known.) Example On the left the camera is placed exactly on the rotational axis. The camera scans a planar object with reference points A B and C. Based on the laser distance measurements and the angle of the axis, this planar object can be reconstructed. On the right, the camera has an unknown offset to the axis. It scans the same object. If the point cloud is constructed without knowing this offset, the planar surface maps to a curved surface. Can I calculate the offset based on the surface curvature? If I know the real-world distances and angles between A, B and C, how can I calculate the camera offsets from that? What would be the minimum number of reference points I need for all 4 offsets?
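To make the unknowns concrete, one way to write the forward model (my own notation, not taken from any particular paper) is: a point measured in the scanner plane is first mapped through the fixed, unknown scanner-to-axis transform, then rotated by the measured axis angle $\phi$.

$$p_{\text{axis}}(\phi) \;=\; R_{\text{axis}}(\phi)\,\bigl(R_{\text{off}}\, p_{\text{scan}} + t_{\text{off}}\bigr), \qquad p_{\text{scan}} = (x, y, 0)^\top$$

Here $p_{\text{scan}}$ is a 2D scanner measurement embedded in its own plane, $R_{\text{axis}}(\phi)$ is the known rotation about the axis, and $(R_{\text{off}}, t_{\text{off}})$ holds the offsets to be calibrated. Scanning a known object then means choosing $(R_{\text{off}}, t_{\text{off}})$ so that, for every angle $\phi$, the reconstructed points satisfy the constraint the object imposes (for a flat surface, a single plane equation); the curvature visible in the uncorrected reconstruction is exactly the residual of that constraint.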
I've been looking into a Makeblock robotics kit but have found no information on the web that comes from end users, and one of the main advertised features is not clear to me: the slot threads shown below are straight, while the screw thread that will mate with them is angled. Is there just very little contact between the screw thread and the rail thread compared with regular screw-hole threads? Or would the screw want to rest somewhat angled, so that the head would not be flush with the rim of the rail? Or would the screw deform the aluminum rail if over-torqued? This is a close-up picture of the slot with screws:
I'm looking for a GPS tracking device without a screen or apps. I just need it to get the current position of a bus and send it to a server through the TCP/IP protocol. This process must be continuous so I can have real-time tracking. The bus already has a wireless access point. What device would be useful? Do I need another piece of hardware to send the coordinates to the server? I have no experience, but... could something like an Arduino connected to the GPS send the data?
I am designing a new platform for outdoor robotics and I need to calculate the power and/or torque that is needed to move the platform. I have calculated that I need about 720 W of total power to move it (360 W per motor), but I don't know how to calculate the torque that I need. Is it really just about having the required power and ignoring the torque, or is there a way to calculate it easily? (A rough worked example relating the numbers follows after the parameter list.) The already known parameters of the platform are:

- Weight of the whole platform: 75 kg
- Number of wheels: 4
- Number of powered wheels: 4
- Diameter of wheels: 30 cm
- Number of motors: 2
- Wanted speed: 180 RPM (3 m/s)
- Wanted acceleration: > 0.2 m/s^2
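As a sanity check on how torque and power relate for the numbers above (ignoring rolling resistance, friction and slopes, which in practice add a good deal on outdoor terrain, and assuming the motors turn the wheels directly at 180 RPM), the wheel radius is $r = 0.15\ \text{m}$ and $180\ \text{RPM} \approx 18.85\ \text{rad/s}$:

$$F = ma = 75 \times 0.2 = 15\ \text{N}, \qquad \tau_{\text{accel}} = F\,r = 15 \times 0.15 = 2.25\ \text{N·m total (about 1.1 N·m per motor)}$$

$$\tau_{\text{per motor at full speed}} = \frac{P}{\omega} = \frac{360\ \text{W}}{18.85\ \text{rad/s}} \approx 19\ \text{N·m}$$

So the stated power budget corresponds to far more torque at top speed than the bare acceleration requirement; the gap is what covers resistive losses, grades and headroom.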
Does anybody know if Kinect data can be stored directly onto a USB drive? I have a Kinect for Windows that I cannot use on Linux (ROS). However, what I plan is to mount the Kinect on my robot, store the captured frames on a USB drive, then unmount the USB drive, transfer it to Linux and process the frames in ROS. Is this possible? Any suggestions?
I'm trying to connect a camera module to my Arduino Mega, connect my Mega to my Android phone (through Bluetooth or something else), and send the live view of the camera to the mobile phone. I saw a video online that showed this for still images -- an image captured by the camera module on the Arduino was sent to Android and the output image was viewed after a couple of seconds (the time to send the image by BT). Is this doable with live video instead of images? If yes, please guide me; if no, please suggest some workarounds.
I need to make an omni-wheeled robot platform (4 wheels), which should go at a minimum speed of 15 cm/s. I have an idea for the design, but since this is my first time doing something like this I have made a lot of assumptions. I decided to choose the TGY-S4505B servos as my motor system. I intend to attach these servos to FXA308B wheels. Finally, I intend to power my servos with one Turnigy LSD 6.0V 2300mAh Ni-MH Flat Receiver Pack (not sure if LiPo is a better choice). I need to be able to run the servos continuously for roughly 8 minutes. You can ignore the microcontroller and other stuff; relatively speaking they will consume much less power. The robot will have four wheels (thus, four servos). The basic specifications of each servo are:

- Type: Analog
- Gear train: Plastic
- Bearings: Dual
- Motor type: Carbon brushed
- Weight: 40g (1.41oz)
- Lead: 30cm
- Torque: 3.9kg.cm @ 4.8V / 4.8kg.cm @ 6V
- Speed: 0.13sec/60° @ 4.8V / 0.10sec/60° @ 6V

So based on my battery pack, I will be running the servos at 6V. That gives me a speed of 60 degrees per 0.10 seconds. I plan on modifying these servos for continuous rotation and connecting them directly to the wheels. Since the wheel has a diameter of ~5 cm, it has a circumference of ~15 cm. Based on these specs, it seems to me that my robot can move at roughly 15 cm per 0.6 seconds, or 25 cm/s (quite fast actually). I don't intend to run it constantly at that speed, so over the 8-minute run, assume my average speed to be 20 cm/s. Are these assumptions reasonable, and are the calculations correct? I would really appreciate any insight, advice, recommendations, and criticisms you may have.
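Double-checking the arithmetic above (assuming the servo really does reach its no-load speed after the continuous-rotation modification):

$$\omega = \frac{60^\circ}{0.10\ \text{s}} = 600^\circ/\text{s} \approx 1.67\ \text{rev/s}, \qquad v = 1.67\ \text{rev/s} \times \pi \times 5\ \text{cm} \approx 26\ \text{cm/s}$$

which matches the ~25 cm/s estimate (the small difference comes from rounding the circumference to 15 cm instead of $\pi \times 5 \approx 15.7$ cm). Under load the actual speed will be lower than this no-load figure.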
Do any of the TI ARM SOCs, e.g. OMAP or Da Vinci, have a version with stacked RAM? (e.g. DDR2 or mDDR) For miniature robots like micro drones, it would be really nice to not need to spend board area on an external RAM chip. Thanks!
For my robot, I am using two continuous-rotation servos to spin a threaded rod. I am trying to make this project as cheap as possible. Here are the servos that I can find: Servo #1: this is a very cheap option and it has half of the torque I need. Servo #2: this has all of the torque my project requires, but it is much more expensive than two of servo #1. Can I hook up two of servo #1, one to each end of the rod, and have them move synchronized? I can spare a few extra pins on the microprocessor that I am using; that isn't an issue. I know hooking two together will increase torque, but I don't want to end up with only 75% of the torque I need in this situation. Also, I don't care if I only have 98% of my torque "goal" with the extra weight (which probably won't happen), but I don't want to, like I said earlier, end up with 70, 80, or 90% of my "target goal" of torque if possible. Any help appreciated. Thanks in advance.
I'm doing robotics research as an undergraduate, and I understand the conceptual math for the most part; however, when it comes to actually implementing code to calculate the forward kinematics for my robot, I am stuck. I'm just not getting the way the book or websites I've found explain it. I would like to calculate the X-Y-Z angles given the link parameters (Denavit-Hartenberg parameters), such as the following: $$\begin{array}{ccccc} \bf{i} & \bf{\alpha_{i-1}} & \bf{a_{i-1}} & \bf{d_i} & \bf{\theta_i}\\ \\ 1 & 0 & 0 & 0 & \theta_1\\ 2 & -90^{\circ} & 0 & 0 & \theta_2\\ 3 & 0 & a_2 & d_3 & \theta_3\\ 4 & -90^{\circ} & a_3 & d_4 & \theta_4\\ 5 & 90^{\circ} & 0 & 0 & \theta_5\\ 6 & -90^{\circ} & 0 & 0 & \theta_6\\ \end{array}$$ I don't understand how to turn this table of values into the proper transformation matrices needed to get $^0T_N$, the Cartesian position and rotation of the last link. From there, I'm hoping I can figure out the X-Y-Z angle(s) from reading my book, but any help would be appreciated.
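Since the table uses $\alpha_{i-1}$ and $a_{i-1}$, it looks like the modified (Craig-style) DH convention; in that convention the textbook link transform built from one row of the table, and the chain product, are (worth double-checking against whichever convention your book actually uses):

$${}^{i-1}_{i}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\ \sin\theta_i\cos\alpha_{i-1} & \cos\theta_i\cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i\sin\alpha_{i-1} \\ \sin\theta_i\sin\alpha_{i-1} & \cos\theta_i\sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i\cos\alpha_{i-1} \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad {}^0T_N = {}^0_1T\;{}^1_2T\cdots{}^{N-1}_{N}T$$

Substituting each row of the table into this matrix and multiplying the six matrices in order gives the position (last column) and rotation (upper-left 3×3 block) of the last link, from which the X-Y-Z angles can then be extracted.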
I'm new to robot making and just got my first Arduino to play around with. I want to make a robot that will wander on a table, and I think it will last longer if I can make it avoid falling off the table. What would be the best way to make it detect the edge of a table so I can make it stop and turn around? It has to be something reliable and preferably cheap. It would also be better if I don't need to add extra stuff to the table, so I can use it on any surface (my first idea was to draw path lines on the table and make a line-follower robot, but I don't like this idea very much).
A robotic joint is connected to two actuators, e.g. air muscles. One flexes the joint, while the other extends it. This arrangement is called 'antagonistic'. But what if I had an electric motor instead of the air muscles? In that case it can only pull on one tendon at a time, and it's not antagonistic. What is the arrangement called in this case? Untagonistic?
I'm trying to create a map of the obstacles in a fairly coarse 2D grid space, using exploration. I detect obstacles by attempting to move from one space to an adjacent space, and if that fails then there's an obstacle in the destination space (there is no concept of a rangefinding sensor in this problem). example grid http://www.eriding.net/resources/general/prim_frmwrks/images/asses/asses_y3_5d_3.gif (for example) The process is complete when all the reachable squares have been visited. In other words, some spaces might be completely unreachable even if they don't have obstacles because they're surrounded. This is expected. In the simplest case, I could use a DFS algorithm, but I'm worried that this will take an excessively long time to complete — the robot will spend more time backtracking than exploring new territory. I expect this to be especially problematic when attempting to reach the unreachable squares, because the robot will exhaust every option. In the more sophisticated method, the proper thing to do seems to be Boustrophedon cell decomposition. However, I can't seem to find a good description of the Boustrophedon cell decomposition algorithm (that is, a complete description in simple terms). There are resources like this one, or this more general one on vertical cell decomposition but they don't offer much insight into the high-level algorithms nor the low-level data structures involved. How can I visit (map) this grid efficiently? If it exists, I would like an algorithm that performs better than $O(n^2)$ with respect to the total number of grid squares (i.e. better than $O(n^4)$ for an $n*n$ grid).
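As an illustration of a simple alternative to plain DFS (this is my own sketch of a frontier-style loop, not the Boustrophedon decomposition itself): repeatedly search through the cells already known to be free for the nearest not-yet-visited cell, walk there, and probe it. Here `tryMove(x, y)` is a placeholder for the robot primitive that attempts to enter an adjacent cell and reports failure when an obstacle blocks it.

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

enum Cell { UNKNOWN, FREE, OBSTACLE };

// Explore all cells reachable from (rx, ry). tryMove(x, y) attempts to enter an
// adjacent cell and returns false if it is blocked by an obstacle.
void explore(std::vector<std::vector<Cell>>& grid, int rx, int ry,
             const std::function<bool(int, int)>& tryMove) {
    const int H = grid.size(), W = grid[0].size();
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    grid[ry][rx] = FREE;
    while (true) {
        // BFS from the robot over FREE cells to the nearest UNKNOWN cell.
        std::vector<std::vector<std::pair<int, int>>> parent(
            H, std::vector<std::pair<int, int>>(W, {-1, -1}));
        std::queue<std::pair<int, int>> q;
        q.push({rx, ry});
        parent[ry][rx] = {rx, ry};
        int tx = -1, ty = -1;
        while (!q.empty() && tx < 0) {
            auto [cx, cy] = q.front();
            q.pop();
            for (int k = 0; k < 4 && tx < 0; ++k) {
                int nx = cx + dx[k], ny = cy + dy[k];
                if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                if (parent[ny][nx].first != -1 || grid[ny][nx] == OBSTACLE) continue;
                parent[ny][nx] = {cx, cy};
                if (grid[ny][nx] == UNKNOWN) { tx = nx; ty = ny; }
                else q.push({nx, ny});
            }
        }
        if (tx < 0) return;  // no reachable unvisited cell remains
        // Reconstruct the path from the target back to the robot, then walk it.
        std::vector<std::pair<int, int>> path;
        for (std::pair<int, int> c = {tx, ty}; c != std::make_pair(rx, ry);
             c = parent[c.second][c.first])
            path.push_back(c);
        for (auto it = path.rbegin(); it != path.rend(); ++it) {
            if (tryMove(it->first, it->second)) {
                rx = it->first;
                ry = it->second;
                grid[ry][rx] = FREE;
            } else {
                grid[it->second][it->first] = OBSTACLE;  // blocked: record and re-plan
                break;
            }
        }
    }
}
```

The total work is roughly one BFS per cell visited, so it stays well below the exhaustive-backtracking worst case of a naive DFS, though it is not a coverage-optimal plan.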
I am having some issues with the Parrot AR.Drone 2.0 and hope someone else may be running into the same thing. While hovering, the drone is (seemingly) randomly losing altitude, then recovering. It is doing so while not being commanded any velocity inputs, and it should hold altitude. We are using the drivers from ardrone_autonomy (dev_unstable branch) on GitHub. We are able to watch the PWM outputs being sent to the motors, and they are dropping from the hover command to a small value before exponentially returning to the hover value when this drop occurs. The issue could be in the communication between the IMU and the onboard controller, or in our software control implementation. Has anyone seen a similar problem, or have suggestions to test/troubleshoot what is happening?
I'm using Teensy hardware specifically. I have a Teensy 2.0 and a Teensy 3.0, and from the documentation it seems like there are two 16-bit timers available, and each should be able to control 12 servos. However, I've attached a logic analyzer and have confirmed that only the first 12 servos attached ever function. Is there anything special I have to do with my sketch in order to convince the Servo library to allocate the second timer for servos attached beyond number 12? This works:

#define NUM_SERVOS 12

Servo servos[NUM_SERVOS];

// teensy 2.0 pins
int pin_assignments[NUM_SERVOS] = {0, 1, 2, 3, 4, 5, 20, 19, 18, 17, 16, 15};

void setup() {
  for(int i = 0; i < NUM_SERVOS; i++) {
    servos[i].attach(pin_assignments[i]);
  }
}

But this, below, only ever shows activity on the first twelve pins attached:

#define NUM_SERVOS 24

Servo servos[NUM_SERVOS];

// teensy 3.0 pins
int pin_assignments[NUM_SERVOS] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
                                   12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23};

void setup() {
  for(int i = 0; i < NUM_SERVOS; i++) {
    servos[i].attach(pin_assignments[i]);
  }
}
I have come across a number of methods for developing wall-climbing robots:

- Suction
- Chemical adhesion
- Gecko-like hair adhesion
- Electroadhesion

Which method would be the best for heavy robots (5kg+)? Are there any other methods that I have missed?
I asked this question on answers.ros.org and gazebo.ros.org but still haven't got any answer. I'm posting my question here with the hope someone can help me. In our robot, the Kinect can be mounted on the side of the arm, as shown in the screenshot below. When running the simulation in Fuerte, I found this weird behaviour. As you can observe in the image, the point cloud does not match the robot model (we see a partial image of the hand/arm at the bottom left of the screenshot, which should be on the robot model). As soon as I rotate the Kinect about its X axis (so that the Kinect is horizontal, as you can see on the second screenshot), the point cloud and robot model are aligned properly. The Kinect xacro and dae are the ones from the turtlebot. I'm simply attaching them with a rotation:

<joint name="base_camera_joint" type="fixed">
  <origin xyz="0.01216 0.1713 0.433" rpy="-${M_PI/2} ${M_PI/4} -${M_PI/12}" />
  <!-- This -pi/2 in origin rpy is the offending parameter -->
  <parent link="shadowarm_trunk"/>
  <child link="camera_link" />
</joint>

The code can be seen on GitHub. Any help is greatly appreciated!
In the prediction step of EKF localization, linearization must be performed and (as mentioned in Probabilistic Robotics [THRUN, BURGARD, FOX], page 206) the Jacobian matrix when using the velocity motion model, defined as $\begin{bmatrix} x \\ y \\ \theta \end{bmatrix}' = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \frac{\hat{v}_t}{\hat{\omega}_t}(-\text{sin}\theta + \text{sin}(\theta + \hat{\omega}_t{\Delta}t)) \\ \frac{\hat{v}_t}{\hat{\omega}_t}(\text{cos}\theta - \text{cos}(\theta + \hat{\omega}_t{\Delta}t)) \\ \hat{\omega}_t{\Delta}t \end{bmatrix}$ is calculated as $G_{T}= \begin{bmatrix} 1 & 0 & \frac{υ_{t}}{ω_{t}}(-\text{cos} {μ_{t-1,θ}} + \text{cos}(μ_{t-1,θ}+ω_{t}Δ{t})) \\ 0 & 1 & \frac{υ_{t}}{ω_{t}}(-\text{sin} {μ_{t-1,θ}} + \text{sin}(μ_{t-1,θ}+ω_{t}Δ{t})) \\ 0 & 0 & 1 \end{bmatrix}$. Does the same apply when using the odometry motion model (described in the same book, page 133), where robot motion is approximated by a rotation $\hat{\delta}_{rot1}$, a translation $\hat{\delta}$ and a second rotation $\hat{\delta}_{rot2}$? The corresponding equations are: $\begin{bmatrix} x \\ y \\ \theta \end{bmatrix}' = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \hat{\delta}\text{cos}(\theta + \hat{\delta}_{rot1}) \\ \hat{\delta}\text{sin}(\theta + \hat{\delta}_{rot1}) \\ \hat{\delta}_{rot1} + \hat{\delta}_{rot2} \end{bmatrix}$. In which case the Jacobian is $G_{T}= \begin{bmatrix} 1 & 0 & -\hat{\delta} \text{sin}(θ + \hat{\delta}_{rot1}) \\ 0 & 1 & \hat{\delta} \text{cos}(θ + \hat{\delta}_{rot1}) \\ 0 & 0 & 1 \end{bmatrix}$ (note the positive sign on the $\hat{\delta}\,\text{cos}$ term, since it is the derivative of the $y$ row with respect to $\theta$). Is it good practice to use the odometry motion model instead of the velocity model for mobile robot localization?
I'm having some technical problems... I'm trying to use Firmata for Arduino, but over nRF24 rather than over the Serial interface. I have tested the nRF24 communication and it's fine. I have also tested Firmata over Serial and it works. The base device is a simple "serial relay": when it has data available on Serial, it reads it and sends it over the nRF24 network; if there is data available from the network, it reads it and sends it through Serial. The node device is a bit more complex. It has a custom Standard Firmata where I have just added write and read overrides. The read override is handled in the loop method in this way:

while(Firmata.available())
  Firmata.processInput();

// Handle network data and send it to Firmata process method
while(network.available()) {
  RF24NetworkHeader header;
  uint8_t data;
  network.read(header, &data, sizeof(uint8_t));
  Serial.print(data, DEC);
  Serial.print(" ");
  Firmata.processInputOverride(data);
  BlinkOnBoard(50);
}
currentMillis = millis();

Firmata's processInputOverride is a slightly changed version of processInput: processInput reads data directly from FirmataSerial, while in this method we pass in the data received from the network. This was tested and it should work fine. The write method is overridden in a different way. In Firmata.cpp I have added a method pointer that can be set to a custom method and used to send data using that custom method. I have then added a call to the custom method after each of the FirmataSerial.write() calls:

Firmata.h
...
size_t (*firmataSerialWriteOverride)(uint8_t);
...

void FirmataClass::printVersion(void) {
  FirmataSerial.write(REPORT_VERSION);
  FirmataSerial.write(FIRMATA_MAJOR_VERSION);
  FirmataSerial.write(FIRMATA_MINOR_VERSION);
  Firmata.firmataSerialWriteOverride(REPORT_VERSION);
  Firmata.firmataSerialWriteOverride(FIRMATA_MAJOR_VERSION);
  Firmata.firmataSerialWriteOverride(FIRMATA_MINOR_VERSION);
}

I have then set the overridden write method to a custom method that just writes the byte to the network instead of Serial:

size_t ssignal(uint8_t data) {
  RF24NetworkHeader header(BaseDevice);
  network.write(header, &data, sizeof(uint8_t));
}

void setup() {
  ...
  Firmata.firmataSerialWriteOverride = ssignal;
  ...
}

All stages seem to pass correctly (I guess), and then I don't get any response from the node when I request pin states:

< f0 6a 7f 7f 7f ... 7f 0 1 2 3 4 5 6 7 8 9 a b c d e f f7 // analog mapping
> f0 6d 0 f7 // sysex request pin 0 state and value
> f0 6d 1 f7
> f0 6d 2 f7
...
> f0 6d 45 f7
// And I wait for response...

There is no response. Any ideas why that would happen? The node receives all messages correctly and the code for handling pin states exists.
I'm in the process of making a robot which requires 12 cylindrical magnets (3 x 10 mm) for its construction. They are 30 mm from the center of the robot, where I plan to have the IMU. I was thinking about using the MPU-6050. Do the magnets affect the values? If yes, is there a solution for it, like maybe having a shield or something around the IMU?
I know that temperature influences the characteristics of semiconductors and other materials, but we know how and can take that into account. Furthermore, lower temperatures make electronics more efficient, sometimes even superconducting. I remember reading somewhere that the engineers building Curiosity even considered low-temperature electronics for the motors driving the wheels, but still decided against it in the end. Why is it, apparently, so hard to build components with operating temperatures matching those on Mars, Europa, or in space? Edit: None of the answers address my question thus far. I know that all parts, both electronic and mechanical, and greases and so on, have relatively narrow working temperatures. My question is, why don't we build special cold metals and cold greases and cold chips that have their narrow operating temperature band at -100 °C or whatever? Valid answers could be: it's too expensive, insufficient science has been done to determine materials appropriate for such cold, or such cold materials cannot be manufactured in the sweltering heat of planet Earth.
I have a small quadruped with three-degree-of-freedom legs which I have been working on: 3DOF Mini Quadruped. My original code for it was a simple servo controller on the Arduino, and Scala code which would send servo commands over the wire. I did all the inverse kinematics and gait logic in Scala, and got it to walk: 3dof quadruped first gait. My gait logic in Scala was somewhat naive; it depended on the legs being in the right position at the beginning (one side extended fore and aft, the other side in toward each other). The logic was simply: translate all four feet backward by 1mm along y, and whenever a coxa angle became excessively rearward, stop and perform a little routine where that foot is lifted 10mm in z, then translated forward 60mm along y, and set back down. Naive, but effective. Now, I have rewritten my IK code in Arduino C, and I'm trying to decide how to move forward with the gait dynamics. I've had a hard time finding good, easy-to-understand resources about gaits. I do have some knowledge about the difference between statically stable gaits (like creep gaits), where the body is a stable tripod at all times, and dynamically unstable gaits (walking, trotting), where two legs are off the ground at a time and the body is essentially falling forward into the advancing leg. I had some thoughts about state machines, and about trying to calculate whether the body center falls within a triangle made by the remaining feet to decide which foot was safe to lift, but I'm not sure if these are ideas worth exploring. I know this is kind of an overly general question, but I'm interested to see how other people have attacked this problem, and about all I've been able to find are research papers.
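As a concrete sketch of the support-triangle check mentioned above (a standard 2D point-in-triangle test with made-up types, not code from this project): project the body center and the three stance feet onto the ground plane and look at the signs of the cross products.

```cpp
struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); the sign says which side of
// the segment a->b the point c lies on.
static float cross(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if p (e.g. the projected body center) lies inside the triangle formed
// by the three feet that would remain on the ground if the fourth is lifted.
bool insideSupportTriangle(const Vec2& p, const Vec2& f1, const Vec2& f2, const Vec2& f3) {
    const float d1 = cross(f1, f2, p);
    const float d2 = cross(f2, f3, p);
    const float d3 = cross(f3, f1, p);
    const bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    const bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);  // all on one side (or on an edge): inside
}
```

In practice one would also want some margin (require the center to be a few millimetres inside the triangle) before committing to lift a foot.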
I'm building a quadcopter and have discovered that most ESCs have a built-in BEC, but I was wondering if it wouldn't be better to use only one. What if I delivered power to my four ESCs from a single BEC? Would that work? I think this would be easier to configure (you have to set it up only once for the four ESCs) and it would prevent each ESC from having its own behavior. Am I doing it wrong? Here is an image of what I'm talking about: Edit: trying to find the original image and upload it. Given the answer by Ian McMahon, it appears that this schema is not the right thing to do, since I had misunderstood the role of BECs. So would the right schema look like this? Edit: trying to find the original image and upload it. I'm still not sure if I'm getting it. Do I need 4 ESCs with integrated BECs, and do I connect all three cables to the flight controller?
Here is the background. I am trying to write a service for the HiTechnic prototype board. Using Appendix 2 from the Bluetooth developer kit on Lego's site, I am able to understand what is going on with this service I am trying to build; however, the response I get is always 221 = 0xDD = "Communication Bus Error" or 32 = 0x20 = "Pending communication transaction in progress". I figured out that the HiTechnic prototype board is using I2C address 0x08, so I modified the brick code to use that address instead of the standard 0x02. It goes out and configures the device, I get a response, and then it does an LSWrite which seems OK; then I get an error when it does the LSGetStatus. I know this thing works - I can bit-bang it all day long with an Arduino, but I only did that to test it out - see this link. I am not sure what else to try. Here is how I am setting it up in the connect-to-brick handler:

pxbrick.Registration registration = new pxbrick.Registration(
    new LegoNxtConnection(LegoNxtPort.Sensor1),
    LegoDeviceType.DigitalSensor,
    Contract.DeviceModel,
    Contract.Identifier,
    ServiceInfo.Service,
    _state.Name);
Debugger.Break();
// Reserve the port
LogInfo("ConnectToBrickHandler");
yield return Arbiter.Choice(_legoBrickPort.ReserveDevicePort(registration),
    delegate(pxbrick.AttachResponse reserveResponse)
    {
        Debugger.Break();
        if (reserveResponse.DeviceModel == registration.DeviceModel)
        {
            registration.Connection = reserveResponse.Connection;
        }
    },
    delegate(Fault f)
    {
        Debugger.Break();
        fault = f;
        LogError("#### Failed to reserve port");
        LogError(fault);
        registration.Connection.Port = LegoNxtPort.NotConnected;
    });

I have also tried setting AnyPort as well, so that it will hit the TestPortForI2CSensorHandler, which just does what I explained before: it seems to set the mode fine and then gets an error when it tries to read the device information. Here is the data. This first part is the SetInputMode message - both the message and the response. You can see it is totally fine.

Send command data:
0 5 0 11 0

Receive command data (_commState.SerialPort.Read(receiveData, 0, packetSize);):
2 5 0

Then it does an LSWrite and everything still seems fine. You can see I have modified the NxtComm code to use 0x08 instead of 0x02, which it would normally use; then the last byte is also 0x08, which is the starting address of the manufacturer. It's asking for 16 bytes, which would be the manufacturer and sensor type. Like I said, I know that works - I can print that info out using the Arduino.

128 15 0 2 16 8 // i2c address 8 - I assume this is what address I want to read from?

Got response: True Error code Success
[02/25/2013 02:20:31] -- SendCommandHandler (NxtComm)
[02/25/2013 02:20:31] --- RequestResponseHandler (NxtComm)
[02/25/2013 02:20:31] --- CommSendImmediateHandler (NxtComm)
[02/25/2013 02:20:31] Send command data.

Then it tries to get the status:
0 14 0

Here is the response:
2 14 32 0

It's either 32 or 221. It's making me nuts... If anyone has anything that might help me out, I would so much appreciate it. At this point I am running out of ideas. I can see what is going on; I can understand the entire transaction, but I can't seem to figure out why it just errors out like that. Also, just for grins, I tried 0x10, which is what they tell you on the HiTechnic website. That gets a response of 2,14,0,0 from the NXT brick; that would indicate there is no data, but as I pointed out, I can get data using the Arduino. How could I have two different I2C device addresses?
I was able to find a small ESC for about \$12 off of eBay. If you were designing a robot, would you see that and think: "\$12 for an ESC that connects to a simple pulse-width interface - sign me up!"? Or would you think: "\$12 just to control a motor? I could throw together an H-bridge for \$0.50 and be done with it." My robot in particular actually has two motors, and therefore \$24 to control the two of them. But the interface is really easy (plus it has the added advantage of switching between R/C and computer control with a simple change of connectors). Which way would you go?
I have written code to send data from a microcontroller to a PC through the serial port using an interrupt, but it echoes the garbage value back exactly 3 times.

ISR(USART_RX_vect)
{
    unsigned char index = UDR;
    UDR = index;
}

void uartInit()
{
    UCSRA = 0x00;
    UCSRB = 0x18;
    UCSRC = 0x86;
    UBRRH = 0x00;
    UBRRL = 0x67;
    UCSRB |= (1 << RXCIE); // Enable the USART Receive Complete interrupt (USART_RXC)
    _delay_ms(10);
}

int main(void)
{
    uartInit();
    lcd_init();
    sei();
    while(1)
    {
    }
}

EDIT: Function used to set the baud rate:

#define FOSC 16000000 // Clock Speed
#define BAUD 9600
#define MYUBRR FOSC/16/BAUD-1

void USART_Init(unsigned int baud)
{
    /* Set baud rate */
    UBRRH = (unsigned char)(baud >> 8);
    UBRRL = (unsigned char)baud;
    /* Enable receiver and transmitter */
    UCSRB = (1 << RXEN) | (1 << TXEN);
    /* Set frame format: 8 data bits, 1 stop bit */
    UCSRC = (0 << USBS) | (3 << UCSZ0);
}
I am doing local localisation with sonar and a particle filter (i.e. all particles are initialised at the robot's pose). I have a grid map of the environment. When I run the algorithm in the real environment (where doors may be open or closed), the particles are unable to keep following the robot. I don't use random particles, since I know the initial position of the robot exactly, and adding random particles would shift the estimated pose (I take the median of the particles as the robot pose). Are there any ideas or methods for improving local localisation? Do I need random particles even when doing local localisation? And how do I improve localisation when the real environment differs from the map in many places, without adding random particles?
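For reference, this is roughly the update loop I have now (a simplified Python sketch of my actual code; motion_model, sonar_likelihood and grid_map stand in for my real motion model, sonar measurement model and map):
import numpy as np

def particle_filter_step(particles, control, sonar_scan, grid_map,
                         motion_model, sonar_likelihood):
    # Predict: propagate every particle through the motion model
    particles = np.array([motion_model(p, control) for p in particles])

    # Weight: compare expected sonar readings (ray-cast in the grid map)
    # against the actual scan
    weights = np.array([sonar_likelihood(sonar_scan, p, grid_map) for p in particles])
    weights /= weights.sum()

    # Resample in proportion to the weights
    n = len(particles)
    particles = particles[np.random.choice(n, size=n, p=weights)]

    # Pose estimate: per-component median over (x, y, theta)
    pose = np.median(particles, axis=0)
    return particles, pose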
I am looking to augment a GPS/INS solution with a conventional land vehicle (car-like) model. That is, front-wheel steered, rear wheels passive on an axle. I don't have access to odometry or wheel angle sensors. I am aware of the Bicycle Model (e.g. Chapter 4 of Corke), but I am not sure how to apply the heading/velocity constraint on the filter. So my questions are: Are there any other dynamic models that are applicable to the land vehicle situation, especially if they have the potential to provide better accuracy? Are there any standard techniques to applying such a model/constraint to this type of filter, bearing in mind I don't have access to odometry or wheel angle? Are there any seminal papers on the topic that I should be reading?
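For reference, the kinematic bicycle model I have in mind (roughly as presented in Corke, with $(x, y, \theta)$ the vehicle pose, $v$ the forward speed, $\gamma$ the steering angle and $L$ the wheelbase) is

$$\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \frac{v}{L}\tan\gamma .$$

Since I have no odometry or wheel-angle measurements, my question is essentially how to bring the structure of this model (in particular the fact that the velocity is constrained to lie along the heading) into the GPS/INS filter.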
My team is building a robot to navigate autonomously in an outdoor environment. We recently got a new integrated IMU/GPS sensor which apparently does some extended Kalman filtering on-chip. It gives pitch, roll, and yaw, north, east, and down velocities, and latitude and longitude. However, we also have some encoders attached to our wheels, which provide linear and angular velocities. Before we got this new IMU/GPS sensor, we made our own EKF to estimate our state using the encoders and some other low-cost sensors. We want to use this new sensor's on-chip filter, but also incorporate our encoders into the mix. Is there any problem with chaining the filters? What I mean is, we'd use the output of the IMU/GPS sensor's on-chip EKF as an update to our own EKF, just as we use the data read from the encoders as an update to our EKF. It seems reasonable to me, but I was wondering what is usually supposed to be done in this case.
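To make the "chaining" concrete, here is a minimal sketch of the measurement update we have in mind (Python/numpy; the measurement function h, its Jacobian H and the noise covariance R are placeholders, and z would be the pose/velocity output read from the unit's on-chip EKF):
import numpy as np

def ekf_update(x, P, z, h, H, R):
    # x, P : prior state estimate and covariance from our own EKF
    # z    : "measurement" = the fused output of the IMU/GPS unit's on-chip EKF
    y = z - h(x)                              # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
Part of what we're unsure about is whether it is legitimate to treat z here, which is itself the output of a filter, as if it were an ordinary noisy measurement.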
As a holiday project we are building a surveillance robot that can transmit live images using a webcam and can also lift small objects. It uses a CC2500 module for communicating with the robot. The interface is designed in Visual Basic 6 and it allows us to set the COM port of the computer to which the transceiver is connected; the transceiver is attached via a USB-to-RS232 adapter (USB side connected to the computer). We tried the settings shown below and we get an error that the configuration is unsuccessful; we have tried the same settings on 4 different computers so far and it did not work. Circuit diagram for the robot: it is designed around an Atmel 89S52. Please tell us what settings to try to make it work.
We are making a junior soccer robot and we just got our brilliant motors from Maxon. Setting the PWM timer to low frequencies (around 39 kHz or 156 kHz), the robot behaves as expected, but this produces some problems: it puts a heavy load on the batteries (around 1.5 A for 3 motors, which is far too high), the high current causes our motor drivers (L6203) to heat up very quickly (even heat sinks don't help), and the motors make a bad screaming sound, which is not normal. In contrast, when I configure the timer for high frequencies (such as 1250 kHz or 10000 kHz), the current drops to 0.2 A, which is ideal, and the noise quiets down. But this causes another problem: when the 3 motors are set to run at their highest speed (PWM set to 255), they don't run at the same rpm - one of them runs slower than the others, making the robot pull to one side, so our handling functions fail to work correctly. Someone told me that the drivers don't all respond the same way at a given frequency, resulting in different speeds; at low frequencies the difference is very small and I won't notice it, but at higher frequencies the difference becomes bigger and noticeable. So is there any workaround for this problem, or should I continue using low frequencies? PS: I'm using an ATmega16 as the main controller with a 10 MHz external crystal.
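In case it matters: the frequencies I quoted above are the timer input clock selected by the prescaler $N$; if I am reading the ATmega16 datasheet correctly, with 8-bit fast PWM the switching frequency the motor actually sees is

$$f_{PWM} = \frac{f_{CPU}}{N \cdot 256},$$

so 39 kHz, 156 kHz, 1250 kHz and 10000 kHz correspond to prescalers of 256, 64, 8 and 1 on my 10 MHz crystal, i.e. roughly 153 Hz, 610 Hz, 4.9 kHz and 39 kHz at the motor.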
I am following a guide that recommends stepper motors and gives approximate holding and operating torques. It says that if you don't know the operating torque, it is often about half of the holding torque. I am adapting the design to use a servo instead, and I was wondering whether the same rule of thumb applies. My servo has approximately 1.98 kg·cm of torque, so can I estimate the operating torque as ~1 kg·cm? A couple of things: I know operating torque and holding torque are different; this is just an estimate, not an exact science. I also know that with a servo it is harder to know the exact position (75 degrees, etc.) than with a stepper, where you just command steps and assume they happened; I have an external means of finding the position.
I am trying to control the force of a solenoid. My current system has a bank of capacitors connected to a relay. To control the force (how hard it hits the object) I increase or decrease the time the relay is on. This works, but it either hits with too much force or way too much force. I can turn the relay on for 5 ms or more, but if I try to turn it on for 1 ms it does not respond at all (I am using a mechanical relay). I would like finer control over how much of the stored energy I discharge, so I can control how hard or soft the solenoid strikes (say, discharge only 10 percent of the total stored energy so it hits more gently). While searching I found out about solid-state relays, which according to Wikipedia can be switched on and off much faster than a mechanical relay (on the order of microseconds to milliseconds). So my question is: am I on the right track, or is there something better for what I am trying to achieve?
In Probabilistic Robotics by S. Thrun, the first section on the Extended Kalman Filter talks about linearizing the process and observation models using a first-order Taylor expansion. Equation 3.51 states: $g(u_t,x_{t-1}) \approx g(u_t,\mu_{t-1}) + g'(u_t, \mu_{t-1})(x_{t-1} - \mu_{t-1})$ I think $\mu_{t-1}$ is the state estimate from the last time step. My question is: what is $x_{t-1}$? Also, the EKF algorithm that follows (in table 3.3) does not use the factor $(x_{t-1} - \mu_{t-1})$ anywhere, only $g'(u_t, \mu_{t-1})$. So after being confused about $x_{t-1}$, I'm left wondering where it went in the algorithm.
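For comparison, the generic first-order Taylor expansion of a function $f$ about a point $a$ is

$$f(x) \approx f(a) + f'(a)\,(x - a),$$

so my reading is that $\mu_{t-1}$ plays the role of the expansion point $a$ and $x_{t-1}$ is the free variable $x$ at which the linearised model is evaluated - but please correct me if that is wrong.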
I'm looking to use 4 of these 3.2 V LiFePO4 batteries. I intend to wire them as two series pairs connected in parallel, i.e. two 6.4 V packs in parallel. Since my setup will run off this pack, it would also be easiest to recharge the batteries in the same configuration. To do that, I'm looking at charging all the batteries at once using this 6.4 V LiFePO4 smart charger. From a simplistic standpoint the resulting voltage is correct, so this should work. However, I know (from a previous question) that LiFePO4 chargers are a bit more complex than a basic voltage supply and check. Would the setup I've described work correctly? More generally, can a LiFePO4 smart charger charge several batteries of the correct combined voltage at the same time, as long as it doesn't try to charge them at too high a current? Or does a LiFePO4 battery also have a minimum charge-current requirement, such that trying to charge more than one battery at a time will cause problems? Any other issues I didn't mention? Thank you very much!
I saw this art drawing robot on youtube: http://www.youtube.com/watch?v=Wo15zXhFdzo What do I need to learn in order to build something like that? What are some beginner oriented projects that could lead up to building something like this? I'm an experienced programmer but I have very little hardware experience.
I am using the Pololu Micro Serial Servo Controller, connected to an Arduino, to drive four servos for a robot arm. Two of the four servos require 4-6 V, while the other two require 7-10 V, so I am planning on powering all the servos separately from the Pololu. I have the Arduino and the Pololu communicating correctly (flashing green LED), but the servos don't move when plugged into the control pins. All the servos work correctly when plugged into a servo tester. I think this problem could be fixed by connecting the grounds of the servos to the ground of the Pololu, but I would like advice, because I am not sure whether it will work or whether it will end up frying one of the parts (we already fried one Pololu). Would connecting the grounds of the servo batteries to the ground of the Pololu help, or would it damage the parts? (I tried to draw up a wiring diagram, but I couldn't figure out how to show the micro serial servo controller.)
I would like to build a robot that follows a virtual path (not a visible path like a black line on a white surface, etc.). I got enthusiastic after seeing some sci-fi videos showing robots carrying goods and materials in a crowded place, and they really don't follow a physical line: they sense obstacles, depth, and so on. I would like to build a robot like that, one that follows a specific (virtual) path from point A to B. I have tried a couple of things: Using a magnetic Hall-effect sensor on the robot and a current-carrying wire beneath the table. The problem here was that the sensing range of the Hall-effect sensor is so small (< 2 cm) that it is very difficult to judge whether the robot is on or off the line. Even using a series of magnets couldn't solve this issue, as my table is 1 inch thick. So this idea flopped :P Using ultraviolet paint for the line and UV LEDs on the robot as sensors. This gave the robot a rather zig-zag motion, and because of the potential hazards of a UV light source, this idea flopped too :P I finally thought of mounting a camera overhead and using image-processing algorithms to see whether the robot is on the path or diverging from it. Is there any better solution than this? I'm really looking for some creative and simple solutions. :)
Does anyone know if small mechanical actuators exist that can be controlled electrically, sort of like a miniature joystick but in reverse? Instead of picking up mechanical movement and outputting electrical signals, I want it to generate mechanical movement controlled by my electrical input signals. I've searched for "electromechanical actuators" and haven't found what I need. Think of a pencil attached to a surface, able to pivot to point anywhere in the half-dome above it. I'm thinking small, on the order of an inch, and it will not be load-bearing. My goal is to programmatically control the normal of a small flat surface attached to the end of each joystick rod. Accuracy is more important than speed. From across a small room, say 10' by 10', I'd like the surface normal to accurately point at arbitrary objects in the room, say a person walking across it. If I can cheaply buy or build such mechanisms, I would like to place dozens of them across the walls of the room. It's for an electromechanical sound project I'm planning.
I'm building a quadruped and I'm not sure which features I should be looking for in a servo motor, e.g. digital vs. analog, single vs. dual bearings. Some of the ones I'm considering are here
I am working on a project that involves using the Kinect for Xbox 360 with ROS. I followed all the steps in the ROS tutorials to install OpenNI, the PrimeSense drivers, and the other drivers, and when I run the OpenNI samples I see output. But in ROS, I run roscore and then in another terminal roslaunch openni_launch openni.launch, and it loads with the usual calibration warnings and "service already registered" errors. Then in another terminal I open RViz, which gives an error that /.rviz/display_config does not exist. Even though I accept the error and go ahead, I see a black window with no output, even after doing everything in the RViz tutorials. I also tried running "rosrun image_view image_view image:=/camera/rgb/image_color" and it brings up a blank window with no output. How do I resolve this and get ROS to show my Kinect data? I need to run RGBDSLAM and use this Kinect later. I am on Ubuntu 12.04 and ROS Fuerte. When I launch openni.launch it starts as usual except for the errors "Tried to advertise a service that is already advertised in this node". And when I run rostopic it just says it has subscribed to /camera/depth_registered/points and the cursor keeps blinking. Even subscribing to the rectified topics just says subscribed and nothing more happens.
I'm considering experimenting with PIV control instead of PID control. Unlike PID, PIV control has very little explanation on the internet or in the literature; almost the only source of information explaining the method is a technical paper by Parker Motion. What I understand from the control diagram (which is in the Laplace domain) is that the control output boils down to the sum of:
Kpp*(integral of position error)
-Kiv*(integral of measured velocity)
-Kpv*(measured velocity)
Am I correct? Thank you.
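To check that I'm reading the block diagram correctly, here is the discrete-time loop I think it reduces to (a rough Python sketch; the gain names Kpp, Kiv, Kpv, the dict holding the integrator states, and the sample time dt are my own):
def piv_step(pos_ref, pos_meas, vel_meas, state, Kpp, Kiv, Kpv, dt):
    # state holds the two running integrals between calls,
    # e.g. state = {'int_pos_err': 0.0, 'int_vel': 0.0}
    pos_err = pos_ref - pos_meas
    state['int_pos_err'] += pos_err * dt      # integral of position error
    state['int_vel'] += vel_meas * dt         # integral of measured velocity

    # control output as I currently understand the diagram
    return (Kpp * state['int_pos_err']
            - Kiv * state['int_vel']
            - Kpv * vel_meas)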
I'm trying to learn about servo control. I have seen that the most common position-control method for servos is PID, where the control input is the position error. However, I am not sure what the actuated quantity is. I am guessing that it is one of:
Voltage applied to the motor
Current applied to the motor
I am then guessing that the actuated quantity gets turned into one of:
Torque that the motor exerts
Angular velocity that the motor runs at
I haven't been able to get my hands on, and explicitly control, a physical servo, so I cannot confirm that the actuated quantity is any of these, and I know very little about the electronics that control the motor. It might well be that the controlled quantity differs between servo series. My bet is on torque control. However, suppose the servo is holding a weight at a distance (so it is acting against gravity), which means an approximately constant torque load. In this case, if the position error is zero and the servo is at rest, then each of the P, I, and D components is zero, which means the exerted torque is zero. This would cause the weight to sink, which is countered by the resulting position error causing the P and I components to increase. Wouldn't this situation cause the lifted weight to oscillate and settle at a position significantly different from the goal position? That isn't what happens in the videos I have seen of servos lifting weights. Or is it what happens, and friction is smoothing everything out? Please help me understand.
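In case it helps to make the thought experiment concrete, this is the kind of toy simulation I have been using to reason about it (Python; the plant is a crude "torque in, angle out" model with a constant gravity torque, and all the numbers - the gains, the inertia J and the load torque tau_g - are made up):
import numpy as np

# crude servo-with-weight model: the PID output is interpreted as motor torque
J, tau_g, dt = 0.01, 0.5, 0.001          # inertia, constant gravity torque, time step
Kp, Ki, Kd = 5.0, 20.0, 0.1              # made-up PID gains
theta_ref = 1.0                          # goal position [rad]

theta, omega, integral = 0.0, 0.0, 0.0
for _ in range(10000):
    err = theta_ref - theta
    integral += err * dt
    tau = Kp * err + Ki * integral - Kd * omega   # PID output as torque (derivative on measured velocity)
    omega += (tau - tau_g) / J * dt               # motor + load dynamics
    theta += omega * dt

print(theta)   # where the arm actually settles, to compare against theta_ref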
Is there a web mapping tool that allows developers to plot the GPS data of autonomous vehicles/robots? Google Maps forbids it (see 10.2.C), and the Google Earth terms-of-use link jumps to the same page. Bing Maps looks similar (see 3.2(g)). What I want is an internet-based tool that shows satellite imagery and/or a map, and that can overlay my plot using its API. I'm making a generic GPS plotter in ROS that could be used for both slow robots and fast vehicles/cars. Thanks!
I've already built a two-wheeled balancing robot using some continuous rotation servos and an accelerometer/gyroscope. I upgraded the servos to geared DC motors with 8-bit encoders, with the goal of having the robot drive around while balancing. I'm kind of stuck on how to program it to drive around while still balancing. One way would be to have the control input to the motors act sort of like pushing it, so the robot would be momentarily unbalanced in the direction I want it to travel. That seems kind of clumsy to me, though. There must be a better way of doing it? I think I need to combine the dynamic model of the balancer with the differential drive, but this is a bit beyond the control theory that I know.
Update: From Anorton's answer I have a good-looking state vector now. Now about pole placement: the A matrix will have to be 4x4 based on the new state vector, and B will then have to be a 4x2 matrix since I can only control the left/right wheel torques (u is a 2x1 vector). I may need to read more about this, but is there a systematic way to determine the A matrix by pole placement? It seems to me that for this example, and even more so for more complicated examples, determining A by guess and check would be very difficult.
Update #2: After a bit of reading I think I understand it now. I still need the dynamics of the robot to determine the A matrix. Once I have that, I can do the pole placement using MATLAB or Octave.
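For the record, here is the shape of the pole-placement step I now have in mind (a Python sketch with scipy instead of MATLAB/Octave; the A and B below are just dummy "two decoupled double integrator" placeholders so the snippet runs, not the real balancer dynamics, and the pole locations are arbitrary):
import numpy as np
from scipy.signal import place_poles

# Placeholder linearised model x_dot = A x + B u with
# x = [position, velocity, tilt angle, tilt rate], u = [torque_left, torque_right].
# These are NOT the balancer dynamics - just dummies so the example runs.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 0.],
              [0., 1.]])

desired_poles = [-1.0, -2.0, -3.0, -4.0]          # arbitrary stable closed-loop poles
K = place_poles(A, B, desired_poles).gain_matrix  # feedback gain for u = -K x

x = np.array([0.1, 0.0, 0.05, 0.0])               # example state
u = -K @ x                                        # wheel torque commands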
I am writing a method (Java) that will reset the position of the e-puck in Webots. I have been following the tutorial on the Supervisor approach. I have two controllers in my project: SupervisorController extends Supervisor - responsible for the genetic algorithm and for resetting the e-puck's position EpuckController extends Robot - drives the robot The robots communicate via an Emitter and a Receiver, and everything works fine except the position reset. This is what I'm doing in SupervisorController: 412 Node epuck = getFromDef("epuck"); 413 Field fldTranslation = epuck.getField("translation"); And as a result I get this exception: [SupervisorController] Exception in thread "main" java.lang.NullPointerException [SupervisorController] at SupervisorController.initialise(SupervisorController.java:413) [SupervisorController] at SupervisorController.main(SupervisorController.java:497) The epuck variable is null. I tried calling different methods on epuck, and they all resulted in a NullPointerException. The name of the e-puck matches the world file. DEF EPUCK DifferentialWheels { translation 0.134826 -0.000327529 0.107963 rotation 0.0244439 0.999246 -0.0301538 1.95838 children [ (........) ] name "epuck" controller "EpuckController" axleLength 0.052 wheelRadius 0.0205 maxSpeed 6.28 speedUnit 0.00628 } I would appreciate any advice on how to get a handle to the robot, or where to look for issues in the simulation/code.
I want to make a list of the knowledge necessary for sensor fusion. Since it has a wide array of possible applications, it is not clear where to begin studying. Can we please verify and add topics that are in scope, and specify to what extent each is needed?
Digital signal processing course
Probability course
Machine Learning - the Coursera course from Stanford University
Programming a Robotic Car - the Udacity course
Knowledge of MATLAB and Simulink - tutorials on the MathWorks webpage and the offline help
Basic knowledge of integrals, matrix operations, and differential equations
Currently I am reading Thrun's book, Probabilistic Robotics. I find it really helpful for understanding the concepts behind the filters; however, I would like to see some code, e.g. in MATLAB. Is the book "Kalman Filter for Beginners: with MATLAB Examples" worth buying, or would you suggest some other source to learn the code snippets from?
I want to build a robot and I need a bunch of modules to track it, like GSM/GPS, WiFi, and a camera. If I try to buy each module separately, it will cost me approximately 300 dollars each in Pakistan. On the other hand, an Android phone with all of them built in can be purchased for just $250. I was wondering whether it is possible to interface Android phones (like Huawei or Google Nexus) with 8-bit microcontrollers or an Arduino? The only port available on Android phones is USB, and the Arduino supports USB. Is it possible to somehow attach the two?
For example, if a rover has a working temperature range of -70 to +120 degrees Celsius, how does it survive, and then restore itself, if the temperature drops to -150 degrees for several months?
Within robotics programming, orientation is primarily given in terms of x, y, & z coordinates relative to some central origin. However, x, y, z coordinates aren't convenient for rapid human understanding if there are many locations to select from (e.g., {23, 34, 45}, {34, 23, 45}, {34, 32, 45}, {23, 43, 45} is not particularly human-friendly, and is highly prone to human error). Yet the more common English orientation descriptors are frequently either too wordy or too imprecise for rapid selection (e.g., "front-facing camera on robot 1's right front shoulder" is too wordy; but "front"/"forward" is too imprecise - is the camera on the leading edge, or is it pointing forward?). In the naval and aeronautical fields, vehicle locations are generically talked about as fore, aft (or stern), port, and starboard, while direction of movement relative to the vehicle is frequently given in reference to a clock face (e.g., forward of the fore would be "at 12", rear of the aft would be "at 6", while right of starboard and left of port would be "at 3" and "at 9", respectively). This language supports rapid human communication that is more precise than terms such as "front" and "forward". Are there equivalent terms within mobile robotics?
I need help figuring out the following things: I'm developing a hexapod (an RHex-type model) with a tripod gait. However, the leg angles achieved while the robot is actually walking are not perfectly synchronised, and because of this the robot often collapses and falls, even on perfectly flat terrain. My configuration is:
60 rpm DC motors for each leg
H-bridge motor drivers for each DC motor
ATmega8
Should I change the gait type, or is the tripod gait sufficiently stable? Do DC motors provide fine enough control, or do I need servos? Do I need a DC motor with an encoder? What would its benefits be? What could be done to improve the performance of the robot? Added: Would a stepper motor work as well, instead of a servo?
I want to know the best algorithm and technique to implement a self-learning, maze-solving robot on a resource-limited 8-bit microcontroller. I am looking for a well-optimised algorithm and/or technique. The maze can be of any type. Of course, the first time through it has to explore the whole maze and keep track of the obstacles it finds. I think the best technique would be neural networks, but is that possible within the limited resources of an 8-bit controller? Are there any examples online of a similar kind of problem? My wall detection is based on distance units: I count wheel turns, and it is about 95% accurate in whole units. Ultrasonic range finding is used for sensing the walls. The robot can remember its current position as, let's say, 3 feet straight, 2 feet right, etc.
I want to prototype a therapeutic device that will have a lot of tiny mobile-phone-type vibration motors (like this one) in it, and I want to be able to activate them in any configuration I want. I'm going to need analogue control, and support for logic like Perlin noise functions and so on. I'm not really going to need sensor data or any other kind of feedback beyond a few buttons for control; I just need fine control over lots of little motors. Depending on what results I can get out of, say, 25 motors on the initial prototype, I may decide that I'm done, or that it needs more motors. I also don't have an enormous budget. So the question is: is Arduino a good fit for a project like this? Is it feasible to get that many motors working off the same controller? I know some of the Arduino boards have up to 50-something digital outputs, but from what I can tell that may only translate to 25 or so motors, so I'd need a way to extend the board with more outputs if I wanted to try more. Additionally, if Arduino isn't a good fit, what would be better? Could I drive something directly from a serial port on my PC? I've never tried to home-cook a robotics application before, so I'm not really aware of the options.
I currently have a working kinematic chain made of ten links in the D-H convention (with the usual parameters [$A_i, D_i, \alpha_i, \theta_i$]). But my task currently requires reversing some of them: basically, I would have a part of the chain that is read from the end-effector towards the origin, using the same links (and thus the same parameters). Is this possible? How can I do it? Please note that this is not the inversion of the kinematic chain itself. It's more basic: I want to find the DH parameters of the reversed forward kinematic chain. Let me put it simply: I have DH parameters for a 2-link planar chain from joint 0 to joint 1, so I can compute its direct kinematics. But what if I want to compute the direct kinematics from joint 1 to joint 0? Given DH parameters [$A_i, D_i, \alpha_i, \theta_i$], I can build the transform matrix with this formula: $G = \left[\begin{matrix} cos(\theta) & -sin(\theta)*cos(\alpha) & sin(\theta)*sin(\alpha) & cos(\theta)*a \\ sin(\theta) & cos(\theta)*cos(\alpha) & -cos(\theta)*sin(\alpha) & sin(\theta)*a \\ 0 & sin(\alpha) & cos(\alpha) & d \\ 0 & 0 & 0 & 1 \end{matrix} \right]$ This is the transform matrix from the i-th link to the (i+1)-th link. Thus, I can invert it to obtain the transform matrix from the (i+1)-th link to the i-th link, but the problem is that this is not working. I believe the reason is that the inverted matrix no longer fits the DH convention as it is. Any help?
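For completeness, the inversion I mention is just the standard inverse of a homogeneous transform: writing $G = \left[\begin{matrix} R & p \\ 0 & 1 \end{matrix}\right]$ with rotation block $R$ and translation $p$, I compute

$$G^{-1} = \left[\begin{matrix} R^T & -R^T p \\ 0 & 1 \end{matrix}\right],$$

which is a perfectly valid transform from link $i+1$ back to link $i$; the problem, as far as I can tell, is that this inverse can in general no longer be written in the single-link DH form above with some new set of parameters.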
I am trying to run an NXT motor at a slow speed using the Mindsensors motor multiplexer. When I turn it on, it tends to jump approximately 20 to 40 degrees before settling into the slow speed. Has anyone seen this behavior? I am using NXT 1.0 with firmware downloaded from lms_arm_mindsensors_129.rfw. Sample code in NXC (I am using Bricx Command Center as my IDE) is as follows:
MMX_Run_Degrees(SensorPort, Addr, MMX_Motor_2, MMX_Direction_Reverse, MMX_Speed_Slow, 220, MMX_Completion_Wait_For, MMX_Next_Action_Brake);
Wait(500);
MMX_Run_Unlimited(SensorPort, Addr, MMX_Motor_2, MMX_Direction_Forward, 5); // The jump happens here.
while(Sensor(IN_2) < SENSORTHRESHOLD);
Is there a way to use a single DC motor output to drive two different loads (using gears, relays, etc.)? Please see the illustration below: To clarify the illustration: the DC motor drives the "output 1" gear, which is linked through an idler gear to the "output 2" gear. All three are in contact (though the picture doesn't quite show it). Load 1 and load 2 are two separate gears connected to different loads (wheels, etc.) and are initially not in contact with the bottom gears. On switching on relay 1 or 2, the corresponding load gear moves towards output 1 or output 2 and meshes with it to drive load 1 or load 2.
This question was originally asked on Electronics Stack Exchange. I want to know whether it is possible to make a robot that can fly kites. Is this idea practical? I was thinking that making a kite fly is a bit like making a quadcopter or helicopter fly. I just want to know whether this idea is really implementable. Is there an example or similar work I could refer to?
I'm building a quadcopter and I've received my motors and propellers. What's the right way to assemble them? I'm not confident in what I've done, as I'm not sure the propeller will stay in place on a clockwise-rotating motor. I mean, if the motor rotates clockwise, will the screw stay tight, even with the prop's inertia pushing counter-clockwise? Here's what I've done (of course I'll tighten the screw...):
For a pet project, I am trying to fly a kite using my computer. I need to measure how far the cord extends from a device, and I need to somehow read the results out on my computer. So I need to connect this to my PC, preferably using something standard like USB. Since the budget is very small, it would be best if I could salvage the parts from old home appliances or build it myself. What technology do I need to make this measurement?
I have a 4-bar linkage arm (or a similar design) for a telerobot used in the VEX Robotics Competition. I want to be able to press buttons on my PS3-style controller to raise the arm to certain preset angles. I have a potentiometer to measure the 4-bar's angle; it measures the angle of one of the joints in the shoulder of the mechanism, which is similar to this: What type of control should I use to stabilize the arm at these angles?
The textbook I'm using doesn't have the answers to the practice questions, so I'm not sure how I'm doing. Are the following DH parameters correct, given the frames I assigned? The original question is as follows: The arm with 3 DOF shown below is like the one in example 3.3, except that joint 1's axis is not parallel to the other two. Instead, there is a twist of 90 degrees in magnitude between axes 1 and 2. Derive the link parameters and the kinematic equations for $^bT_w$ (where b means base frame and w means wrist frame). Link parameters (here $\alpha_{i-1}$ and $a_{i-1}$ are the constant link twist and link length): $$\begin{array}{c|cccc} i & \alpha_{i-1} & a_{i-1} & d_i & \theta_i \\ \hline 1 & 0 & 0 & 0 & \theta_1 \\ 2 & -90^\circ & L_1 & 0 & \theta_2 \\ 3 & 0 & L_2 & 0 & \theta_3 \\ \end{array}$$ Frame Assignments:
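For reference, the per-link transform I am using to turn the table into the kinematic equations (Craig's modified DH convention, if I have it right) is

$$^{i-1}T_i = \left[\begin{matrix} \cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\ \sin\theta_i\cos\alpha_{i-1} & \cos\theta_i\cos\alpha_{i-1} & -\sin\alpha_{i-1} & -d_i\sin\alpha_{i-1} \\ \sin\theta_i\sin\alpha_{i-1} & \cos\theta_i\sin\alpha_{i-1} & \cos\alpha_{i-1} & d_i\cos\alpha_{i-1} \\ 0 & 0 & 0 & 1 \end{matrix}\right]$$

so that $^bT_w = {^0T_1}\,{^1T_2}\,{^2T_3}$ once the parameters in the table above are substituted in.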
How do I increase the rotation range of a standard servo? Most servos have a rotation range of roughly 180 degrees. I would like to access the entire 360-degree range, partly because I would be attaching the servo's shaft to a robot wheel and would like it to be able to make full rotations. Or is that not possible? I would not, however, like to lose the 'encoder' part of the servo, which lets me specify exactly which angular position the wheel should stop at. If I use gears to extend the range, would that lead to a loss of precision? In addition, would such a transformation allow the wheel to rotate continuously in one direction? From what I understand, this won't work. Would a stepper motor with an external encoder, or a DC motor with an external encoder, work instead?