Currently, I'm using a pan tilt bracket that works with the servos I have (SparkFun product 9065) and is generic to all servos. However, servos aren't accurate enough. I'm looking at other options, but have limited knowledge about them. What would you recommend?
An important consideration is that I need my motor to tilt in two dimensions, to any random $x, y$ coordinate accurately. To do this, I think I'll need some sort of attachment to the motor shaft, like the pan tilt bracket.
What motor and bracket would you recommend I use to go to any random $(x,y)$ coordinate?
|
I'm looking at laser scanners and I see a huge range of detection distances. The furthest I've seen is 30m, if you exclude the very expensive ones that claim up to 150m. My question is: what is the reason for the huge difference in range/price? I would think that with a laser it wouldn't be that difficult to detect something at distances greater than 30m. What's the physical limitation that makes it so much more expensive to go above 30m for a laser scanner, or is there one?
|
I haven't made or flown any quadcopter, but I am fascinated by them.
But when looking at the frames of a lot of designs, after watching this video I wondered why so many frames are in an X shape, since according to the video the most efficient shape would be something like this >-<, where each corner is 120°.
I also did a quick search on the internet and found this blog which stated the same (however he did not mention the exact angle) and said: "Even though this is not entirely a new idea, it has not yet been widely accepted by the community."
|
How do you know if a commercial driver is working with trapezoidal or sinusoidal commutation? If you measure the 3-phase voltage applied to the PMSM by means of an oscilloscope, will you see a difference?
|
The last robotics project I worked on involved autonomous outdoor navigation, using a microcontroller for lower-level control and a computer for image processing and decision making. I worked more with the higher-level software and another guy on the team did the electrical and embedded systems. I would like to be capable of doing everything, including stuff with embedded, but I'm not sure where to look for information. If I were to have done the project from scratch on my own, I'd need to know:
What microcontroller to use
What motors are required
What motor controllers to get, and how to interface with the controllers
What encoders to use for motor feedback, and how to write drivers for them
What batteries to use and how to safely power everything
If I were trying to learn about higher-level software, I'd probably take a few courses on Udacity. Are there any good resources out there like that for this kind of low-level stuff?
|
I'm doing a project related to telemetry, and I want to make ArduPilot (programmed with ArduPlane 2.73) send sensor information such as height, GPS position, etc. through a serial port. I have tried to use ArduStation, but I could not change its firmware to do what I want.
The idea would be to read the Ardupilot's serial port using an Arduino Uno, and then saving it in a SD card in real-time. So ArduPilot needs to send data without any user input or something like that. I've already tried to manipulate ArduPlane source code to do that, but I couldn't either.
Has anyone here done something like that before? I need some advice!
|
I am trying to implement a line following robot that can solve mazes, similar to the Pololu robots you can watch on YouTube. My problem is that the maze I am trying to solve contains loops, so the simple left/right hand rule cannot solve it. I have done some research and think either the Flood-Fill or Breadth-First-Search algorithm will be able to solve these looped mazes. Solving the maze means reaching a large black area where all the sensors will read black. When the robot is following the line some of the sensors will read white and the central ones black.
Are there any other algorithms that can solve looped mazes? I understand how the Flood-Fill algorithm works but am unsure how it could be implemented in this situation.
My robot has 3 sensors on the bottom. The one in the centre is expected to always read black (the black line it follows) and the sensors on the right and left are expected to read white (while following a straight line) and then black once a junction or turn is reached. My robot has no problem following the line, turning etc. I need to find an algorithm that can solve looped mazes (that is, find a path from an entrance to an exit).
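For illustration only (not part of the original question), here is a minimal sketch of how a breadth-first search could look once the maze has been abstracted into a graph of junctions detected by the sensors; the junction names and connectivity below are made up.

from collections import deque

def bfs_path(adjacency, start, goal):
    """Breadth-first search over a junction graph.
    adjacency: dict mapping a junction id to the junctions reachable from it."""
    frontier = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []                  # walk back through came_from to recover the path
            while node is not None:
                path.append(node)
                node = came_from[node]
            return list(reversed(path))
        for neighbor in adjacency[node]:
            if neighbor not in came_from:
                came_from[neighbor] = node
                frontier.append(neighbor)
    return None                        # goal not reachable

# Hypothetical looped maze: junctions A..D plus the goal G; B-D and C-D close a loop.
maze = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "G"],
    "G": ["D"],
}
print(bfs_path(maze, "A", "G"))   # ['A', 'B', 'D', 'G']

The practical work is in building that graph: each time the outer sensors detect a junction, the robot records it and the segment it arrived on, which BFS (or flood fill over a grid) can then search even when the maze contains loops.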
|
I'm trying to understand how noise is represented for accelerometers, gyroscopes, and magnetometers so that I can match the requirements of my project with the standard specs of these sensors.
I want the output of a 3 axis inertial sensor to be values in meters/sec/sec, gauss, and radians/sec for each axis, and noise to be represented by a range around the true value (so X as in +/- X m/s/s, gauss, and radians/sec) for accelerometers, magnetometers, and gyroscopes respectively. Switching out gauss for teslas, meters for feet, radians for degrees, etc. would all be fine.
After looking at a few datasheets, I'm surprised to find that...
Accelerometer noise is measured in "LSB rms" and "μg/√Hz"(https://www.sparkfun.com/datasheets/Sensors/Accelerometer/ADXL345.pdf, http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Accelerometers/MMA8452Q.pdf)
Gyroscope noise is measured in "º/s-rms" and "º/s/√Hz" (https://www.sparkfun.com/datasheets/Sensors/Gyro/PS-ITG-3200-00-01.4.pdf)
Magnetometer noise is measured in "µT rms" and "Gauss/√Hz" (http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Magneto/MAG3110.pdf, http://www.vectornav.com/products/vn200-rug?id=54)
What do these units mean, and how do they (or can they) translate into what I want?
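As a rough sketch of how such specs are commonly interpreted (the numbers below are made up): the √Hz figures are noise spectral densities, so the rms noise you actually see depends on how much bandwidth your output filter lets through, roughly rms ≈ density × √(bandwidth). The result is a 1-sigma value, so the "±X" band you describe is usually taken as 2-3 sigma, and "LSB rms" is the same rms noise expressed in raw ADC counts instead of physical units.

import math

noise_density_ug_per_rtHz = 300.0      # hypothetical accelerometer noise density, ug/sqrt(Hz)
bandwidth_hz = 50.0                    # hypothetical noise-equivalent bandwidth of the output filter

rms_ug = noise_density_ug_per_rtHz * math.sqrt(bandwidth_hz)   # rms noise in micro-g
rms_ms2 = rms_ug * 1e-6 * 9.81                                 # convert micro-g to m/s^2
print(f"approx. {rms_ms2:.4f} m/s^2 rms (1 sigma)")            # ~0.0208 m/s^2
print(f"approx. +/- {3 * rms_ms2:.4f} m/s^2 (3 sigma band)")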
|
I want to do a project where I control motors, and the Arduino seems ideal for that. However, I'm going to need to do quite a bit of processing on the back end first. Connecting the Arduino to a computer isn't an option, because it has to be quite mobile. Does anyone have any experience controlling an Arduino with a Raspberry Pi?
|
I was wondering if it is possible to plug a motor shield on top of an Ethernet shield, even though the direction pins on the motor shield would be connected to the same pins as the spi bus. I was thinking that it would work if, in the coding, I disabled both chip selects on the Ethernet shield before I used the motors.
|
TL;DR:
Can anyone point me to a good adaptive path fill algorithm?
Hey there, my name is James, and my daughter built an awesome painting robot with her friends over at Evil Mad Scientist labs, even brought it to the white house, very fun stuff!
Anyways, I'm a web programmer by day, and thought it might be fun to try and write some software to get the robot to do some cool stuff, and for some crazy reason I decided to do this using standard web technologies like SVG and JavaScript, and... it actually works! [see the project at github.com/techninja/robopaint]
But there's a problem: to fill in colors for given shapes, it requires some kind of path filling algorithm using a given size shape to cover every part of the shape internally, not to mention it has to take into account overlapping paths and occlusions.
I have successfully created "fake" fills, by following known paths like spirals and back and forth hatch lines over paths, while detecting occlusion using browser internal functions for detecting what object lies at a given x/y coordinate, but these fill functions fall incredibly short of doing anything other than simply following paths, and can be incredibly inefficient at filling certain paths (like filling borders, letters, or U shaped areas).
The question: I need an adaptive path filling algorithm. I know they're currently being used in similar CNC setups for milling, and similar algorithms are used by the Roomba and 3D printers to figure out coverage in the most efficient way possible. The issue comes in that I don't think any have ever been done in JavaScript, using native SVG paths.
Anyone out there know where I should look? I'm not too afraid to attempt to port something over to JS, or possibly even use it as is for a native Node.JS module. All my work will be sure to go back to the community and become open source as well.
Thanks for the help!
|
3D CAD:
After some editing:
I'm building the frame myself from aluminum and polycarbonate.
EDIT: The whole frame is polycarbonate except for the arms (and all screws, nuts & washers); they are made from aluminum covered in polycarbonate. The total length (from one motor to another) is exactly 60cm.
Approximate Mass: 1.736 kg
What I thought to put in:
Battery: ZIPPY Flightmax 4000mAh 3S1P 20C (changed from this 1500mAh battery)
Motors: Turnigy Aerodrive SK3 - 2822-1090kv
PDB: Hobby King Quadcopter Power Distribution Board
ESC: Hobbyking SS Series 18-20A ESC
Core: Arduino Nano
IMU: Invensense MPU-6050
Propellers: 10x4.5 SF Props 2pc Standard Rotation/2 pc RH Rotation (changed from 8x4.5 ones)
I'm going to program it myself (or at least try to).
I WANT TO USE IT AS A UAV (self controlled)
Questions:
Is this a good setup?
It costs too much for me; is there any way to keep the frame and change the
parts to be cheaper and yet still have a good quadcopter?
Does the propeller fit the motor?
Any place to buy these for cheaper?
Is it a good frame?
Am I missing some parts?
Thanks a lot,
Dan
EDIT:
I checked it using this site and it gave no errors (I guess that's a good sign).
The camera on the image is just for the picture (no camera is intended there).
|
If one motor has more power (W) than another, does that mean it is better?
|
Just out of curiosity, can the concept of a quadcopter be applied to an ROV? Would it work the same way underwater as it does in air? If not, what kind of modifications would it take to implement that idea underwater?
|
I am interested in starting a robotics club at my high school next year, since we don't have one, and I want to know what the pros/cons are of common kits. I already have a Vex kit since my uncle is one of the regional suppliers for Indiana, but I'm not sure that Vex is the best choice. What are the pros/cons of the other kits out there?
Note: I looked at Lego Mindstorms, and have used them when I was learning robotics, but do not want them for this. Also, I own a Raspberry Pi.
|
I'm trying to find the optimal-time trajectory for an object (point mass) from point p0 with initial velocity v0 to point p1 with velocity v1. I'm considering a simple environment without any obstacles to move around. The only constraint is a maximum acceleration in any direction of a_max and optionally a maximum speed of s_max.
This is easy to work out in 1D, but I struggle with the solution in 2D and 3D. I could apply the 1D solution separately to each dimension, but this would only limit acceleration and speed along each individual axis; the combined magnitude across dimensions might still exceed the limits.
Are there any textbook, closed-form solutions to this problem? If not, how can I solve this in a discrete-event simulation?
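For the discrete-event-simulation route, one simple (and deliberately non-optimal) way to respect the limits "in any direction" is to clamp the Euclidean norm of the commanded acceleration and velocity vectors rather than each axis separately. A sketch under that assumption, with a hypothetical PD-style tracking law and made-up gains:

import numpy as np

def clamp_norm(vec, limit):
    """Scale vec so its Euclidean norm never exceeds limit."""
    n = np.linalg.norm(vec)
    return vec if n <= limit else vec * (limit / n)

def step(p, v, p_target, v_target, a_max, s_max, dt, k_p=1.0, k_v=2.0):
    """One simulation step: a simple tracking law whose acceleration command
    is clamped as a whole vector, so |a| <= a_max in every direction."""
    a_cmd = k_p * (p_target - p) + k_v * (v_target - v)   # hypothetical gains
    a_cmd = clamp_norm(a_cmd, a_max)
    v_new = clamp_norm(v + a_cmd * dt, s_max)             # speed limit, also by norm
    p_new = p + v_new * dt
    return p_new, v_new

p, v = np.zeros(2), np.zeros(2)
for _ in range(200):
    p, v = step(p, v, np.array([5.0, 3.0]), np.zeros(2), a_max=2.0, s_max=4.0, dt=0.05)
print(p, v)   # converges toward the target under the norm-limited commands

This only enforces the constraints; it does not give the time-optimal trajectory the question asks for.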
|
I'm endeavoring to prototype a challenging sorting mechanism inside a fridge and would appreciate any constructive tips on how to get from the specs to a plausible design.
Problem
The aim of the game is to identify and sort food items in the limited space of a fridge
- such that a user would push their unsorted shopping into a chamber at the top of the enclosure
- and the machine inside would then try to identify the contents with help of bar-codes (first big problem)
- and then sort and move the items according to their identities into different chambers below (second big problem).
Solution?
Are there any existing devices that already serve such functions (automatic bar-coding and sorting), the designs of which could perhaps inform the mechanics of the device I'm planning to construct?
I'm thinking maybe manufacturing plants or packing factories with conveyor belts etc. may use systems that already solve such problems? Or filtering mechanisms in candy dispensers, or mechanized lifting forks? Textbook engineering mechanisms?
|
It seems popular to build robotic systems with separate supplies for the electronics and motors/servos, but what would be involved in a circuit that powers both from one source?
If it helps, here is my particular situation. I want to use a single 7.4V Li-Po pack to power an embedded system (2.5-6V operating voltage input, max 1A current draw) and up to 18 standard-sized servos (HS-645, 4.8-6V operating voltage). This would be a hexapod, so it is unlikely that all servos would be stalling at once with normal gait patterns.
|
My team developed a filling machine for liquids which uses a piston to measure and deliver volumes of 0.1 to 1 liter into bottles. It is built mostly with mechanical parts and we'd like to replace most of them with electronic ones.
How do I build a machine that pulls liquid from a reservoir and fills a bottle using a piston, with electronic parts such as stepper motors, linear actuators and sensors?
I understand this is somewhat vague. Any aligned response is appreciated.
Update:
This machine should, at its max speed, fill a 1 liter bottle with water in 2 seconds (to deliver 30 bottles per minute). Higher viscosity liquids may take longer.
It should not spill so liquid needs some filling acceleration control.
You may assume two operation modes: with bubbles and without bubbles. The first is a plus.
I'd like to be able to change the volume electronically (via a LCD menu).
I thought of a single main valve that switches between the reservoir and the bottle. That should be controlled electronically too. I could use two valves too.
|
I'm looking for an electrical explanation of the statement here:
Warning: Do not connect power (red + & black -) from the RC10 (A10) &
RC11 (A11) connectors to the servos, just use the signal lines. Power
the servos via the PWM Outputs connectors. These solutions will avoid
the scenario that can possibly happen when the Pan/Tilt servo draw too
much current and cause the APM to brownout (reset)
I examined the APM2.5 board drawing and schematic here and the grounding is a little unclear. On the schematic, they all just go to GND, but on the board drawing some of the ground pins appear unconnected to traces. I checked for continuity, and there is no continuity between the PWM grounds and the A10/A11 ground pins. By the way, my power setup is that I have J1 enabled and I am using an ESC to power the board.
Can anyone figure out, electrically, what is between these two sets of ground pins?
Ground pins appear unconnected to the traces:
PWM ground just connected to GND:
analog output ground also connected to GND:
|
I am building a robot where power density is critical, and looking into using lithium thionyl chloride (SOCl2) batteries. I will be drawing around 20A constantly and need between 12 and 17V. The batteries I have found so far are AA-sized and designed to deliver 100mA, 3.6V, 2.4Ah, and weigh 19g each. I could picture a pack with 4 blocks of these batteries in series, where each block has 20 batteries in parallel. That would mean 48Ah, which is way more than the ~10Ah that I need, and 1.52kg, which is more than I would like the robot to be carrying.
So, the question is, is there a way to achieve 10Ah at 20A and 14.4V (i.e. for 5 hours) using SOCl2, carrying much less weight than 1.52kg?
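Purely as a sanity check of the pack arithmetic in the question (a sketch; it simply reproduces the stated cell figures and highlights the per-cell current that a 20A draw would imply):

cell_capacity_ah = 2.4      # per AA SOCl2 cell, as stated above
cell_voltage = 3.6
cell_mass_kg = 0.019
cells_in_series = 4         # 4 * 3.6 V = 14.4 V
cells_in_parallel = 20      # 20 * 100 mA rated drain = 2 A nominal per block

pack_voltage = cells_in_series * cell_voltage
pack_capacity_ah = cells_in_parallel * cell_capacity_ah
pack_mass_kg = cells_in_series * cells_in_parallel * cell_mass_kg
current_per_cell_a = 20.0 / cells_in_parallel     # ~1 A, ten times the 100 mA rating
print(pack_voltage, pack_capacity_ah, pack_mass_kg, current_per_cell_a)
# 14.4 V, 48.0 Ah, 1.52 kg, 1.0 A per cell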
|
Are there any Python3 modules used to program robotic movement by declaring a device or component and then providing the instructions? I am not looking for modules that test the components.
|
So I built three little bots so far.
One with a Raspberry Pi (6V motors), one with an Arduino (12V motors), and another with an Arduino on a hacked remote-control car (roughly 7V motors):
The problem I have with all these is that the motors spin so fast the car just bumps into something in a few seconds. (I have a small house)
I tried to use PWM to control the speed, but at a little less than full throttle (255) they jam up and they can't take the weight I put on them.
Should I buy one of these chassis that slow the motor down and give it torque with a gearbox, or is there anything else I can do?
|
I'm looking for a way to create a non-rotating persistence of vision device. I have all the electronics set up but I'm stumped with the mechanical design.
I tried these two designs:
But these didn't work so well. Everything shakes violently and it doesn't go nearly as fast as I need (about 20 swipes per second)
Any ideas on how I can build this?
|
I have two Kinects (each on its own USB card) whose cameras I'm watching in RVIZ through OpenNI, and the structured light pattern of one is flashing intermittently - it's only there for a short flash every two seconds. Obviously, the structured light depth calculations only work if the pattern is always projected when the camera is looking at it.
What causes this, and how do I fix it?
EDIT: I was also having this issue with a single Kinect, for which I discovered the issue, as detailed in my own answer. However, the problem persists when two Kinects are plugged in, with one Kinect flashing and the other functioning normally.
|
If a street is extremely crowded to an extent that the terrain is not visible from the point of view of the LIDAR (e.g. in google's self driving car), can it still manage to localize itself and continue to operate? I recall Sebastian Thrun saying that Google's car cannot navigate through snow filled roads since the onboard LIDAR cannot map the terrain beneath the snow (e.g. here).
[Edit : Based on the comments] Clarifying the context, here "not visible" means there is an obstacle between the LIDAR and the terrain
|
I'm working on building a line follower robot and want to optimize its performance. It was suggested that I use a PID algorithm. I read a lot about PID but am confused a bit regarding following:
I've calculated the error_value using $k_p * proportional + ...$
But regarding the change in motor speed, I'm confused as to what to use in the comparison: the difference (i.e. current position - setpoint) or the error value. That is, should I use
if (difference > 0)
{ //code for changing appropriate motor's speed using error_value }
or
if (error_value > 0)
{ //code for changing appropriate motor's speed using error_value }
Also is there any specified range for the values of the constants $k_p$, $k_i$ and $k_d$?
I'm using a differential wheeled robot for my line follower.
Also, I would be happy if someone could suggest any other advanced optimization algorithm for improving the line follower robot.
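Not an authoritative answer, just a sketch of the usual structure: the signed error (current position minus setpoint) is fed directly into the PID terms, and the PID output is applied as a differential correction to the two motor speeds, so a separate if (difference > 0) branch is normally not needed. Gains, base speed, and the sensor scaling below are hypothetical.

KP, KI, KD = 1.0, 0.0, 0.2      # hypothetical gains; tune for your robot
BASE_SPEED = 150                # hypothetical base PWM value

integral = 0.0
last_error = 0.0

def pid_step(position, setpoint, dt):
    """Return (left_speed, right_speed) for one control cycle.
    'position' is a signed estimate of where the line is under the sensors."""
    global integral, last_error
    error = position - setpoint            # signed: the sign says which side of the line you are on
    integral += error * dt
    derivative = (error - last_error) / dt
    last_error = error
    correction = KP * error + KI * integral + KD * derivative
    # The sign of 'correction' already decides which wheel speeds up
    # and which slows down, for both left and right deviations.
    return BASE_SPEED + correction, BASE_SPEED - correction

print(pid_step(2.0, 0.0, 0.02))   # robot is to one side of the line -> unequal speeds

There are no universal ranges for $k_p$, $k_i$ and $k_d$; they depend on the sensor scaling, the motor response and the loop rate, which is why they are normally tuned experimentally (e.g. raise $k_p$ until oscillation, then add $k_d$).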
|
I have built a particle filter simulator and I wanted to add the following functionalities.
Limited Range Vision (Robot can see up to 50 meters)
Limited Angle Vision (Robot can see within a certain angle w.r.t its current orientation. e.g. If the current orientation is 30 degree then it can see in the range from 0 to 60 degree.)
I have managed to add the Limited Range Vision functionality but am unable to add Limited Angle Vision.
Method to Sense the landmarks distance within the range
public double[] sense(boolean addNoise) {
    double[] z = new double[World.getLandmark().getLandmarks().size()];
    for (int i = 0; i < z.length; i++) {
        Point lm = World.getLandmark().getLandmarks().get(i);
        double dx = x - lm.getX();
        double dy = y - lm.getY();
        double dist = Math.sqrt(Math.pow(dx, 2) + Math.pow(dy, 2));
        if (addNoise) {
            dist += Util.nextGaussian(0, sense_noise);
        }
        if (isBoundedVision()) {
            // TODO Limited angle vision
            // if robot can see within 60 degree angle w.r.t its orientation
            if (dist <= laserRange) {
                z[i] = dist;
            }
        } else {
            z[i] = dist;
        }
    }
    return z;
}
Method to calculate the probability of this particle
@Override
public double measurement_prob(double[] measurements) {
    double prob = 1.0;
    int c = 0;
    double[] myMeasurements = sense(false);
    for (int j = 0; j < measurements.length; j++) {
        if (measurements[j] != 0) {
            prob *= Util.gaussian(myMeasurements[j], sense_noise, measurements[j]);
            c++;
        }
    }
    if (isBoundedVision()) {
        if (c > 0) {
            // increase the probability if this particle can see more landmarks
            prob = Math.pow(prob, 1.0 / c);
        } else {
            prob = 0.0;
        }
    }
    return prob;
}
Coordinates are relative to the robot and for distance calculation I am using the Euclidean Distance method and my Robot gets localized correctly.
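Not part of the question itself, just a sketch (in Python rather than the Java above) of the bearing test that could sit where the TODO is: compute the world-frame bearing from the robot to the landmark, subtract the robot's orientation, wrap the result to (-pi, pi], and accept the landmark only if the absolute relative bearing is within half the field of view. All names are hypothetical.

import math

def in_field_of_view(robot_x, robot_y, robot_theta, lm_x, lm_y, fov_rad):
    """True if the landmark lies within a symmetric cone of width fov_rad
    centred on the robot's heading (all angles in radians)."""
    bearing = math.atan2(lm_y - robot_y, lm_x - robot_x)   # world-frame bearing to landmark
    relative = bearing - robot_theta                        # bearing in the robot frame
    relative = math.atan2(math.sin(relative), math.cos(relative))  # wrap to (-pi, pi]
    return abs(relative) <= fov_rad / 2.0

# 60-degree total cone, robot heading 30 degrees: bearing to (3, 2) is ~33.7 deg -> visible
print(in_field_of_view(0.0, 0.0, math.radians(30), 3.0, 2.0, math.radians(60)))   # True

In the Java sense() method this would be an extra condition next to dist <= laserRange, applied to both the real robot and each particle so that the measurement model stays consistent.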
|
I was considering using a Raspberry Pi controlling a USB Relay to dim 12V LED lights, but I'm having trouble finding a solution that isn't a simple ON/OFF. What type of device would I need for dimming?
|
I am interested in using this miniature motor Squiggle Micro Motor to create very tiny horizontal movements. However, due to very limited space, I can only place it vertically within my project.
Assuming this motor is placed as follows, how can one adapt it to simultaneous movement at a right angle? (Ideally with the X-axis movement matched to the Y-axis movement as well as possible.)
|
I have no problem reading circuit schematics, connecting wires, resistors etc. (based on, for example, instructables.com), but I've only tried to learn Java (a while ago) and it's difficult for me to figure out what's going on with all this C code stuff.
Are there any tutorials that are focused on the programming part?
thanks
|
Many robot applications (actually, the most practical and appealing ones) involve the robot's reaction to (and impact on) the environment, e.g. due to the stochastic nature of the environment (e.g. when the robot should handle non-rigid objects like clothes) or due to the variability of the environment (e.g. harvesting robots should be prepared to pick fruits of different sizes and shapes).
The question is: is there a robotics simulator that can simulate not only the robot but also the environment? E.g. one that can simulate the response of the robot's action when folding cloth or picking fruit, and so on. I guess that such a simulator is really non-trivial, but maybe there is some ongoing project for it?
|
Hi, I just thought it would be useful to post my idea here.
I've seen videos about automated quadcopters: http://www.ted.com/talks/raffaello_d_andrea_the_astounding_athletic_power_of_quadcopters.html and http://www.ted.com/talks/vijay_kumar_robots_that_fly_and_cooperate.html.
I surfed pages from the companies presenting this research and other information on the internet, but I haven't found why they use quadcopters specifically.
I understand how accelerating, rotating and rolling work in those quadcopters - it's simple - but they claim that quadcopters have the minimum number of working parts to fulfill their needs, which I don't agree with. I think that tricopters are better at this (duocopters can't rotate horizontally, but tricopters can, by inclining and then powering the remaining left or right propeller).
I read forums, calculated, and drew drafts of both tri- and quadcopters, and found that a tri is much more efficient than a quad in just about everything with the same props and battery, when you take into account that the best properties come from the smallest copter with the largest props, so:
3:4 moving parts (no vectored yaw on the tri), 9.5:16 size, building a Y instead of an X construction takes far less material (1.5:2.82), lower maximum power input (3:4), and better power efficiency gives longer flight time; tricopters also have improved agility over quadcopters.
The only disadvantage I see is the somewhat complicated horizontal rotation of a tricopter without vectored yaw, which can be a problem in human-controlled machines but is easily solved by simple algorithms in automated machines -> it's not a real disadvantage, just a small piece of work to be done. I was thinking about doing this in my bachelor thesis, but for now I am looking for your opinions, thanks!
EDIT: Maybe the torque is the problem, because on tricopters you can have all 3 props spinning in one direction, or 2 in one direction and 1 in the opposite, and it's not symmetrical either way, but I'm not sure if this is the main problem...
|
If this has already been answered, by all means please point me to it.
I am in the process of building a quadcopter which I eventually plan to run autonomously by allowing it to track an object and take a video feed of it moving.
GPS is one of the options I've considered, basically:
GPS antenna on moving object (person, car, bike, surfer)
GPS antenna on quadcopter
Radio to transmit coordinates from moving object to quad copter
Some of the challenges I can foresee are
Line of sight for camera. How does the camera know exactly where to point?
Angle, how can I pre-program the quad to always record, say... 10m to the right of the moving object, or even better, program a set of angles to record from whilst keeping up with the object
GPS accuracy, what happens if the GPS lock is weak?
What are some of my other options? I saw this TED Talk where the quads are following a ball shaped sensor? I believe it uses Kinect cameras and lots of them which is not really an option for this challenge.
So I'm open to hearing some ideas before I start research and development of these features.
|
An answer to the question Why are quadcopters more common in robotics than other configurations? said:
You need 4 degrees of freedom to control yaw, pitch, roll and thrust.
Four props is therefore the minimum number of actuators required. Tricopters require a servo to tilt one or more rotors, which is more mechanically complicated.
In a comment, I asked:
How do you get pure yaw motion with a quadcopter, and if that's possible, why won't this work with a tricopter? I don't understand how you can get yaw motion with any system where all rotors are in a plane without first tilting and moving. I would have thought that the main difference between quadcopters and tricopters would be that the kinematic calculations are more complex.
Another answer explained:
you get pure yaw in the following way:
North and South motors rotating the same speed but collectively at a higher (or lower) speed than East and West Motors which are also at the same speed.
This explains why it works with a quadcopter, but doesn't explain why it won't work with a tricopter.
Is it simply the fact that the asymmetry means that you can't imbalance the torque effects to provide yaw movement while still keeping the thrusts balanced to keep pitch and roll constant?
|
I am trying to launch a file from a remote computer but I could not succeed. Actually, I can connect to the remote computer, but I think the problem is with including a file from the remote computer. In other words, I am looking for a machine tag for include. Here is my code:
<launch>
<group >
<machine name="marvin-1" address="tek-marvin-1" user="blabla" password="blabla" env-loader="/home/blabla/.rosLaunchScript.sh"/>
<include file="$(find openni_launch_marvin)/launch/kinect_left.launch"/>
</group>
</launch>
|
For a DC motor with a stall current of 950 mA, what should the H-bridge's current rating be? What will happen if we use our H-bridge L293D whose max. output current is 600 mA?
|
I found a good explanation of how to remove accelerometer bias (when on a flat table only one axis should show values, the other two should be 0). I've calculated the S and B factors (page 3):
Record $B_x^{0g}$, $B_y^{0g}$, $B_z^{0g}$, $S_{xx}$, $S_{yy}$, and $S_{zz}$ in EEPROM or flash memory
and use these values in all subsequent calculations of acceleration to
get the corrected outputs.
I don't know how to incorporate these into the final calculation of accelerations. I guess the bias should be subtracted from my sensor reading. What about the sensitivities (S)?
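A common way those two numbers are applied per axis (a sketch only; check it against the note you used) is to remove the 0 g bias first and then divide by the sensitivity, i.e. a_corrected = (a_measured - B_0g) / S:

def correct_axis(raw_counts, bias_0g, sensitivity):
    """Apply the stored calibration: remove the 0 g bias, then scale by the
    sensitivity (counts per g) to get acceleration in g."""
    return (raw_counts - bias_0g) / sensitivity

# Hypothetical numbers: 0 g bias of 18 counts, sensitivity of 256 counts per g.
print(correct_axis(274, 18.0, 256.0))   # -> 1.0 g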
|
For Halloween, I'd like to build a spider that drops from the ceiling when it detects motion and then rewinds itself back up to scare the next kids. I already have a lightweight foamish spider from Hobby Lobby that I'd like to use, but I need help adding the smarts to it to scare the kids.
Ideally, it'd be able to detect how tall/far away the kid is to drop to a custom height each time, but I'd settle for a standard dropping height if that's too much. I even had an idea of having a motion sensor and having it shoot Silly String webs in front of people.
I have a very technical background, but I'm a total noob when it comes to robotics, so ideas as to what components and considerations I'd need would be greatly appreciated!
|
I've already asked a related question (accelerometer bias removal) here on Robotics and got somewhat better results on the corrected accelerometer output. To get even better results I found the calibration equations (7th & 8th paragraph) from VectorNav, which are a bit more elaborate than the solution in the linked question:
However, six more variables are needed:
Sensitivity of sensor X-axis to Y-axis inputs ($M_{xy}$)
Sensitivity of sensor X-axis to Z-axis inputs ($M_{xz}$)
Sensitivity of sensor Y-axis to X-axis inputs ($M_{yx}$)
Sensitivity of sensor Y-axis to Z-axis inputs ($M_{yz}$)
Sensitivity of sensor Z-axis to X-axis inputs ($M_{zx}$)
Sensitivity of sensor Z-axis to Y-axis inputs ($M_{zy}$)
Below it is also stated:
IEEE-STD-1293-1998 [...] provides a detailed test procedure for
determining each of these calibration parameters
However, after searching through the 1293-1998 standard (especially page 201 in Google Docs) I didn't find any clue on how to calculate the $M$ values. Also, the $B_{d}$ and $V_x$ values from the VectorNav equations are not explained anywhere. Can someone point me further?
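For reference, a commonly used form of the model behind those twelve parameters (a generic formulation, not necessarily the exact notation of the VectorNav page or the IEEE standard): the measured vector is the true acceleration multiplied by a full 3x3 matrix whose diagonal holds the axis sensitivities and whose off-diagonal entries hold the cross-axis terms, plus a bias vector, and the correction is the inverse of that relation:

$$
\begin{bmatrix} a_{x}^{meas} \\ a_{y}^{meas} \\ a_{z}^{meas} \end{bmatrix}
=
\begin{bmatrix}
S_{xx} & M_{xy} & M_{xz} \\
M_{yx} & S_{yy} & M_{yz} \\
M_{zx} & M_{zy} & S_{zz}
\end{bmatrix}
\begin{bmatrix} a_{x} \\ a_{y} \\ a_{z} \end{bmatrix}
+
\begin{bmatrix} B_{x} \\ B_{y} \\ B_{z} \end{bmatrix},
\qquad
\mathbf{a} = \mathbf{S}^{-1}\left(\mathbf{a}^{meas} - \mathbf{B}\right)
$$

In practice the matrix and bias are usually estimated by placing the sensor in several known static orientations (e.g. each axis pointing up and down against gravity) and solving the resulting overdetermined linear system in a least-squares sense.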
|
I have a project which requires a robot to move around a room with a flat surface (concrete floor).
The robot must carry a laptop. I estimated that the total weight would be 6-7kg (including motors, battery, laptop, motor controller board and other misc items). I would like it to move at about the same speed as a Roomba moves. The robot will have two motors and a castor.
I have tried doing the calculation to determine the type of motor to use, but I'm very confused.
Can someone advise me on the type of motor and type of battery (Lipo/SLA) to use?
|
I'm starting to attempt to fly FPV on my quadrotor. I am using a FrSky D8R-II 2.4 GHz frequency hopping diversity receiver (two antennas) for control and recently added a no-name 910 MHz 1.5 watt analog video transmitter for FPV flying:
When the video transmitter is powered up, my control range drops from about 1.5km to under 50m. I'm surprised that the 910 MHz video channel affects my 2.4 GHz control channel this way.
Is this sort of interference expected? Is it because my transmitter is low quality? What changes are recommended — should I switch to a UHF control radio? Or a different frequency (eg 5.8 GHz?) for the video radio? Or just try moving it a few more inches (they are already about 5in apart)?
|
Our robot has a circular array of 12 sonar sensors that looks like this:
The sonar sensors themselves are pretty good. We use a low-pass filter to deal with noise, and the readings seem pretty accurate. However, when the robot comes across a flat surface like a wall, something weird happens. The sonars don't show readings that would indicate a wall, instead, it appears like a curved surface.
The plot below was made when the robot was facing a wall. See the curve in the blue lines, as compared to the straight red line. The red line was produced by using a camera to detect the wall, where the blue lines show filtered sonar readings.
We believe this error is due to crosstalk, where one sonar sensor's pulse bounces off the wall at an angle and is received by another sensor. This is a systematic error, so we can't really deal with it like we would with noise. Are there any solutions out there to correct for it?
|
I want to compute an existence probability for an object in a high-level sensor fusion (having from each sensor a list of objects already filtered with, e.g., a Kalman filter).
There are these formulae:
$$LR(G)_{old} = \frac{p(Ex_{out})_{old}}{1 - p(Ex_{out})_{old}}$$
$$\alpha = \frac{p(Ex_{in})_{old}}{p(Ex_{in})_{old}*(1 - p(Ex_{in})_{new})}$$
$$LR(G)_{new} = LR(G)_{old} * \alpha$$
$$p(Ex_{out})_{new} = \frac{LR(G)_{new}}{1 + LR(G)_{new}}$$
Where $p(Ex)$ is the probability of existence and $LR$ is the Likelihood Ratio.
The idea is that $p(Ex)_{in}$ is some probability existence of local object, which was fused into the global $p(Ex_{out})$, and its probability influences that global one. $old$ would mean values from previous cycle.
How do you condition that computation to avoid situations of dividing by zero, or obtaining NaN or Inf? Also, if $p(Ex_{in})_{new}$ is almost 1, then $\alpha$ will be huge, increasing the output, and increasing it enormously in each later cycle, so that the object will live forever. How can this be prevented?
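One defensive way to implement this, keeping the formulas above exactly as given (a sketch, with hypothetical constants): clamp every probability into [eps, 1 - eps] before forming the likelihood ratios, and cap the resulting ratio, so the division never sees 0 or 1 and a single near-certain measurement cannot saturate the fused probability forever.

EPS = 1e-3          # hypothetical floor/ceiling for probabilities
LR_MAX = 1e3        # hypothetical cap on the likelihood ratio

def clamp(p, lo=EPS, hi=1.0 - EPS):
    return max(lo, min(hi, p))

def fuse_existence(p_out_old, p_in_old, p_in_new):
    """One cycle of the LR update from the question, with clamping."""
    p_out_old = clamp(p_out_old)
    p_in_old = clamp(p_in_old)
    p_in_new = clamp(p_in_new)

    lr_old = p_out_old / (1.0 - p_out_old)
    alpha = p_in_old / (p_in_old * (1.0 - p_in_new))   # alpha formula exactly as in the question
    lr_new = min(lr_old * alpha, LR_MAX)               # keep the ratio bounded
    return lr_new / (1.0 + lr_new)

print(fuse_existence(0.7, 0.6, 0.9))   # ~0.959

Capping the ratio (or letting it decay when no measurement is associated with the object) is a common way to keep tracks from becoming immortal; whether the $\alpha$ formula itself should instead compare the new and old local likelihood ratios is a separate modelling question.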
|
What is the name for the transfer function, GH, in a simple feedback control system like
$$y=\frac{G}{1+GH}u$$
What do you call G? What about (G/(1+GH))?
I am confused by the fact that "open-loop transfer function" and "loop transfer function" are both used to mean GH by different people. Which is academically correct or widely accepted?
There are several terms:
"closed-loop transfer function"
"open-loop transfer function"
"overall transfer function"
"Loop transfer function"
"loop gain"
"loop ratio"
Thanks
|
I'm quite new to mechanical engineering and not familiar with gears and motors. A few days ago, I bought a second-hand GWS servo motor for my project, and it didn't include gears.
Can someone help me understand the correct measurement of my motor so I can buy the correct gear to fit on my servo motor? Specifically, the measurement of the gear hole:
|
I am a software engineering student and don't know much about hardware.
I recently have started a project in my AI course. The project is playing a 3x3 tic-tac-toe game between computer and a human.
In a tic-tac-toe board, suppose you play first and you put a cross mark in a certain place (on the tic-tac-toe board your position is (3,1)).
Now my computer will take a picture with a webcam and analyse this picture with the help of OpenCV (an open-source C++ library for image processing; see opencv.org for details). After finding the position of your cross mark, it will use my algorithm to find the optimal position at which to put a circle. For output it will just speak out the position.
Now, I want to make a robotic hand that can draw this circle as output. Would anybody please help me find the hardware and other materials that are needed to make that robotic hand? It would be very helpful if anybody could suggest some tutorials.
I Googled and found many many suggestions, but I am confused about what to choose.
|
I want to give my robot a differential mechanism for turning and steering. Consider the case of turning a right-angled corner: the robot will achieve this by following a gradual circular arc through the intersection while maintaining a steady speed. To accomplish this, we increase the speed of the outer wheel while slowing that of the inner. But supposing I want the turn to be within a definite radius, how do I calculate what ratio the two speeds have to be in? Can someone give me an insight into this?
What I've done is this, although I have my doubts.
If the speed of the right wheel is $V_r$ and the speed of the left wheel is $V_l$, then the ratio of their speeds while turning will be equal to the ratio of the circumferences of their corresponding quadrants.
Therefore
$$V_r :V_l =\frac{r+A}{r}$$
Is this right? I have a sinister feeling I'm missing something out...
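A quick numeric sanity check of that ratio under the usual rigid-body assumption (each wheel's speed is proportional to its distance from the instantaneous centre of rotation); r, A and the inner speed below are made-up numbers:

r = 0.30        # hypothetical radius of the inner wheel's arc, metres
A = 0.15        # hypothetical track width (distance between the wheels), metres
v_inner = 0.20  # hypothetical inner wheel speed, m/s

ratio = (r + A) / r                 # V_outer / V_inner, as in the question
v_outer = v_inner * ratio
print(ratio, v_outer)               # 1.5, 0.30 m/s

# The robot centre then follows an arc of radius r + A/2 at the mean speed,
# and both wheels share the same angular rate about the turn centre:
v_centre = (v_inner + v_outer) / 2.0
print(v_centre / (r + A / 2.0), v_inner / r, v_outer / (r + A))   # all ~0.667 rad/s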
|
I am thinking of creating a robot that can navigate using a map. It is controlled from a PC. An 8-bit controller performs low level tasks and the PC is doing the image processing. I plan to implement it in a single room where the robot is placed and the robot and environment are tracked by a camera from a height or from the ceiling of the room. First, the robot needs to be mapped, like this http://www.societyofrobots.com/programming_wavefront.shtml
To do:
Track the robot from some height using a camera.
Follow the Wavefront algorithm to locate the robot and obstacles.
Procedure (just my idea):
The camera will give an image of the robot surrounded by obstacles in random places.
Using some OpenCV technique, draw a grid over the image.
Locate the grid cell which contains the robot (by having some colored symbol on the robot) and locate the cells containing the obstacles.
Now, the cells with an obstacle are treated as walls and the remaining ones are free space for the robot to navigate.
The goal position the robot should reach is given from the PC (maybe by pointing to the place to reach in the image with a mouse click).
Unknowns :
Mapping the room and locating the robot
How do I do that? The robot should know where it is on the map or in the image. We cannot trust the camera alone to locate the robot, so I thought of adding triangulation, like placing two IR emitters in the room and a receiver on the robot.
The doubt I have with this is how an IR receiver can know from which direction it is receiving the IR signal (from the left or right). I think it only knows that it receives IR, not the direction. Then how is the triangulation going to happen if I don't know the angle and direction?
Coming to the image processing, how can I implement the Wavefront algorithm (capturing the live video and drawing a grid over it to find the robot and the obstacles)?
I have HC-05 Bluetooth module, Arduino, Bluetooth dongle, chassis with dc motors and driver, and a dc supply.
|
I'm moving from controlling a robot arm with basic time-based motors controlled using a Raspberry Pi and Python to a more advanced robotic arm that has servos (e.g. HS-425BB). With the time-based motors I must constantly keep track of the arm's position (guesswork) and make sure it doesn't turn too far.
Do servos automatically stop if you give them a position that is outside of their boundaries rather than grinding the gears?
|
I'm a newbie to Electronics/ Robotics, but I love to do it as a hobby.
So I want to build a circuit (really small in size would be much better) with a motion sensor that can communicate data to my computer over WiFi (basically, when it senses motion it sends a signal). Is this possible?
If so, how do I do it? Maybe a schematic diagram, or some way to start the project, would be a great help.
Thank you!!
|
I am building a 2-wheel robot to carry a 7kg load. I used this link to get the torque required for each motor, which is 11kg-cm. So I chose 2 of these motors, which have 36kg-cm (2x more) torque. I noticed that the stall current is 14A.
Can I use Adafruit Motor Shield for Arduino which is rated 3A peak current capability?
If not what current rating driver should look into?
Thanks!!
|
We are using ArduIMU (V3) as our Quadrotor's inertial measurement unit. (we have a separate board to control all motors, not with ArduIMU itself).
Now we have a problem with ArduIMU's sensor output. When we put our quadrotor steady on the ground with the motors on, instead of getting 0 degrees in roll and pitch we get a noisy output, something like the image below (-6 to 6 degrees of error):
delta_t = 0.2s
We are sure that this isn't a mechanical problem, because we checked the mechanical joints and everything.
I should mention that with motors off everything is going well. Also we checked that if we vibrate the device slowly on yaw axis or any other axis, it still shows the noisy output.
We are using DCM filter inside ArduIMU, also we tested with Kalman filter but no difference.
We also tested an FIR low-pass filter; the results are good but there is about 3 seconds of delay in the output.
We also checked that if we separate the ArduIMU's power from our circuit, there is still no difference.
What's the problem with the ArduIMU and how can we get rid of this noisy output?
Update:
We think that the problem with our PID controller is caused by this noise... Is this a correct assumption? We can't tune our PID parameters (using the Ziegler–Nichols method) when we have noisy data. We tested the Ziegler–Nichols method when we had low-level noise and successfully tuned our PID, but when the noise appears we are unable to tune the PIDs. Is there any way for us to tune our PID in such a situation? Is this problem because of the noise, or can the PID itself get rid of it?
|
I was at a Robotics conference earlier today and one of the speakers mentioned robots not being able to function as well in a crowd because they can't single out audio like a person can.
Why can people single out audio so well? And what would it take for a robot to do the same?
I'm aware of Active Noise Reduction (ANR) like on Bose Aviation headset, but that is not what I'm talking about. I am thinking about the ability to take everything in but process only what you feel is important.
|
What is the minimum amount of power that a beaglebone needs to start up? This would be with no peripherals attached besides host usb. The getting started guide claims that it can run off of a computer's usb power, but makes no mention of how many amps are actually needed. I saw a mention of older kernels limiting current draw to .5 amps when working off of usb, although that was all I could find.
Could one start a BeagleBone Black off of .3 amps? If not, how many?
|
How would you motorize the joints in an Iron Man suit? You need something fairly shallow, I would think, probably dual servos sitting on either side of an elbow or knee joint or either side of your hips, but how do you get motorized action there without dramatically adding to the thickness of the joint?
Bicycle-style chain drives wouldn't work, I would think, since the length of the chain would need to vary depending on what position you're in for at least a lot of joints.
How would you motorize the joints?
|
I've been thinking about starting a quadcopter project, maybe building it from scratch. One of the main barriers-to-entry for me is the motors: it seems like most quadcopters use brushless motors. I have some experience with DC motors and using PWM signals to regulate speed, but no experience with brushless motors. As I understand it, brushless motors are more expensive than the typical DC motor I would use on a land robot, and they also require Electronic Speed Controllers (ESCs), which seem to make them (from my perspective) even more expensive and more complicated to use.
So, my question: what is it about brushless motors that make them useful in a quadcopter? Is it more torque, less weight, something to do with efficiency? And would it be significantly harder (or even possible) to achieve lift using DC motors instead?
|
I have some crude time based motors taken from a robot arm that we upgraded to proper servos. I want to be able to power a conveyor belt with one of them and I was wondering how I would go about the following setup:
A ball drops through a hole onto the conveyor belt, hitting a lever switch on its way through. This switch triggers the motor to start. When the ball gets to the top of the belt and falls off, it hits another lever switch that turns the motor off.
I could handle this logic by hooking it up to my raspberry pi and using python to start and stop the motor depending on which GPIO pin received input (top or bottom lever). Or I could use a single lever and set a constant time interval to stop the motor. I would prefer to use both to handle any change in scale/construction.
I was wondering however if this could be done with the breadboard alone, using logic gates or similar?
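For the Raspberry Pi route described above, here is a minimal sketch of that start/stop logic using the RPi.GPIO library; the pin numbers and the active levels of the switches are hypothetical, and the motor is assumed to be driven through a transistor or driver board rather than directly from the GPIO pin. The breadboard-only alternative is essentially the same thing built as a set/reset latch in hardware.

import RPi.GPIO as GPIO
import time

BOTTOM_SWITCH = 17   # hypothetical BCM pin: lever under the drop hole
TOP_SWITCH = 27      # hypothetical BCM pin: lever at the top of the belt
MOTOR_PIN = 22       # hypothetical BCM pin driving the motor via a transistor/driver

GPIO.setmode(GPIO.BCM)
GPIO.setup(BOTTOM_SWITCH, GPIO.IN, pull_up_down=GPIO.PUD_UP)   # switches pull to ground
GPIO.setup(TOP_SWITCH, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(MOTOR_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        if GPIO.input(BOTTOM_SWITCH) == GPIO.LOW:    # ball pressed the bottom lever
            GPIO.output(MOTOR_PIN, GPIO.HIGH)        # start the belt
        if GPIO.input(TOP_SWITCH) == GPIO.LOW:       # ball pressed the top lever
            GPIO.output(MOTOR_PIN, GPIO.LOW)         # stop the belt
        time.sleep(0.01)                             # simple polling loop
finally:
    GPIO.cleanup()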
|
Whenever I try using openni_launch it works normally; however, when I try viewing an image using the Kinect's RGB or depth camera, or even recording a simple bagfile with data from the Kinect, I am unable to see any picture and rosbag does not record any data, and after a few seconds of running image_view or rosbag record, I get this error:
terminate called after throwing an instance of 'openni_wrapper::OpenNIException'
what(): virtual void openni_wrapper::OpenNIDevice::startImageStream() @ /tmp/buildd/ros-groovy-openni-camera-1.8.8-0precise-20130418-2203/src/openni_device.cpp @ 224 : starting image stream failed. Reason: Xiron OS got an event timeout!
[camera_nodelet_manager-2] process has died [pid 3788, exit code -6, cmd /opt/ros/groovy/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/rosbotics/.ros/log/16b63744-e043-11e2-ac16-080027486aa8/camera_nodelet_manager-2.log].
log file: /home/rosbotics/.ros/log/16b63744-e043-11e2-ac16-080027486aa8/camera_nodelet_manager-2*.log
After searching around and trying various fixes, I figured it might be a problem with OpenNI and started using freenect; however, I encountered the same problems: I could not record any data using bagfiles or see any images from the Kinect (using rviz or image_view).
Then someone asked me to use something completely unrelated, freenect-glview, however that too gave me a black screen.
lsusb shows that all 3 parts of the kinect are connected and I've been able to control the kinect's motor through ubuntu so I know that there is at least a connection established between both.
Additional Info:
I run ROS on Ubuntu using VirtualBox V.4.2.14 and Windows 7 with USB 2 ports
I am using ubuntu 12.04 and ROS-Groovy (all up to date)
I've had the exact same errors on my Mac OSX Lion
When I try using Rviz with the kinect, VirtualBox crashes all together
I would appreciate anyone's help on the matter.
|
I'm designing a differential steering mechanism for my robot. Suppose my robot is going in a straight line and I want it to change its direction by a certain angle ($θ$ in the diagram). What should the velocity ratio of the two wheels be so that it gradually turns and starts moving along a line that is $θ$ degrees to the initial line of movement?
If there's any ambiguity in the question please take a look at my earlier question which is similar. How to design a differential steering mechanism?
|
I am a graduate student trying to make my own line follower robot for my minor assessment. I have all the hardware parts and all the datasheets with me, I've attended a robotics workshop and studied line follower robots a lot. I have a good knowledge of C programming and embedded systems, but the problem is I have a very limited amount of time (2 days).
Please help me by suggesting a good plan of work for my project, a line follower robot. Where should I start? I am getting confused: should I start with the programming, or should I first do circuit simulations, since I know it is not a good approach to go straight to the hardware?
Please suggest a solid write-up or some links/videos so that I can build my robotics project fast. Any help would be really appreciated, thanks.
|
I have been looking online for a while, and I cannot seem to find any steppers that list an operating torque rating (as opposed to holding torque). I even looked on hobby sites that usually have all the ratings, including Adafruit and Sparkfun. The only one I ever found that stated the operating torque didn't seem that reputable and didn't list holding torque, so it might well be a mistake. I might contact them and ask.
Am I missing something? Can I calculate what it will run at with certain factors? (How long in between each step, etc.)
The reason that I say that is I found a tutorial saying how much torque (didn't specify which kind, but I kinda assume it isn't holding) you need for a CNC machine (what I'm building).
Equation (From this site):
Torque = (weight × inches per revolution) / (2π)
Also on the page:
By the way, we are talking about torque during a continual turning motion, not at a holding position.
That seems like operating torque, but what makes it the most confusing is they sell steppers and they only list the holding.
What am I missing?
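Purely to show the arithmetic of the quoted formula (the load, leadscrew lead and safety factor below are made-up numbers, and the formula itself ignores friction, efficiency and acceleration):

import math

weight_lb = 90.0            # hypothetical load being pushed
lead_in_per_rev = 0.2       # hypothetical leadscrew travel per revolution, inches
safety_factor = 2.0         # hypothetical margin for friction, acceleration, etc.

torque_lb_in = weight_lb * lead_in_per_rev / (2.0 * math.pi)
torque_oz_in = torque_lb_in * 16.0
print(torque_oz_in, torque_oz_in * safety_factor)   # ~45.8 and ~91.7 oz-in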
|
I need my robotic arm to ring a desk bell. I found one on the Maplin site, a USB robotic arm.
It does seem very slow. What can I hack on it to boost the downward and upward speed? I need it to hit the bell tip/platform quickly once or twice.
This is purely a LOL project for work. Every time we get an order we want the arm to ring the bell. :)
-EDIT
This is the gearbox assembly - and it is much, much too slow. What can I change in here to speed up one gearbox by at least 4 times?
The grabber gearbox is different though. The gear marked P7 is white and seems to move the grabbers at a faster speed.
|
On my stepper's datasheet, it has the category "rotor torque" (labeled in N-CM). What does that mean? Is this the torque it has can supply when turning? (Hopefully)
|
I have a APM 3DR Quad with a 3DR radio telemetry kit. I would like to send real-time sonar data to my laptop (running Windows 7) in order to manipulate it in an additional Arduino Sketch.
The sonar sensor is connected to an Analog In channel on my Arduino. That data is processed for altitude calculations, and I would like to send this altitude data to some sort of ground station on my computer through the use of a telemetry kit (2 3DR Radios: 1 on the quadcopter and 1 on my computer).
I am not quite sure how to go about this task. Is there a way that I can modify the source code (GCS.h or GCS_Mavlink.pde) in conjunction with Mission Planner Mav 1.0 ground station to do this? Or would I need to write a python module to accomplish this?
|
How do you calculate or update the position of a differential drive robot with incremental sensors?
There is one incremental sensor attached to each of the two differential wheels. Both sensors determine the distance, $\Delta left$ and $\Delta right$ respectively, that their wheel has rolled during a known time $\Delta t$.
First, let's assume the center between both wheels marks the position of the robot. In this case, one could calculate the position as:
$$
x = \frac{x_{left}+x_{right}}{2} \\
y = \frac{y_{left}+y_{right}}{2}
$$
"Deriving" those equations under the assumption that both wheels rolled in a straight line (which should be approximately correct for small distances) I get:
$$
\frac{\Delta x}{\Delta t} = \frac{1}{2}\left( \frac{\Delta left}{\Delta t} + \frac{\Delta right}{\Delta t}\right)cos(\theta) \\
\frac{\Delta y}{\Delta t} = \frac{1}{2}\left( \frac{\Delta left}{\Delta t} + \frac{\Delta right}{\Delta t}\right)sin(\theta)
$$
Where $\theta$ is the angle of orientation of the robot. For the change of this angle I found the equation
$$
\frac{\Delta \theta}{\Delta t} = \frac{1}{w} \left( \frac{\Delta left}{\Delta t} - \frac{\Delta right}{\Delta t}\right)
$$
Where $w$ is the distance between both wheels.
Because $\Delta x$ and $\Delta y$ depend on $\theta$, I wonder whether I should first calculate the new $\theta$ by adding $\Delta \theta$ or if I should rather use the "old" $\theta$ ? Is there any reason to use one over the other?
Then, let's now assume the center between both wheels does not mark the position of the robot. Instead I want to use a point which marks the geometric center of the robot's bounding box. Then $x$ and $y$ change to:
$$
x = \frac{x_{left}+x_{right}}{2} + l\, cos(\theta)\\
y = \frac{y_{left}+y_{right}}{2} + l\, sin(\theta)
$$
"Deriving" the first gives:
$$
\frac{\Delta x}{\Delta t} = \frac{1}{2}\left( \frac{\Delta left}{\Delta t} + \frac{\Delta right}{\Delta t}\right)cos(\theta) - l\,sin(\theta)\,\frac{\Delta \theta}{\Delta t}
$$
Now there is a dependance on $\Delta \theta$. Is this a reason to use the "new" $\theta$ ?
Is there any better method to do a simultaneous update of position and orientation? Maybe using complex numbers (same approach as with quaternions in 3D?) or homogeneous coordinates?
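Not a definitive answer, just a sketch of the midpoint variant that is often used in practice: the heading change is computed first, and the translation is then applied using the average of the old and new heading, which sidesteps the "old vs. new $\theta$" question for small steps. The wheel distances and geometry in the demo call are hypothetical.

import math

def odometry_step(x, y, theta, d_left, d_right, w):
    """One incremental pose update for a differential drive.
    d_left, d_right: wheel travel since the last update; w: wheel separation.
    Uses the midpoint heading (average of old and new theta) for the translation.
    Sign convention here: CCW positive; swap the wheels if you prefer the
    convention used in the question."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / w
    theta_mid = theta + d_theta / 2.0
    x += d_center * math.cos(theta_mid)
    y += d_center * math.sin(theta_mid)
    theta += d_theta
    theta = math.atan2(math.sin(theta), math.cos(theta))   # keep theta wrapped
    return x, y, theta

print(odometry_step(0.0, 0.0, 0.0, d_left=0.10, d_right=0.12, w=0.30))

For larger steps, the exact circular-arc update about the instantaneous centre of rotation can be used instead of this small-angle approximation.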
|
What is the difference between a Robot and a Machine? At what point does a machine begin to be called a robot?
Is it at a certain level of complexity? Is it when it has software etc?.
For instance: a desktop printer has mechanics, electronics and firmware, but it is not considered a robot (or is it?). A Roomba has the same stuff but we call it a robot. So what is the difference?
I have always believed that a robot is a robot when it takes input from its environment and uses it to make decisions on how to affect its environment; i.e. a robot has a feedback loop.
|
I would like to know if there are any other solutions to implement slip compensation into a Half-Size Micromouse other than the conventional method. I have spoken to a few Japanese competitors, and they told me that the only solution they have to such a problem is creating a table of predetermined values and using these values to increase or decrease the before turn/after turn distances. The values used are determined by the Mouse's intelligence. Due to the fact that this method has too many limitations, I would like to hear more suggestions from people who are familiar with this matter.
|
I am working on a homemade vending machine project that serves milk and cookies, using arduino and some basic servos and stuff.
The problem is: I really have no clue how to keep the milk fresh for long, or how to even know if the milk is still OK to drink. All I really know is that air is bad for the milk (and the cookies), so here is what I came up with:
Two solenoids that activate at the same time, to allow air in and milk out. All of this should be inside a "slightly" colder place.
I'm sure this design might sound stupid to some of you, but this is where I need your help please: do you think this design can work? (Would that solenoid on top make any difference in protecting the milk?) How could it be improved to make the milk last as long as possible?
I've heard about the big guys making machines that keep milk fresh for weeks, even months, while I'm pretty sure my milk won't last a couple of hours.
Any idea or any information, link, or clue would be greatly appreciated. Thank you.
|
I have a box (cuboid) lying on a floor or table. So there are 6 surfaces of the box and 1 surface of the floor. If I take each pair of surfaces such that the surfaces are "adjacent" to each other, I get two kinds of pairings:
1) two surfaces of the box: the surface normals of the surfaces diverge from each other.
2) 1 surface of the box + the surface of the floor: the surface normals converge and intersect at an angle of 90 degrees (80 to 100 degrees, if we want to add some tolerance).
I want to distinguish these two cases with a function. What function can distinguish between these two situations?
In both cases, the normalized dot product of the surface normals is 0, since the angle between them is 90 degrees. So this is not the right solution...
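One common trick for exactly this situation (a sketch, with made-up vectors): instead of the dot product of the two normals, use the sign of the dot product between one surface's normal and the vector from a point on that surface to a point on the adjacent part of the other surface. For two box faces (a convex pairing, normals diverging) that point lies on the negative side of the first plane; for the box-face-plus-floor pairing (concave, normals converging) it lies on the positive side.

import numpy as np

def pairing_type(n1, p1, p2, tol=1e-6):
    """Classify an adjacent surface pair as 'convex' (e.g. two box faces)
    or 'concave' (e.g. box face + floor).
    n1: unit normal of surface 1; p1: point on surface 1;
    p2: point on surface 2, taken from the region adjacent to the shared edge."""
    s = np.dot(n1, np.asarray(p2) - np.asarray(p1))
    if s > tol:
        return "concave"
    if s < -tol:
        return "convex"
    return "coplanar"

# Hypothetical unit-cube example: top face vs. a side face (convex),
# then the same side face vs. the floor just outside the box (concave).
print(pairing_type(np.array([0.0, 0.0, 1.0]), [0.5, 0.5, 1.0], [1.0, 0.5, 0.5]))  # convex
print(pairing_type(np.array([1.0, 0.0, 0.0]), [1.0, 0.5, 0.5], [1.5, 0.5, 0.0]))  # concave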
|
I’m trying to inject some kind of rubber around an aluminum strut to form “feet” for a robot. I’ve already milled the mold, but I’m having trouble finding an inexpensive and readily available rubber compound that will cure without exposure to air. Ideally it should cure to about the consistency of a silicone O-ring. I’ve tried silicone gasket-maker (the automotive stuff), however a week later it hasn’t cured in the mold, as there is no exposure to the air. Is there anything out there with a similar consistency to silicone, but doesn’t require air to cure? Or is there a way to get what I’m currently using to set up without waiting a millennium? There aren’t any real mechanical requirements, I’m just trying to clean up the look of the robot and prevent its legs from scratching my table.
|
I'm trying to control the speed of this motor, with this motor driver and pic16f690.
pwm in my program
5 kHz frequency (0.2 ms period)
Starting with 50% duty cycle the motor doesn't run.
But moving from, say, 80% to 50% (i.e. programming my PIC with an 80% duty cycle, and then re-programming it with 50%), the motor will run at 50% (of course at a lower speed). I find this odd. Can anyone explain it?
My motor is powered by 5V.
|
I made a small crawler robot a little while ago that had two legs with two degrees of freedom each, so 4 RC servos total. While I was programming the movement of the legs I noticed that they moved rather stiffly. It makes sense that the RC servo's internal controller would have a very quick response to position commands, but I wanted my crawler to move in a way that seems a little more smooth and life-like.
My solution was create a cubic function of time that describes the path of the servos, and then set their position in small time increments, resulting in more smooth motion. Essentially what I did was solve for the $a_i$ coefficients in a cubic equation using the time interval, starting and ending position of the servo, and starting and ending rates the servo should move (which is just the derivative of the position):
Solve for $a_0$, $a_1$, $a_2$, and $a_3$:
$$ position(t) = a_0 + a_1t + a_2t^2 + a_3t^3 $$
$$ rate(t) = position'(t) = a_1 + 2a_2t + 3a_3t^2 $$
Given: $position(0)$, $position(t_f)$, $rate(0)$, $rate(t_f)$
I set the rate of the servo between a pair of movements to be zero if the movements were in opposite directions, and positive or negative if the movements were both in the positive or negative direction, respectively.
This worked pretty well, but this solution is limited in a few ways. For one, it's difficult to decide what exactly the rates between movements that go in the same direction should be. I used the average of the slopes ahead and behind of a particular position between movements, but it isn't clear to me that is optimal. Second of all, cubic curves could take the servo to a position outside of the range of the positions at the beginning and end of a movement, which may be undesirable. For example, at some point during the time interval, the curve could cause the servo to go beyond the second position, or below the first position. Thirdly, curve generation here does not consider the maximum rate that the servo can turn, so a curve may have the servo move at a speed that is unrealistic. With that, a minor concern is that the maximum turning rate depends on the response of servo's internal controller, and may change depending on the size of the position interval.
Neglecting that last concern, these issues may be solved by increasing the degree of the polynomial and adding constraints to solve for the coefficients, but I'm now starting to wonder...
Is there a better way than this to make servo movement smooth and seem more life-like?
|
I am working on a robotics project in C++ (drawing signs on a board) with a CRS CataLyst5 arm.
I have faced a problem:
I have many methods (move in different directions, goToLocalizations, etc.), but when I call several of them in main() without a Sleep() between them, they do not run properly. I think each one needs time (the time of the robot movement) to finish, because when I put Sleep(10000) between them (I guessed that 10 seconds is enough for a movement) everything is OK. This is a very ineffective and slow solution. Could you suggest some ways to avoid using Sleep()?
|
I am in the concept phase of a driving robot. The two wheels on the front axle will be powered, while the rear will be dragged along. The rear is also responsible for steering, but this has nothing to do with my question.
The robot is required to make relatively sharp turns at high speed, so I have two options to compensate for the different wheel speeds on both sides. On the one hand, a differential gear in the front axle could be used, powered by a single motor. On the other hand, I could simply use two motors, each directly powering one front wheel, and simulate the differential in software.
I'd like to go for the first approach, using the hardware differential. But I have the one concern with it. Would a robot vehicle with differential gear still move straight, without explicit steering applied?
My intuition is that, with the wheels not solidly connected, the robot would wander in random curves which I'd then have to compensate with a lot of steering. I know that for real cars, differential gears are standard and do their work, but here I am talking about a small robot measuring about 6 inches.
|
I've got an arm attached to a shaft. The arm's dimensions are 40x5 inches and it weighs about 10 lbs.
If I have a wind acting on the side of the arm, how would I translate the wind force into torque on the shaft?
To give some more information, I'm rotating the arm using a stepper motor, and I would like to know how to size the motor depending environmental conditions.
What should my formula look like in order to arrive a required oz-in of torque given my requirements being:
I need to be able to accelerate the arm from 0 to 12 rpm in 1.5 seconds
The wind speed can be as high as 30 mph
Using the formula P = 0.00256 x 30^2, I find the wind pressure to be 2.304 lbs per square foot.
Using the formula F = A x P x Cd to calculate the force, I get 1.389 x 2.304 x 2 = 6.4 lbs.
So I know that the wind force on my arm is 6.4 lbs. But now how do I translate this to torque on my arm?
Source: http://k7nv.com/notebook/topics/windload.html
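My current guess (assuming the shaft is at one end of the arm and the wind load is roughly uniform along it, which may not be right) is that the centre of pressure sits about 20 in from the shaft, giving
$$\tau_{wind} \approx 6.4\ \text{lb} \times 20\ \text{in} \approx 128\ \text{lb·in} \approx 2050\ \text{oz·in}$$
on top of which the motor would also have to supply the inertial torque $\tau = I\alpha$ (with $I = \tfrac{1}{3}mL^{2}$ for a uniform arm pivoting about its end) to reach 12 rpm in 1.5 seconds. Is that the right way to combine the two requirements?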
|
I'm going to be embarking on an autonomous robot project and I was going to be using GPS to navigate to waypoints (I'm aware of the margin of error when it comes to localization with GPD but I live in a lovely area with many open fields).
I was going to use Adafruit's Ultimate GPS Breakout board with my Raspberry Pi, and I was wondering how I should protect or mount the GPS to protect it from the elements. Do all GPS units need to be face up and unobstructed (e.g., not covered by wood or plastic) in order to work? If so, how can I still protect a GPS unit from the outdoors?
|
For the Dagu Wild Thumper 6 Wheeled platform, or any multiple motor system, do I really need 1 battery for each motor? Or should I just buy 2 for either side of the platform. In addition, for larger motors like the ones on this platform, how do I deal with the power generated from a coasting motor?
I want to jump into the deep end with robotics, as I already hold all the programming skills, and I realize a platform of this magnitude may be a difficult endeavor.
Recommended motor voltage is 2 – 7.5 Volts, so should one use two 22 Volt batteries for the left and right side, or six 7.5 volt batteries?
|
I'm in the process of writing my own simple quadcopter controller for experimental use, and I'm having trouble getting my head around how to convert from the degrees which my PID controller demands to an appropriate 1k-2k range for PWM output. For example, take the roll axis on a '+' configured 'copter (pseudo-code):
setpoint = scaleToRange(receiver.rollValue, -30, 30); //scale the command (1000-2000) to between -30 and 30 degrees, as that's the maximum roll permitted.
demandedRoll = rollPID.calculate(setpoint, imu.currentRoll, PID_params);
/// the part I'm having trouble with
motorLeft_command = receiver.throttle - rollPWM;
motorRight_command = receiver.throttle + rollPWM;
How do I take the roll demanded by my PID controller and convert it to a value useful to the motors, that is to say, where does rollPWM come from? My first instinct is to use a simple linear relationship, i.e.:
rollPWM = scaleToRange(demandedRoll, MinValue=receiver.throttle/2, MaxValue=2000-receiver.throttle);
//don't let it go beyond 50% of throttle on low end, and the ESC's max on the high end.
However this seems far too simplistic to work.
Or should I be doing more calculations before everything goes through PID control? Any help would be great.
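One approach I have been considering (just a sketch, reusing the names from the pseudo-code above) is to drop the explicit degrees-to-PWM mapping and tune the PID gains so the controller's output is already a correction in microseconds, which then only needs clamping:

rollPWM = rollPID.calculate(setpoint, imu.currentRoll, PID_params); // gains scaled so the output is already in "us"
motorLeft_command  = constrain(receiver.throttle - rollPWM, 1000, 2000); // constrain() as on Arduino
motorRight_command = constrain(receiver.throttle + rollPWM, 1000, 2000);

That way the scale factor lives in the PID gains and output limits rather than in a separate mapping step, but I don't know if that is the usual way to do it.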
|
I bought this MPU-6050: link
According to the manufacturer's site, the sensor logic level is 3.3V (though the eBay page says "Power supply: 3-5v").
Should I use a 4-channel bi-directional logic level converter (like this one) for the SDA, SCL and INT channels, or can I connect it directly to my Arduino Nano?
Some places say I should use a logic level converter and some say it's OK without one (I guess it depends on the sensor board, so please take a look at the link above).
Current Setup:
SDA <-> LLC <-> A4
SCL <-> LLC <-> A5
INT <-> LLC <-> D2
VCC <- LLC <- 5V (arduino)
GND <- LLC <- GND (arduino)
I don't have the parts yet so I can't test it, and I'm probably going to use Jeff Rowberg's library to communicate with the sensor over I2C.
|
I have a USB webcam and a WiFi module which it can convert Serial data to WiFi and vice versa.
The question is: can I simply convert the data coming from the webcam to serial with a USB-to-serial IC (like the FT232R) and then hand it over to my WiFi module?
Update:
The WiFi module DataSheet is here
|
Consider a differential drive robot that has two motorized wheels with an encoder attached to each for feedback. Supposed there is a function for each DC motor that takes a float from -1 to 1 and sets the PWM signals to provide a proportional amount of power to that motor. Unfortunately, not all motors are created equal, so sending each motor the same PWM signal makes the robot veer left or right. I'm trying to think about how to drive the robot straight using the encoders attached to each motor as input to a PID loop.
Here's how I would do it: I would take the difference between the left and right encoders, bound the error to some range, normalize it to [-1, 1], and then map it to motor powers from 0 to 1. So if I and D were zero and we got an error of 1 (meaning the left motor has turned much more than the right motor), the left motor would be set to 0 and the right motor to 1 (causing a hard left).
Are there any issues with this? What is a better approach?
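For comparison, here is a rough sketch of an alternative I was considering (names are made up; pid() and setMotorPower() are placeholders for whatever implementation is used): keep a shared base power and let the PID on the encoder difference produce a small correction around it, rather than mapping the error onto one motor's full range:

#include <algorithm>

double pid(long error);                   // placeholder: one PID step on the encoder difference
void setMotorPower(double l, double r);   // placeholder: the [-1, 1] motor interface described above

void driveStraight(double basePower, long leftTicks, long rightTicks) {
    double correction = pid(leftTicks - rightTicks);              // small signed steering term
    double left  = std::clamp(basePower - correction, -1.0, 1.0);
    double right = std::clamp(basePower + correction, -1.0, 1.0);
    setMotorPower(left, right);
}

Would that be preferable to the hard-left behaviour in my description?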
|
I am working on a robot with focus on speed. At the moment I am looking for a suitable motor but it would help if I understood the difference between the various options.
To provide some background, I have not worked with RC model components before, but I think this is the only place for me to find the components needed for my robot, such as the motor.
I have already figured out how much power the motor needs to accelerate my robot as desired, taking energy conversion efficiency and tractional resistance into account. It's about 170 watts, depending on the final weight.
To limit my search further, I need to decide on either using a RC car motor or a RC helicopter motor now, but I don't understand the difference between these options.
Focussing on brushless motors (if that matters), what are the differences between RC car and RC helicopter motors? What needs to be taken into account when choosing between them?
|
I know that the Complementary Filter has the functions of both an LPF and an HPF, but I think my understanding of the principle behind it is still unclear.
I am quite new on digital signal processing, and maybe some very fundamental explanations will help a lot.
Say I have a Complementary Filter as follows:
$$y =a\cdot y+(1-a)\cdot x$$
Then my parameter $a$ may be calculated by $$a=\frac{\text{time constant}}{\text{time constant}+\text{sample period}}$$
where the $\text{sample period}$ is simply the reciprocal of the $\text{sampling frequency}$.
The $\text{time constant}$ seems to be at my own choice.
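To make that concrete with assumed numbers: with a time constant of 1 s and a 100 Hz update rate (sample period 0.01 s),
$$a = \frac{1}{1 + 0.01} \approx 0.99$$
so each output is roughly 99% of the previous filtered value and 1% of the new sample.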
My Questions:
What is the theory behind this calculation?
How do we choose the $\text{time constant}$ properly?
Note: I also posted this question on Stack Overflow, as the answers there are likely to be slightly different in emphasis.
|
I need a way to dispense microliter amounts of water (let's say 1-10 µl). The only thing I've found is piezoelectric dispensers, and they are >$100. Any suggestions?
I can build, but preferably would be an off-the-shelf component.
|
I'm working on a robotics project, and I am using grayscale sensors to automatically follow a black line: turning 90 degrees, going round in a circle, passing through gaps in the line, etc. I was wondering what an effective way would be to detect the line (black vs. white readings) and steer the robot along it with five or six grayscale sensors.
Thank you very much.
|
I'm doing some groundwork for a project, and I have a question about the current state of SLAM techniques.
When a SLAM-equipped device detects an object, that object's position is stored. If you look at the point cloud the device is generating, you'll see points for this object, and models generated from it will include geometry here.
If an object is placed in a previously-empty space, it is detected, and points are added. Subsequent models will feature geometry describing this new object.
How does the device react if that object is removed? As far as I've seen, SLAM systems will tend to leave the points in place, resulting in "ghost" geometry. There are algorithms that will disregard lone points caused by transient contacts, but objects that remained long enough to build up a solid model will remain in the device's memory. Are there any systems that are capable of detecting that previously-occupied space is now empty?
|
I'm trying to put together a simple simulation for a delta robot and I'd like to use forward kinematics (direct kinematics) to compute the end effector's position in space by passing 3 angles.
I've started with the Trossen Robotics Forum Delta Robot Tutorial and I can understand most of the math, but not all. I'm lost at the last part of the forward kinematics, when trying to compute the point where the 3 spheres intersect. I've looked at spherical coordinates in general, but couldn't work out the two angles that would rotate towards the intersection point E(x,y,z).
I see they're solving the equation of a sphere, but that's where I get lost.
Can someone please 'dumb it down' for me ?
Also, I've used the example code to do a quick visualization using Processing,
but the last part seems wrong. The lower leg changes length and it shouldn't:
//Rhino measurements in cm
final float e = 21;//end effector side
final float f = 60.33;//base side
final float rf = 67.5;//upper leg length - radius of upper sphere
final float re = 95;//lower leg length - redius of lower sphere (with offset will join in E(x,y,z))
final float sqrt3 = sqrt(3.0);
final float sin120 = sqrt3/2.0;
final float cos120 = -0.5;
final float tan60 = sqrt3;
final float sin30 = 0.5;
final float tan30 = 1/sqrt3;
final float a120 = TWO_PI/3;
final float a60 = TWO_PI/6;
//bounds
final float minX = -200;
final float maxX = 200;
final float minY = -200;
final float maxY = 200;
final float minZ = -200;
final float maxZ = -10;
final float maxT = 54;
final float minT = -21;
float xp = 0;
float yp = 0;
float zp =-45;
float t1 = 0;//theta
float t2 = 0;
float t3 = 0;
float prevX;
float prevY;
float prevZ;
float prevT1;
float prevT2;
float prevT3;
boolean validPosition;
//cheap arcball
PVector offset,cameraRotation = new PVector(),cameraTargetRotation = new PVector();
void setup() {
size(900,600,P3D);
}
void draw() {
background(192);
pushMatrix();
translate(width * .5,height * .5,300);
//rotateY(map(mouseX,0,width,-PI,PI));
if (mousePressed && (mouseX > 300)){
cameraTargetRotation.x += -float(mouseY-pmouseY);
cameraTargetRotation.y += float(mouseX-pmouseX);
}
rotateX(radians(cameraRotation.x -= (cameraRotation.x - cameraTargetRotation.x) * .35));
rotateY(radians(cameraRotation.y -= (cameraRotation.y - cameraTargetRotation.y) * .35));
stroke(0);
et(f,color(255));
drawPoint(new PVector(),2,color(255,0,255));
float[] t = new float[]{t1,t2,t3};
for(int i = 0 ; i < 3; i++){
float a = HALF_PI+(radians(120)*i);
float r1 = f / 1.25 * tan(radians(30));
float r2 = e / 1.25 * tan(radians(30));
PVector F = new PVector(cos(a) * r1,sin(a) * r1,0);
PVector E = new PVector(cos(a) * r2,sin(a) * r2,0);
E.add(xp,yp,zp);
//J = F * rxMat
PMatrix3D m = new PMatrix3D();
m.translate(F.x,F.y,F.z);
m.rotateZ(a);
m.rotateY(radians(t[i]));
m.translate(rf,0,0);
PVector J = new PVector();
m.mult(new PVector(),J);
line(F.x,F.y,F.z,J.x,J.y,J.z);
line(E.x,E.y,E.z,J.x,J.y,J.z);
drawPoint(F,2,color(255,0,0));
drawPoint(J,2,color(255,255,0));
drawPoint(E,2,color(0,255,0));
//println(dist(F.x,F.y,F.z,J.x,J.y,J.z)+"\t"+rf);
println(dist(E.x,E.y,E.z,J.x,J.y,J.z)+"\t"+re);//length should not change
}
pushMatrix();
translate(xp,yp,zp);
drawPoint(new PVector(),2,color(0,255,255));
et(e,color(255));
popMatrix();
popMatrix();
}
void drawPoint(PVector p,float s,color c){
pushMatrix();
translate(p.x,p.y,p.z);
fill(c);
box(s);
popMatrix();
}
void et(float r,color c){//draw equilateral triangle, r is radius ( median), c is colour
pushMatrix();
rotateZ(-HALF_PI);
fill(c);
beginShape();
for(int i = 0 ; i < 3; i++)
vertex(cos(a120*i) * r,sin(a120*i) * r,0);
endShape(CLOSE);
popMatrix();
}
void keyPressed(){
float amt = 3;
if(key == 'q') t1 -= amt;
if(key == 'Q') t1 += amt;
if(key == 'w') t2 -= amt;
if(key == 'W') t2 += amt;
if(key == 'e') t3 -= amt;
if(key == 'E') t3 += amt;
t1 = constrain(t1,minT,maxT);
t2 = constrain(t2,minT,maxT);
t3 = constrain(t3,minT,maxT);
dk();
}
void ik() {
if (xp < minX) { xp = minX; }
if (xp > maxX) { xp = maxX; }
if (yp < minX) { yp = minX; }
if (yp > maxX) { yp = maxX; }
if (zp < minZ) { zp = minZ; }
if (zp > maxZ) { zp = maxZ; }
validPosition = true;
//set the first angle
float theta1 = rotateYZ(xp, yp, zp);
if (theta1 != 999) {
float theta2 = rotateYZ(xp*cos120 + yp*sin120, yp*cos120-xp*sin120, zp); // rotate coords to +120 deg
if (theta2 != 999) {
float theta3 = rotateYZ(xp*cos120 - yp*sin120, yp*cos120+xp*sin120, zp); // rotate coords to -120 deg
if (theta3 != 999) {
//we succeeded - point exists
if (theta1 <= maxT && theta2 <= maxT && theta3 <= maxT && theta1 >= minT && theta2 >= minT && theta3 >= minT ) { //bounds check
t1 = theta1;
t2 = theta2;
t3 = theta3;
} else {
validPosition = false;
}
} else {
validPosition = false;
}
} else {
validPosition = false;
}
} else {
validPosition = false;
}
//uh oh, we failed, revert to our last known good positions
if ( !validPosition ) {
xp = prevX;
yp = prevY;
zp = prevZ;
}
}
void dk() {
  validPosition = true;
  // Fold the end-effector offset into the geometry: each elbow joint is shifted inwards by the
  // effector joint radius, so the three shifted joints become centres of spheres of radius re
  // whose common (lower) intersection point is the effector centre E(xp, yp, zp).
  float t = (f-e)*tan30/2;
  float dtr = PI/(float)180.0;

  float theta1 = dtr*t1;
  float theta2 = dtr*t2;
  float theta3 = dtr*t3;

  // sphere centre for arm 1 (lies in the y-z plane, x1 = 0)
  float y1 = -(t + rf*cos(theta1));
  float z1 = -rf*sin(theta1);

  // sphere centre for arm 2 (frame rotated by +120 degrees)
  float y2 = (t + rf*cos(theta2))*sin30;
  float x2 = y2*tan60;
  float z2 = -rf*sin(theta2);

  // sphere centre for arm 3 (frame rotated by -120 degrees)
  float y3 = (t + rf*cos(theta3))*sin30;
  float x3 = -y3*tan60;
  float z3 = -rf*sin(theta3);

  float dnm = (y2-y1)*x3-(y3-y1)*x2;

  float w1 = y1*y1 + z1*z1;
  float w2 = x2*x2 + y2*y2 + z2*z2;
  float w3 = x3*x3 + y3*y3 + z3*z3;

  // Subtracting the sphere equations pairwise cancels the quadratic terms and leaves
  // two linear relations expressing x and y as functions of z:
  // x = (a1*z + b1)/dnm
  float a1 = (z2-z1)*(y3-y1)-(z3-z1)*(y2-y1);
  float b1 = -((w2-w1)*(y3-y1)-(w3-w1)*(y2-y1))/2.0;

  // y = (a2*z + b2)/dnm;
  float a2 = -(z2-z1)*x3+(z3-z1)*x2;
  float b2 = ((w2-w1)*x3 - (w3-w1)*x2)/2.0;

  // Substituting those back into one sphere equation gives a quadratic in z:
  // a*z^2 + b*z + c = 0
  float a = a1*a1 + a2*a2 + dnm*dnm;
  float b = 2*(a1*b1 + a2*(b2-y1*dnm) - z1*dnm*dnm);
  float c = (b2-y1*dnm)*(b2-y1*dnm) + b1*b1 + dnm*dnm*(z1*z1 - re*re);

  // discriminant: negative means the three spheres do not meet, i.e. no valid pose
  float d = b*b - (float)4.0*a*c;
  if (d < 0) { validPosition = false; }

  // take the smaller (more negative) root, i.e. the intersection point below the base
  zp = -(float)0.5*(b+sqrt(d))/a;
  xp = (a1*zp + b1)/dnm;
  yp = (a2*zp + b2)/dnm;

  if (xp >= minX && xp <= maxX && yp >= minX && yp <= maxX && zp >= minZ && zp <= maxZ) { //bounds check
  } else {
    validPosition = false;
  }

  if ( !validPosition ) {
    xp = prevX;
    yp = prevY;
    zp = prevZ;
    t1 = prevT1;
    t2 = prevT2;
    t3 = prevT3;
  }
}
void storePrev() {
prevX = xp;
prevY = yp;
prevZ = zp;
prevT1 = t1;
prevT2 = t2;
prevT3 = t3;
}
float rotateYZ(float x0, float y0, float z0) {
float y1 = -0.5 * 0.57735 * f; // f/2 * tg 30
y0 -= 0.5 * 0.57735 * e; // shift center to edge
// z = a + b*y
float a = (x0*x0 + y0*y0 + z0*z0 +rf*rf - re*re - y1*y1)/(2*z0);
float b = (y1-y0)/z0;
// discriminant
float d = -(a+b*y1)*(a+b*y1)+rf*(b*b*rf+rf);
if (d < 0) return 999; // non-existing point
float yj = (y1 - a*b - sqrt(d))/(b*b + 1); // choosing outer point
float zj = a + b*yj;
return 180.0*atan(-zj/(y1 - yj))/PI + ((yj>y1)?180.0:0.0);
}
|
I want to use my ATmega8 µC as an H-bridge.
Can anybody give me source code in C so that the microcontroller acts as an H-bridge?
|
I am using a SIM900A and want to know the number of the sender a message comes from. I am unable to find the specific AT command, related to receiving messages, that gives me the number the latest message came from.
I have used AT+CNMI (it corresponds to notifications about the latest received message), but I am unable to find the sender's number.
I have seen that AT+CMGL=<stat>[,<mode>] will give you a string containing <oa>, i.e. the originating address, and once that is stored in a string I can easily parse it out, but I don't know the data format of that string. I need help or a suggestion; any other possible solution would also be welcome.
|
I am planning to build a robot.
1) What free or low-cost robot modelling tools exist?
|
I have been studying how to build a tricopter, but I couldn't find the design calculations or the mathematical modeling of a tricopter anywhere on the internet.
What are the mathematical relationships or equations of motion and forces in tricopter? How do I calculate the requirements of the structural design and the energy requirements of the motors?
|
This might be an out-of-my-league question and may seem odd. I am using multiple Arduino UNO boards over a network and want to assign a GUID and serial number to each board, so that whenever they send any data to a central server, the server can tell which device it is.
The first way to do this is to hard-code the GUID and serial number of the device in front of each message sent to the central server while programming, and then burn that hex to the Arduino.
Is there instead a way to burn a program that always outputs a string (GUID + serial number of the device), e.g. by programming the EEPROM for this, and then burn our main code to the Arduino, which picks the GUID + serial ID combo from EEPROM and writes it before every message the Arduino pushes to the central server?
To ask it another way: can we program the EEPROM with one program and the Arduino's main code separately, like two files running in parallel, or is that not possible?
Is there any other way of doing this?
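To make the EEPROM idea concrete, this is the kind of two-step flow I have in mind (an untested sketch, names invented by me): first upload a one-off "provisioning" sketch that writes the ID into EEPROM, then upload the normal firmware, which only reads it back (EEPROM contents normally survive a regular bootloader upload of new program code):

#include <EEPROM.h>

// --- one-off provisioning sketch, uploaded once per board ---
const char DEVICE_ID[] = "GUID-0001/SN-42";              // example ID, different for each board

void setup() {
  for (unsigned int i = 0; i < sizeof(DEVICE_ID); i++)   // includes the trailing '\0'
    EEPROM.write(i, DEVICE_ID[i]);
}
void loop() {}

// --- later, in the main firmware, read it back at startup ---
// char deviceId[32];
// for (unsigned int i = 0; i < sizeof(deviceId); i++) {
//   deviceId[i] = EEPROM.read(i);
//   if (deviceId[i] == '\0') break;
// }
// deviceId[sizeof(deviceId) - 1] = '\0';
// // prepend deviceId to every message pushed to the central server

Would this count as "two programs", or is there a cleaner way to do it?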
|
I have several APM 2.5 boards and need to identify them based on some globally unique hardware signature that does not change with programming.
Arduinos and atmel AVR chips in general do not have (also this thread) an accessible serial number.
However, it seems that the Ardupilot has so many integrated sensors and other ICs that one of them must have something unique I can use ( see schematic )!
I will be checking datasheets for MPU-6000, HMC5883L-TR and MS5611, but in the meantime, if someone has already figured this one out, please answer.
|
I'm looking for a laser / photosensor pair (or product of similar function) for detecting when a beam is interrupted (no more than 3ft apart, probably more like 1ft).
I'd like these to run off of 5V, since I'm using an Arduino. My main requirement, however, is that these parts have nice housings, ideally with some mounting screw holes or something along those lines. This is going into a project where sturdiness and durability are important.
I don't know how to search for parts like what I am looking for. Could you please point me either to some good product sources, give me some better keywords for searching, or link me directly to potentially useful products?
|
We hope to build a simple line follower robot and we got a problem when we were discussing about PIC programming.
We planned to write an endless loop that checks the sensor panel reading and does the relevant thing for that reading.
But one of our friends told us to use a timer interrupt to generate interrupts at a fixed period, and in each interrupt check the sensor panel reading and do the relevant thing for that reading.
But we can't figure out which is better: the endless loop in the main method or the timer interrupt method.
What is the best way, and why?
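To make sure we understand the two options, this is roughly how we picture them (a schematic sketch only; compiler-specific ISR syntax and timer register setup are omitted, and readSensors()/steer() are placeholders):

// (a) endless loop: poll as fast as the loop spins, so the loop time varies with the work done
while (1) {
    unsigned char reading = readSensors();
    steer(reading);
}

// (b) timer interrupt: the ISR fires at a fixed period and sets a flag,
//     so the control code runs at a known, constant rate
volatile unsigned char tick = 0;
// in the timer ISR:  tick = 1;
while (1) {
    if (tick) {
        tick = 0;
        unsigned char reading = readSensors();
        steer(reading);
    }
}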
|
I used to think that the higher the GPS antenna position, the better, until I read the following in the GPSd FAQ:
One common error is to place the GPS or antenna as high as possible. This will increase multipath effects due to signal bounce from the ground or water, which can cause the GPS to mistake its position and the time signal. The correct location for a boat GPS antenna is on the gunwale rail or pushpit rail, close to the water and as far from the mast as possible (to reduce signal bounce from the mast). If you're outside or in a fixed location, put the GPS antenna as far from buildings as possible, and on the ground.
If you're in a car, don't put the GPS antenna on the roof, put it on the towbar or some similar location. If you're driving in a heavily built up area, you're going to get signal bounce off buildings and reduced accuracy. That's just how the physics works. Note, however, that as your velocity goes up it becomes easier for the convergence filters in your GPS to spot and discard delayed signal, so multipath effects are proportionally less important in fast-moving vehicles.
Does anyone have experience placing a GPS antenna on the towbar of a car, as suggested? Does it give a reasonable improvement?
My concern is that placing the antenna there will not reduce the error that much, but will expose the device (antenna) to possible mechanical damage.
So, are there any better positions apart from the roof and the towbar?
Thanks
|
Background:
I am implementing a simple Kalman Filter that estimates the heading direction of a robot. The robot is equipped with a compass and a gyroscope.
My Understanding:
I am thinking about representing my state as a 2D vector $(x, \dot{x})$, where $x$ is the current heading direction and $\dot{x}$ is the rotation rate reported by the gyroscope.
Questions:
If my understanding is correct, there will be no control term $u$ in my filter. Is that true? What if I take the state as a 1D vector $(x)$? Does my $\dot{x}$ then become the control term $u$? Will these two methods yield different results?
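To make the question concrete, this is how I currently picture the two alternatives (my own notation; $T$ is the sample period and $w_k$ the process noise):
$$\text{2D state, no control:}\quad \begin{bmatrix} x_k \\ \dot{x}_k \end{bmatrix} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x_{k-1} \\ \dot{x}_{k-1} \end{bmatrix} + w_k$$
with both the compass and the gyroscope entering as measurements, versus
$$\text{1D state, gyro as control:}\quad x_k = x_{k-1} + T\,u_k + w_k$$
where $u_k$ is the gyroscope reading.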
As we know, the main noise source comes from the compass when the compass is in a distorted magnetic field. Here, I suppose the Gaussian noise is less significant. But the magnetic distortion is totally unpredictable. How do we model it in the Kalman Filter?
In the Kalman Filter, is the assumption that "all the noises are white" necessary? Say my noise distribution is actually a Laplacian distribution: can I still use a Kalman Filter, or do I have to switch to another filter, like the Extended Kalman Filter?
|
I am implementing a simple Kalman Filter that estimates the heading direction of a robot. The robot is equipped with a compass and a gyroscope.
Say at time $t-dt$, the compass reports a reading $\theta_{t-dt}$, and the gyroscope reports a reading $\omega_{t-dt}$. Then I assume from time $t-dt$ to $t$, the rotation rate can be regarded as a constant. Thus, my current heading direction is $$\theta_{t}=\theta_{t-dt}+\omega_{t-dt}\cdot dt$$
As can be seen, the $\theta$ can be easily time-updated.
But what about my $\omega$? The robot is not under my control, so its rotation rate at the next moment is unpredictable.
How should I do the time update in this case?
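One option I have come across (I am not sure it is the standard choice) is a random-walk model that leaves $\omega$ unchanged in the prediction and lets the process noise carry the uncertainty:
$$\omega_{t} = \omega_{t-dt} + w_t, \qquad w_t \sim \mathcal{N}(0, \sigma_\omega^2\,dt)$$
so the prediction keeps $\omega$ as it is but inflates its covariance, and the next gyroscope measurement pulls it back. Would that be appropriate here?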
|
In this paper, the author says that during the SLAM process, pseudo segments that appear from any momentary pause of dynamic objects in the laser data would make the map unsatisfactory.
How is this caused?
If the dynamic object moves away, won't the laser data update and eliminate the segment left by the dynamic object?
|