I recently purchased an EY-80 from Electrodragon: EY-80 All in one 9-Axis Motion Sensor (Gyro + Acceler + Magneto + Baro). I am having a hard time compiling the example code on my Arduino: this is what is happening. So far, I am only copying and pasting the code. Any help? (I am somewhat new to programming, so I don't fully understand all of the code.)
I am about to make an RC car that uses a Wi-Fi connection. The body of the car will be made from aluminium, and the Wi-Fi receiver will be placed inside this aluminium casing. How do I make sure that this will work? Would I be forced to change my material, or can I just make an extension for the receiver so that it sits outside the casing? If so, would that really help me?
I am using an Arduino Uno to control an ESC for my (in progress) quadrocopter. I am currently using the Servo library to control the ESC, which works great, except... a count of 100 is max speed, meaning I only have 10 speeds between 90 (stopped) and 100 (motor at full power). To correctly run my quadrocopter, I would like to have many more speed options. Any ideas? I'm having a hard time using a PWM signal; I might not be doing it right, though. My current code is here:

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
                // a maximum of eight servo objects can be created
int pos = 0;    // variable to store the servo position

void setup()
{
  myservo.attach(8);  // attaches the servo on pin 8 to the servo object
}

void loop()
{
  int maxspeed = 100;
  int minspeed = 0;
  int delaytime = 5;
  int count;

  for(count = 0; count < 1; count += 1)
  {
    for(pos = minspeed; pos < maxspeed; pos += 1)   // ramp up from minspeed to maxspeed
    {                                               // in steps of 1
      myservo.write(pos);                           // tell the ESC to go to the value in 'pos'
      delay(delaytime);                             // wait 5 ms between steps
    }
    for(pos = maxspeed; pos >= minspeed; pos -= 1)  // ramp back down from maxspeed to minspeed
    {
      myservo.write(pos);
      delay(delaytime);
    }
    if(count > 1){
      break;
    }
  }

  myservo.write(92);
  delay(100000);
  myservo.write(90);
  delay(10000000);
}
I'm trying to add my own robot in Morse 1.1 (using Ubuntu 12.04). I am struggling to add an armature actuator and armature pose sensor to an existing robot. Can someone please explain how this can be done (preferably with some sample code and using the socket interface). Thanks.
I searched for GPS devices that provide 1-second updates to a server, but I have not found any. I found this: T = 30 s. The module has sent a monitoring data packet. After 12 seconds the server sends an acknowledgement. 18 seconds later (T = 30 s) the module sends the next monitoring data packet. Are there any products that take less time than this? Why do GPS devices take this much time to send data?
As I understand it, a Kalman filter uses a mathematical model of the robot to predict the robot's state at t+1. It then combines that prediction with information from sensors to get a better estimate of the state. If the robot is an aeroplane, how accurate/realistic does the model need to be? Can I get away with simple position and velocity, or do I need an accurate flight model with computational fluid dynamics?
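For concreteness, here is the kind of simple model I have in mind, written as a minimal sketch (my assumption, not a definitive answer): a constant-velocity Kalman filter where everything the model does not capture (wind, aerodynamics) is lumped into the process noise. The time step, process noise q and measurement noise r are made up for illustration.

```python
import numpy as np

dt, q, r = 0.1, 0.5, 2.0          # time step, process noise, measurement noise (illustrative)
F = np.array([[1, dt], [0, 1]])   # constant-velocity model: position, velocity
H = np.array([[1, 0]])            # we only measure position (e.g. GPS along one axis)
Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])  # unmodelled dynamics treated as noise
R = np.array([[r]])

x = np.zeros((2, 1))              # state estimate
P = np.eye(2) * 10.0              # state covariance

def kf_step(x, P, z):
    # predict with the simple kinematic model
    x = F @ x
    P = F @ P @ F.T + Q
    # correct with the measurement
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, np.array([[3.2]]))  # one update with a fake position measurement
```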
I'm trying to figure out a way to calculate the probability that a particle will survive the re-sampling step in the particle filter algorithm. For the simple case of multinomial re-sampling, I can assume that we can model the process like a binomial distribution if we only care about one sample/particle. So if the particle has a weight of w, that is also the probability that it will get selected in one step of the re-sampling. So we use $1 - P(k; n, p)$ where P is the binomial distribution, k is 0 (we did not select the particle in any of our tries), p is equal to w and n is equal to M, the number of particles, i.e. the survival probability is $1 - (1-w)^M$. What is the case though in systematic re-sampling, where the probability of a particle being selected is proportional but not equal to its weight?
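To make the comparison concrete, here is a small sketch (assuming the weights are already normalised and the particle of interest sits at index 0) that implements systematic resampling and estimates the survival probability of one particle by Monte Carlo, so it can be compared against the $1-(1-w)^M$ value from the multinomial case.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Return indices of the particles selected by systematic resampling."""
    M = len(weights)
    positions = (rng.random() + np.arange(M)) / M   # one random offset, evenly spaced pointers
    cumsum = np.cumsum(weights)
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(0)
M = 100
w = np.full(M, (1 - 0.005) / (M - 1))
w[0] = 0.005                                        # particle we care about

trials = 20000
survived = 0
for _ in range(trials):
    idx = systematic_resample(w, rng)
    survived += 0 in idx                            # did particle 0 get at least one copy?

print("systematic survival:", survived / trials)
print("multinomial formula:", 1 - (1 - w[0]) ** M)
```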
Basically, I want to detect an ultrasonic beacon in a radius around the robot. The beacon would have a separate ultrasonic emitter while the robot would have the spinning receiver. Are there any existing ultrasonic sensors that would meet this use case or am I stuck hacking one together myself? Is ultrasonic even the best choice? I was hoping that the beacon would be kept in a pocket, so I figured optical sensors were out. Edit: The beacon and robot will both be mobile so fixed base stations are not an option.
Having read the article "This Car Has Electric Brains" in Popular Mechanics, August 1958 I have some questions. How practical were his methods? Was his work acquired by a car manufacturer or some other company? Were his methods developed further? How did his corner navigation work? I don't think he needed to know the distances of road segments; I think he could have used sonar or radar to detect a corner but if cars were entering the corner before him, he could misinterpret those cars as a wall and the absence of a corner. Additionally I think he'd need two sonar/radar systems on both sides of the cars which aren't mentioned; all that's mentioned is a set of relays. What is the compensator that is mentioned (it's said to function as a gyroscope)? I cannot find any information on this device (that I'm sure is relevant).
POMDPs are used when we cannot observe all the states. However, I cannot figure out when POMDPs are actually useful in robotics. What is a good example of the use of POMDPs? (I have read one paper where they were used, but I didn't find it obvious why POMDPs should be used.) What would be good project ideas based on POMDPs?
I have a WL v262 quadcopter and I want to control it using an Arduino instead of the joysticks on the transmitter. I opened up the transmitter and saw that each joystick has 2 potentiometers on the PCB and that the voltage for each pot goes from 0-3.3V. I used the Arduino's PWM and a low-pass filter, and connected the filtered output to the potentiometer's analog pin, which is connected to the PCB (I cannot desolder and take the pots out of the PCB), but even with this $V_{out}$ going onto the analog pin, my transmitter's display gave ???? Now I am really confused and frustrated because I don't know how else to control this transmitter other than attaching stepper motors to the joysticks and manually controlling the transmitter, but this is really my last resort. Can someone help me with this? I have spent hours and hours on trial and error but I am getting nowhere. Here is the PCB of the transmitter:
A Google search on "bloodstream nanobots" yields thousands of results and just on the first page, many results of blog posts that date back to 2009. It is nearly 4 years later. I've had no luck in finding any information on actual APPROVAL of these bots. Are there any countries at all who have approved this? People seem to have talked about it like crazy 4 years ago, yet, we're still not seeing anything.
I am interested in getting an ArduCopter with an ArduPilot (APM). I read through the documentation, and from what I understand, the ArduPilot is the low-level hardware and firmware that directly controls the motors of the ArduCopter. I would like to know if there is a higher-level programmatic interface to the ArduPilot. The Mission Planner provides a user interface to control the ArduPilot, but is there a programmatic interface to control it? In other words, would it be possible for a user-written Linux process to receive data from and send data to the ArduPilot?
For example, if I have this robotic arm: for the base rotation (5th DOF in the clip at 0:58), we know that the Z axis for that joint will be the same as the Z axis of the base frame {0}, but I don't know about the X and Y axes of the base rotation with respect to the base frame: should they be the same or not? And one more thing: defining the frames for the base rotation (at 0:58 in the clip), the vertical arm pitch (at 0:47 in the clip) and the horizontal arm pitch (at 0:46 in the clip) is pretty easy, but I don't know how to continue with defining the frames of the wrist roll (at 0:12 in the clip) and the wrist pitch (at 0:23 in the clip), since the angle between the Z axis of the wrist roll and that of the wrist pitch is now 90°. Thank you very much.
For a university course I have been asked to design a rough "specification" for a system that will deburr a plastic box that appears in a workspace. Due to irregularities in the box's edges I cannot use simple position control and must use force control. So far I have decided on: using an IR sensor to detect that the box has appeared in the workspace; using an Epson 2-axis robot to move around the workpiece; and using an ATI 6-axis force sensor to maintain a constant force against the edge of the box as the deburrer/robot moves around it. Is there a simple means of detecting the end of each side of the box? A 0 N force value would indicate reaching the edge of the box, but it could also mean a breakage in the box, which was also specified. How can I distinguish between the two? Also, does my work so far sound sensible? Thanks for any help.
Suppose I have a particle filter which contains an attitude state (we'll use a unit quaternion from the body to the earth frame for this discussion) $\mathbf{q}_b^e$. What methods should or should not be used for resampling? Many resampling schemes (e.g. this paper) seem to require the variance to be calculated at some stage, which is not trivial on $SO(3)$. Or, the variance is required when performing roughening. Are there any good papers on resampling attitude states? Especially those that re-sample complete poses (e.g. position and attitude)?
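To show the kind of thing I am unsure about, here is a rough sketch (my own attempt, not taken from any paper) of computing a dispersion measure for a set of attitude particles and roughening them: the mean quaternion is taken as the principal eigenvector of the weighted outer-product matrix, dispersion is measured from the rotation angles of the error quaternions, and roughening perturbs each particle by a small random rotation vector. All tuning numbers would be illustrative.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_rotvec(v):
    """Small rotation vector -> unit quaternion."""
    angle = np.linalg.norm(v)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = v / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_mean(quats, weights):
    """Weighted mean attitude: principal eigenvector of sum w_i q_i q_i^T."""
    A = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
    eigval, eigvec = np.linalg.eigh(A)
    return eigvec[:, -1]                       # eigenvector of the largest eigenvalue

def angular_spread(quats, weights, q_mean):
    """RMS rotation angle of the error quaternions about the mean."""
    q_mean_conj = q_mean * np.array([1, -1, -1, -1])
    angles = []
    for q in quats:
        dq = quat_mult(q_mean_conj, q)
        angles.append(2 * np.arccos(np.clip(abs(dq[0]), -1.0, 1.0)))
    return np.sqrt(np.average(np.square(angles), weights=weights))

def roughen(quats, sigma, rng):
    """Perturb each particle by a small random rotation (roughening)."""
    out = []
    for q in quats:
        dq = quat_from_rotvec(rng.normal(0.0, sigma, 3))
        q_new = quat_mult(dq, q)
        out.append(q_new / np.linalg.norm(q_new))
    return out
```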
I would like to know if anyone here has used the BlueView SDK (Linux) for retrieving images from the pings obtained by a multibeam sonar (P450, P900, etc.). If so, I'd like to know why I would get a null head when trying to retrieve the head (eventually for the pings to be converted to an image) using the BVTSonar_GetHead() method. My snippet for retrieving the image from a .son file (some_son_data.son) is given below:

#include <iostream>
#include <bvt_sdk.h>   // BlueView SDK header (adjust to your install)
using namespace std;

int main()
{
    BVTSonar son = BVTSonar_Create();
    BVTSonar_Open(son, "FILE", "some_son_data.son");
    if (NULL != son)
        cout << "son not null" << endl;

    BVTHead head = NULL;
    BVTSonar_GetHead(son, 0, &head);

    return 0;
}
I'd like a well-put-together video series of around 30 videos, or anything really, but it needs to be thorough, in easy English, and less mundane. So far, all the resources I have found either stop at resistor colour codes or are projects that tell you "do this and this and this and ta-da, you got this". Is there really no online resource for people to learn electronics? I want to master analog first and then move on to digital, because it's better to spend $0.40 than to spend $95 on components and get the whole thing on a tiny chip. Please bear with me: for about six months I have been searching for a legitimate source, material that is actually meant to teach you. I like pictures and colors.
UPDATE: This exact problem has been solved on StackOverflow. Please read this post there for further explanation and a working solution. Thanks! I am working on an application where I need to rectify an image taken from a mobile camera platform. The platform measures roll, pitch and yaw angles, and I want to make it look like the image is taken from directly above, by some sort of transform from this information. In other words, I want a perfect square lying flat on the ground, photographed from afar with some camera orientation, to be transformed so that the square is perfectly symmetrical afterwards. I have been trying to do this through OpenCV (C++) and Matlab, but I seem to be missing something fundamental about how this is done. In Matlab, I have tried the following:

%% Transform perspective
img = imread('my_favourite_image.jpg');
R = R_z(yaw_angle)*R_y(pitch_angle)*R_x(roll_angle);
tform = projective2d(R);
outputImage = imwarp(img,tform);
figure(1), imshow(outputImage);

where R_z/y/x are the standard rotation matrices (implemented with degrees). For some yaw rotation, it all works just fine: R = R_z(10)*R_y(0)*R_x(0); which gives the result: If I try to rotate the image by the same amount about the X or Y axes, I get results like this: R = R_z(10)*R_y(0)*R_x(10); However, if I rotate by 10 degrees divided by some huge number, it starts to look OK. But then again, this is a result that has no research value whatsoever: R = R_z(10)*R_y(0)*R_x(10/1000); Can someone please help me understand why rotating about the X or Y axes makes the transformation go wild? Is there any way of solving this without dividing by some random number and other magic tricks? Is this maybe something that can be solved using Euler parameters of some sort? Any help will be highly appreciated!
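For what it's worth, here is a hedged sketch in Python/OpenCV of what I understand the fix to be: a pure camera rotation maps to the image through the homography $H = K R K^{-1}$ rather than through $R$ alone, so the camera intrinsics $K$ have to be included. The focal length below is an invented placeholder and would need to come from a calibration.

```python
import cv2
import numpy as np

def rot_x(a):  # rotation matrices, angles in radians
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

img = cv2.imread('my_favourite_image.jpg')
h, w = img.shape[:2]

f = 800.0                                   # focal length in pixels (placeholder, calibrate!)
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1]])

roll, pitch, yaw = np.radians([10.0, 0.0, 10.0])
R = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

H = K @ R @ np.linalg.inv(K)                # homography induced by a pure rotation
warped = cv2.warpPerspective(img, H, (w, h))
cv2.imwrite('rectified.jpg', warped)
```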
I am trying to build an advanced coloured lines following robot with the ability to differentiate between many different coloured lines and follow them. I am looking for the right sensor that will help my robot achieve its objective. As I was researching I came across the EV3 Colour Sensor which can detect up to 7 colours. Is this sensor suitable for my project? What other sensors can I use and how? Thank You
I am trying to get the usb.find command to work properly in a python script I'm writing on Angstrom for the Beagleboard. Here is my code:

#!/usr/bin/env python
import usb.core
import usb.util
import usb.backend.libusb01 as libusb

PYUSB_DEBUG_LEVEL = 'debug'

# find our device
# Bus 002 Device 006: ID 1208:0815
#   idVendor           0x1208
#   idProduct          0x0815
# dev = usb.core.find(idVendor=0xfffe, idProduct=0x0001)
#   iManufacturer          1 TOROBOT.com
dev = usb.core.find(idVendor=0x1208, idProduct=0x0815, backend=libusb.get_backend())

I don't know what's missing, but here is what I do know. When I don't specify the backend, no backend is found. When I do specify the backend usb.backend.libusb01 I get the following error:

root@beagleboard:~/servo# ./pyServo.py
Traceback (most recent call last):
  File "./pyServo.py", line 17, in <module>
    dev = usb.core.find(idVendor=0x1208, idProduct=0x0815, backend=libusb.get_backend())
  File "/usr/lib/python2.6/site-packages/usb/core.py", line 854, in find
    return _interop._next(device_iter(k, v))
  File "/usr/lib/python2.6/site-packages/usb/_interop.py", line 60, in _next
    return next(iter)
  File "/usr/lib/python2.6/site-packages/usb/core.py", line 821, in device_iter
    for dev in backend.enumerate_devices():
  File "/usr/lib/python2.6/site-packages/usb/backend/libusb01.py", line 390, in enumerate_devices
    _check(_lib.usb_find_busses())
  File "/usr/lib/python2.6/ctypes/__init__.py", line 366, in __getattr__
    func = self.__getitem__(name)
  File "/usr/lib/python2.6/ctypes/__init__.py", line 371, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python: undefined symbol: usb_find_busses

What am I missing so that this will work properly? Thank you.
Given a robot with 2 wheels of radius r on one axle of length D, I want to set the wheel speeds so that it turns to an angle phi as fast as possible. The timestep t is 64 milliseconds. I thought the wheel speed could be set to v = ((desired_heading - actual_heading) * circumference_wheel_trajectory) / (2*pi * t * wheel_radius). This will converge to a somewhat correct angle, eventually, but it's very slow and becomes slower as I approach the angle I want to be at. Is there an alternative/better way to do this?
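To illustrate the alternative I am wondering about, here is a small sketch (a guess on my part, with a made-up gain, speed limit and geometry) of a proportional heading controller that saturates at the maximum wheel speed, so the robot turns flat-out while the error is large and only slows down near the target.

```python
import math

MAX_WHEEL_SPEED = 6.0   # rad/s, placeholder for the real motor limit
KP = 8.0                # proportional gain, needs tuning
D = 0.10                # axle length in metres (example value)
R = 0.02                # wheel radius in metres (example value)

def wrap_to_pi(angle):
    """Map an angle error into [-pi, pi] so the robot turns the short way."""
    return math.atan2(math.sin(angle), math.cos(angle))

def wheel_speeds(desired_heading, actual_heading):
    error = wrap_to_pi(desired_heading - actual_heading)
    omega = KP * error                               # body angular rate we ask for
    wheel = omega * (D / 2) / R                      # wheel angular speed for a spin in place
    wheel = max(-MAX_WHEEL_SPEED, min(MAX_WHEEL_SPEED, wheel))
    return -wheel, wheel                             # left wheel backward, right wheel forward

left, right = wheel_speeds(math.radians(90), math.radians(10))
print(left, right)
```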
I'm just wondering: is there any case where the algebraic approach can't solve the problem while the geometric one can? I'm working on a 2-DOF robotic arm (this one). I know the lengths of L1 and L2 and the location that I want for the end effector. I tried calculating the angles algebraically, but it gave me cos(alpha) > 1; when I tried solving it geometrically, I could find the solution. So is it because I used the algebraic way incorrectly? Thank you very much.
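For reference, here is a minimal sketch of the standard algebraic solution for a 2-link planar arm (my own write-up from the law of cosines, not specific to any particular arm). If the target is reachable, $\cos\alpha$ stays within $[-1, 1]$; values outside that range usually mean the target is out of reach or a sign/unit mistake crept in.

```python
import math

def two_link_ik(x, y, L1, L2):
    """Return (shoulder, elbow) angles in radians for a 2-link planar arm (one of the two solutions)."""
    d2 = x * x + y * y
    # law of cosines for the elbow angle
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if c2 < -1.0 or c2 > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(0.3, 0.2, 0.25, 0.2))
```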
Is there a node or package that can send commands to /cmd_vel to move an ATRV-Jr, say, 2 meters forward or turn it 90 degrees to the right/left? I don't want to tell the robot to move at a specified speed. For example, when I use this command rostopic pub /cmd_vel geometry_msgs/Twist '[1.0,0.0,0.0]' '[0.0,0.0,0.0]' the robot starts moving forward until I send another command or send a stop command.
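In case it helps clarify what I am after, here is a rough sketch (my assumption of how it could be done, with a made-up speed and without odometry feedback) of a small rospy node that publishes on /cmd_vel just long enough to cover roughly 2 m and then stops. A proper solution would close the loop on /odom instead of using dead-reckoned time.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

def drive_distance(distance_m, speed=0.2):
    """Open-loop: publish a constant forward speed for distance/speed seconds, then stop."""
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rospy.sleep(1.0)                       # give the publisher time to connect
    rate = rospy.Rate(10)                  # 10 Hz
    cmd = Twist()
    cmd.linear.x = speed
    end_time = rospy.Time.now() + rospy.Duration(distance_m / speed)
    while not rospy.is_shutdown() and rospy.Time.now() < end_time:
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())                   # zero Twist = stop

if __name__ == '__main__':
    rospy.init_node('move_fixed_distance')
    drive_distance(2.0)
```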
I am gathering information for my project and I need to look at libraries and SDKs. Searching the web, I found that OpenNI has a lot of functions, but when I try to find another SDK, I don't find any. I am working with a Kinect and an Xtion, so I need an SDK that works with both. Is there any other SDK or set of libraries that works well with both? Thanks!
I am trying to manually calibrate the on-board accelerometer of an APM 2.6 controller. I am using the following code (I found this somewhere, don't remember where) with Arduino 1.0.5 (in Windows environment) to fetch the accelerometer and gyro data: #include <SPI.h> #include <math.h> #define ToD(x) (x/131) #define ToG(x) (x*9.80665/16384) #define xAxis 0 #define yAxis 1 #define zAxis 2 #define Aoffset 0.8 int time=0; int time_old=0; const int ChipSelPin1 = 53; float angle=0; float angleX=0; float angleY=0; float angleZ=0; void setup() { Serial.begin(9600); pinMode(40, OUTPUT); digitalWrite(40, HIGH); SPI.begin(); SPI.setClockDivider(SPI_CLOCK_DIV16); SPI.setBitOrder(MSBFIRST); SPI.setDataMode(SPI_MODE0); pinMode(ChipSelPin1, OUTPUT); ConfigureMPU6000(); // configure chip } void loop() { Serial.print("Acc X "); Serial.print(AcceX(ChipSelPin1)); Serial.print(" "); Serial.print("Acc Y "); Serial.print(AcceY(ChipSelPin1)); Serial.print(" "); Serial.print("Acc Z "); Serial.print(AcceZ(ChipSelPin1)); Serial.print(" Gyro X "); Serial.print(GyroX(ChipSelPin1)); Serial.print(" Gyro Y "); Serial.print(GyroY(ChipSelPin1)); Serial.print(" Gyro Z "); Serial.print(GyroZ(ChipSelPin1)); Serial.println(); } void SPIwrite(byte reg, byte data, int ChipSelPin) { uint8_t dump; digitalWrite(ChipSelPin,LOW); dump=SPI.transfer(reg); dump=SPI.transfer(data); digitalWrite(ChipSelPin,HIGH); } uint8_t SPIread(byte reg,int ChipSelPin) { uint8_t dump; uint8_t return_value; uint8_t addr=reg|0x80; digitalWrite(ChipSelPin,LOW); dump=SPI.transfer(addr); return_value=SPI.transfer(0x00); digitalWrite(ChipSelPin,HIGH); return(return_value); } int AcceX(int ChipSelPin) { uint8_t AcceX_H=SPIread(0x3B,ChipSelPin); uint8_t AcceX_L=SPIread(0x3C,ChipSelPin); int16_t AcceX=AcceX_H<<8|AcceX_L; return(AcceX); } int AcceY(int ChipSelPin) { uint8_t AcceY_H=SPIread(0x3D,ChipSelPin); uint8_t AcceY_L=SPIread(0x3E,ChipSelPin); int16_t AcceY=AcceY_H<<8|AcceY_L; return(AcceY); } int AcceZ(int ChipSelPin) { uint8_t AcceZ_H=SPIread(0x3F,ChipSelPin); uint8_t AcceZ_L=SPIread(0x40,ChipSelPin); int16_t AcceZ=AcceZ_H<<8|AcceZ_L; return(AcceZ); } int GyroX(int ChipSelPin) { uint8_t GyroX_H=SPIread(0x43,ChipSelPin); uint8_t GyroX_L=SPIread(0x44,ChipSelPin); int16_t GyroX=GyroX_H<<8|GyroX_L; return(GyroX); } int GyroY(int ChipSelPin) { uint8_t GyroY_H=SPIread(0x45,ChipSelPin); uint8_t GyroY_L=SPIread(0x46,ChipSelPin); int16_t GyroY=GyroY_H<<8|GyroY_L; return(GyroY); } int GyroZ(int ChipSelPin) { uint8_t GyroZ_H=SPIread(0x47,ChipSelPin); uint8_t GyroZ_L=SPIread(0x48,ChipSelPin); int16_t GyroZ=GyroZ_H<<8|GyroZ_L; return(GyroZ); } //--- Function to obtain angles based on accelerometer readings ---// float AcceDeg(int ChipSelPin,int AxisSelect) { float Ax=ToG(AcceX(ChipSelPin)); float Ay=ToG(AcceY(ChipSelPin)); float Az=ToG(AcceZ(ChipSelPin)); float ADegX=((atan(Ax/(sqrt((Ay*Ay)+(Az*Az)))))/PI)*180; float ADegY=((atan(Ay/(sqrt((Ax*Ax)+(Az*Az)))))/PI)*180; float ADegZ=((atan((sqrt((Ax*Ax)+(Ay*Ay)))/Az))/PI)*180; switch (AxisSelect) { case 0: return ADegX; break; case 1: return ADegY; break; case 2: return ADegZ; break; } } //--- Function to obtain angles based on gyroscope readings ---// float GyroDeg(int ChipSelPin, int AxisSelect) { time_old=time; time=millis(); float dt=time-time_old; if (dt>=1000) { dt=0; } float Gx=ToD(GyroX(ChipSelPin)); if (Gx>0 && Gx<1.4) { Gx=0; } float Gy=ToD(GyroY(ChipSelPin)); float Gz=ToD(GyroZ(ChipSelPin)); angleX+=Gx*(dt/1000); angleY+=Gy*(dt/1000); angleZ+=Gz*(dt/1000); switch (AxisSelect) { case 0: return angleX; break; 
case 1: return angleY; break; case 2: return angleZ; break; } } void ConfigureMPU6000() { // DEVICE_RESET @ PWR_MGMT_1, reset device SPIwrite(0x6B,0x80,ChipSelPin1); delay(150); // TEMP_DIS @ PWR_MGMT_1, wake device and select GyroZ clock SPIwrite(0x6B,0x03,ChipSelPin1); delay(150); // I2C_IF_DIS @ USER_CTRL, disable I2C interface SPIwrite(0x6A,0x10,ChipSelPin1); delay(150); // SMPRT_DIV @ SMPRT_DIV, sample rate at 1000Hz SPIwrite(0x19,0x00,ChipSelPin1); delay(150); // DLPF_CFG @ CONFIG, digital low pass filter at 42Hz SPIwrite(0x1A,0x03,ChipSelPin1); delay(150); // FS_SEL @ GYRO_CONFIG, gyro scale at 250dps SPIwrite(0x1B,0x00,ChipSelPin1); delay(150); // AFS_SEL @ ACCEL_CONFIG, accel scale at 2g (1g=8192) SPIwrite(0x1C,0x00,ChipSelPin1); delay(150); } My objective use to calibrate the accelerometers (and gyro), so that I can use them without having to depend on Mission Planner. I'm reading values like: Acc X 288 Acc Y -640 Acc Z 16884 Gyro X -322 Gyro Y 26 Gyro Z 74 Acc X 292 Acc Y -622 Acc Z 16854 Gyro X -320 Gyro Y 24 Gyro Z 79 Acc X 280 Acc Y -626 Acc Z 16830 Gyro X -328 Gyro Y 23 Gyro Z 71 Acc X 258 Acc Y -652 Acc Z 16882 Gyro X -314 Gyro Y 22 Gyro Z 78 Acc X 236 Acc Y -608 Acc Z 16866 Gyro X -321 Gyro Y 17 Gyro Z 77 Acc X 238 Acc Y -642 Acc Z 16900 Gyro X -312 Gyro Y 26 Gyro Z 74 Acc X 226 Acc Y -608 Acc Z 16850 Gyro X -321 Gyro Y 26 Gyro Z 68 Acc X 242 Acc Y -608 Acc Z 16874 Gyro X -325 Gyro Y 27 Gyro Z 69 Acc X 236 Acc Y -576 Acc Z 16836 Gyro X -319 Gyro Y 19 Gyro Z 78 Acc X 232 Acc Y -546 Acc Z 16856 Gyro X -321 Gyro Y 24 Gyro Z 68 Acc X 220 Acc Y -624 Acc Z 16840 Gyro X -316 Gyro Y 30 Gyro Z 77 Acc X 252 Acc Y -594 Acc Z 16874 Gyro X -320 Gyro Y 19 Gyro Z 59 Acc X 276 Acc Y -622 Acc Z 16934 Gyro X -317 Gyro Y 34 Gyro Z 69 Acc X 180 Acc Y -564 Acc Z 16836 Gyro X -320 Gyro Y 28 Gyro Z 68 Acc X 250 Acc Y -596 Acc Z 16854 Gyro X -329 Gyro Y 33 Gyro Z 70 Acc X 220 Acc Y -666 Acc Z 16888 Gyro X -316 Gyro Y 19 Gyro Z 71 Acc X 278 Acc Y -596 Acc Z 16872 Gyro X -307 Gyro Y 26 Gyro Z 78 Acc X 270 Acc Y -642 Acc Z 16898 Gyro X -327 Gyro Y 28 Gyro Z 72 Acc X 260 Acc Y -606 Acc Z 16804 Gyro X -308 Gyro Y 31 Gyro Z 64 Acc X 242 Acc Y -650 Acc Z 16906 Gyro X -313 Gyro Y 31 Gyro Z 78 Acc X 278 Acc Y -628 Acc Z 16898 Gyro X -309 Gyro Y 22 Gyro Z 67 Acc X 250 Acc Y -608 Acc Z 16854 Gyro X -310 Gyro Y 23 Gyro Z 75 Acc X 216 Acc Y -634 Acc Z 16814 Gyro X -307 Gyro Y 27 Gyro Z 83 Acc X 228 Acc Y -604 Acc Z 16904 Gyro X -326 Gyro Y 17 Gyro Z 75 Acc X 270 Acc Y -634 Acc Z 16898 Gyro X -320 Gyro Y 31 Gyro Z 77 From what I understand, SPIread(...,...) returns an analog voltage value from the data pins of the sensor, which happens to be proportional to the acceleration values. Right? My question is: How do I go about calibrating the accelerometer? What I've tried till date: I've tried the "place horizontal... place nose down... left side, right side" technique used by mission planner. Basically, when placed on horizontal position, the sensor is experiencing +1g on it's Z axis and 0g in X and Y axis. Left/right side provides ±1g on Y axis and nose down/up provides ±1g on X axis. Now for every orientation, I've passed the raw sensor data through a LPF and then computed the mean, median and SD of this sensor data over 100 iterations. I store this mean, median and SD value in the EEPROM for each axis (one for +1g and one for 0g). Now, when I use the sensor, I load the stats from the EEPROM, match the mean/median and standard deviation with the current reading of 4/5 iterations. 
Here I'm working under the assumption that the values between 0g and +1g (and anything above 1g) can be interpolated/extrapolated from the data using a linear fit. Is this the correct approach for calibration? Can you suggest a better way? I noticed that the maxima/minima for each axis are different. Is this the expected outcome, or is there something wrong in the code? What do I do with the gyro? How do I calibrate its angular rate readings?
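To make my linear-fit assumption explicit, here is a small offline sketch in Python (with invented example readings) of how I imagine turning the six-orientation measurements into a per-axis offset and scale: the +1g and -1g readings give the scale as half their difference and the bias as their midpoint, and corrected readings are then (raw - bias) / scale.

```python
import numpy as np

# Mean raw readings with each axis pointing up (+1g) and down (-1g).
# These numbers are made up; use the means logged during the six-position test.
raw_plus_1g  = np.array([16650.0, 16500.0, 16884.0])    # x up, y up, z up
raw_minus_1g = np.array([-16200.0, -16900.0, -16100.0]) # x down, y down, z down

bias  = (raw_plus_1g + raw_minus_1g) / 2.0   # counts at 0 g
scale = (raw_plus_1g - raw_minus_1g) / 2.0   # counts per 1 g

def calibrate(raw_xyz):
    """Convert raw counts to g using the per-axis linear model."""
    return (np.asarray(raw_xyz, dtype=float) - bias) / scale

print(calibrate([288.0, -640.0, 16884.0]))   # sample reading from the log above
```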
I am totally new to camera interfacing and usage in an embedded project, and would like to use a CMOS vision sensor like this. This project will later be used to power a small robot with on-board video processing using processors like the ARM9. I do have a limitation: until now I have worked only on 8-bit microcontrollers like the ATmega 8, 16, 32 and on the Arduino platform. I think that for better processing we can use an Arduino Due. With the datasheet for the CMOS camera above, we can build its breakout board. But what next? I haven't found any useful resources while searching. All I need to do is capture a short video and store it on an SD card. I have seen these links, but they haven't proved to be very useful as they don't provide the required form factor. I am looking to interface this module to a customized board, so what do I need to understand about the commands it accepts for proper functioning, such as starting to take video and sending it out on an output pin? And if we get video on an output pin, which pin of my controller should I connect that output to, i.e. UART, I2C or SPI?
I am new to robotics and control and I have been thinking about how to deal with problems in real life. I have passed a course in control, but I do not have any idea about control for discrete/digital systems. There are a lot of robots, and dynamic systems in general, which are controlled by microcontrollers or computers with some software, e.g. Simulink. Usually there are sensors which send feedback to the microcontroller or the computer, and the controller sends a signal based on the input signal from the sensors. I was wondering how we decide if the system is discrete or continuous. How can one decide whether to use discrete or continuous blocks in Simulink to control a dynamic system? Does it really matter which one we use? After all, computers are digital, and I think it is easier to work with digital signals; also, do we really have continuous signals? I have not passed any signals course, so my questions might be really easy. I did not find any other place for my question.
I am currently working on a project for school where I need to implement an extended Kalman filter for a point robot with a laser scanner. The robot can rotate with a zero turn radius and drive forward. All motions are piecewise linear (drive, rotate, drive). The simulator we are using does not support acceleration; all motion is instantaneous. We also have a known map (a PNG image) that we need to localize in. We can ray trace in the image in order to simulate laser scans. My partner and I are a little confused as to the motion and sensor models we'll need to use. So far we are modelling the state as a vector $(x,y,\theta)$. We are using the following update equations:

void kalman::predict(const nav_msgs::Odometry msg){
    this->X[0] += linear * dt * cos( X[2] ); //x
    this->X[1] += linear * dt * sin( X[2] ); //y
    this->X[2] += angular * dt;              //theta

    this->F(0,2) = -linear * dt * sin( X[2] ); //t+1 ?
    this->F(1,2) =  linear * dt * cos( X[2] ); //t+1 ?

    P = F * P * F.t() + Q;

    this->linear = msg.twist.twist.linear.x;
    this->angular = msg.twist.twist.angular.z;
    return;
}

We thought we had everything working until we noticed that we had forgotten to initialize P and that it was zero, meaning that no correction was happening. Apparently our propagation was very accurate, as we haven't yet introduced noise into the system. For the motion model we are using the following matrix for F: $F = \begin{bmatrix}1 & 0 & -v\,\Delta t\,\sin(\theta) \\ 0 & 1 & v\,\Delta t\,\cos(\theta) \\ 0 & 0 & 1 \end{bmatrix}$ as it's the Jacobian of our update formulas. Is this correct? For the sensor model we are approximating the Jacobian (H) by taking finite differences of the robot's $x$, $y$ and $\theta$ positions and ray tracing in the map. We talked to the TA, who said that this would work, but I'm still unsure it will. Our prof is away so we can't ask him, unfortunately. We are using 3 laser measurements per correction step, so H is a 3x3. The other issue we're having is how to initialize P. We tried 1, 10 and 100, and they all place the robot outside the map at (-90,-70) when the map is only 50x50. The code for our project can be found here: https://github.com/en4bz/kalman/blob/master/src/kalman.cpp Any advice is greatly appreciated. EDIT: At this point I've gotten the filter to stabilize with basic movement noise but no actual movement. As soon as the robot starts to move, the filter diverges quite quickly and exits the map.
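To check my understanding of the finite-difference Jacobian, here is a small Python sketch of how I picture building H numerically: perturb each state component a little, re-raytrace, and take the difference quotient. The expected_ranges(state) function is a hypothetical stand-in for the actual ray tracing against the map.

```python
import numpy as np

def expected_ranges(state):
    """Hypothetical placeholder: ray-trace the map from (x, y, theta) and
    return the 3 predicted laser ranges used in the correction step."""
    x, y, theta = state
    return np.array([2.0 - 0.1 * x, 1.5 - 0.1 * y, 3.0 - 0.2 * theta])  # fake ranges

def numerical_H(state, eps=(1e-3, 1e-3, 1e-4)):
    """3x3 measurement Jacobian by central finite differences."""
    H = np.zeros((3, 3))
    for j in range(3):
        dp = np.zeros(3)
        dp[j] = eps[j]
        H[:, j] = (expected_ranges(state + dp) - expected_ranges(state - dp)) / (2 * eps[j])
    return H

print(numerical_H(np.array([1.0, 2.0, 0.5])))
```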
I am planning on building a robot with wheels (later legs, if possible) that can move around the room and analyze certain things using a couple of sensors. In later steps I want to add more functions, such as grabbing. Could you recommend a microcontroller? My concern about the Arduino is that there aren't enough slots; the Raspberry Pi seems like it constantly needs a screen for the user. I am a complete amateur when it comes to robotics. However, I am quite familiar with the programming languages Java and Python. Since I wrote a fun Android app for myself, I would love the robot to be compatible with Android, too.
I am testing an industrial robot (ABB IRB 1410) using three simple micron dial gauges to get x, y, z values at a particular point while varying speed, load and distance from the home position. My questions are: Do these three parameters influence the repeatability, or only the accuracy? Using dial gauges, without any relation to the base frame, is it possible to measure accuracy? Is there any other cost-effective method to measure repeatability and accuracy like the above method?
I'd like to build a robot as small as possible and with as few "delicate" parts as possible (the bots will be bashing into each other). I was wondering if it is possible to use a small chip that could receive Bluetooth/IR/Wi-Fi commands to move the motors and, in turn, send back feedback based on sensors such as an accelerometer (to detect impact). I can probably achieve something like this with the PiCy; however, this is slightly bigger than I'd like (due to the size of the Pi) and I'm not sure how long the Pi would last taking continuous impacts. I'd therefore like to try to move the brain (the Pi) to the side of the arena and just use a small chip to receive move commands and send back data from the accelerometer. Do you have any recommendations for such a chip? Wi-Fi would be my choice, but if it impacts the size I could try BT. Edit: After further research it seems an Arduino Nano with a WiFi RedBack shield might do the job, along with something like this for the motors: http://www.gravitech.us/2mwfecoadfor.html
http://www.youtube.com/watch?v=4Vh_R1NlmX0 is from 2011 SWARM and shows RC aircraft or combat wings trying to hit each other in the air. Scoring a hit is pretty rare, and I'd like to increase a pilot's chances by using a computer targeting system. It would be an offline system that gets data from sensors on the airplane. What sensor(s) would work for this application?
I'm building a project that uses a cell phone to control a microcontroller via Bluetooth. I've decided to use the HC-05 Bluetooth module. HC-05 Manual: http://www.exp-tech.de/service/datasheet/HC-Serial-Bluetooth-Products.pdf And the phone I'm using is the Nokia C3-00 (series 40). http://developer.nokia.com/Devices/Device_specifications/C3-00/ The HC-05 module uses the SPP Bluetooth profile while my phone only supports DUN, FTP, GAP, GOEP, HFP, HSP, OPP, PAN, SAP, SDAP profiles. But to my knowledge the phone API utilizes RFCOMM. Question is, can I use this Bluetooth module with my phone? Thanks in advance and my apologies if my question is too trivial as I'm quite new to Bluetooth. -Shaun
I'm trying to get a quad rotor to fly. The on board controller is an Ardupilot Mega 2.6, being programmed by Arduino 1.0.5. I'm trying to fly it in simple autonomous mode, no Radio controller involved. I've done a thorough static weight balancing of the assembly (somewhat like this: http://www.youtube.com/watch?v=3nEvTeB2nX4) and the propellers are balanced correctly. I'm trying to get the quadcopter to lift using this code: #include <Servo.h> int maxspeed = 155; int minspeed = 0; Servo motor1; Servo motor2; Servo motor3; Servo motor4; int val = 0; int throttleCurveInitialGradient = 1; void setup() { val = minspeed; motor1.attach(7); motor2.attach(8); motor3.attach(11); motor4.attach(12); } void loop() { setAllMotors(val); delay(200); val>maxspeed?true:val+=throttleCurveInitialGradient; } void setAllMotors(int val) { motor1.write(val); motor2.write(val); motor3.write(val); motor4.write(val); } But the issue is, as soon as the quadcopter takes off, it tilts heavily in one direction and topples over. It looks like one of the motor/propeller is not generating enough thrust for that arm to take-off. I've even tried offsetting the weight balance against the direction that fails to lift, but it doesn't work (and I snapped a few propellers in the process); Is there something wrong with the way the ESCs are being fired using the Servo library? If everything else fails, am I to assume there is something wrong with the motors? Do I need to implement a PID controller for self-balancing the roll and pitch just to get this quadrotor to take off? Edit 1: Thanks for all the replies. I got the PID in place. Actually, it is still a PD controller with the integral gain set to zero. Here's how I'm writing the angles to the servo: motor1.write((int)(val + (kP * pError1) +(kI * iError1) +(kD * dError1))); //front left motor2.write((int)(val + (kP * pError2) +(kI * iError2) +(kD * dError2))); //rear right motor3.write((int)(val + (kP * pError3) +(kI * iError3) +(kD * dError3))); //front right motor4.write((int)(val + (kP * pError4) +(kI * iError4) +(kD * dError4))); //rear left kI is zero, so I'll ignore that. With the value of kP set somewhere between 0.00051 to 0.00070, I'm getting an oscillation of steady amplitude around a supposed mean value. But the problem is, the amplitude of oscillation is way too high. It is somewhere around +/- 160 degrees, which looks crazy even on a tightly constrained test rig. [ Edit 2: How I calculate the term 'pError' - Simple linear thresholding. I've a precomputed data of the average readings (mean and SD) coming out of the gyro when the IMU is steady. Based on the gyro reading, I classify any motion of the setup as left, right, forward or backward. For each of these motion, I increase the pError term for two of the motors, i.e, for right tilt, I add pError terms to motors 2 & 3, for left tilt, I add pError term to motors 1 & 4 etc. (check the comment lines in the code snippet given above). The magnitude of error I assign to the pError term is = abs(current gyro reading) - abs(mean steady-state gyro reading). This value is always positive, therefore the side that is dipping downwards will always have a positive increment in RPM. ] As I crank up the derivative gain to around 0.0010 to 0.0015, the oscillation dampens rapidly and the drone comes to a relatively stable attitude hold, but not on the horizontal plane. The oscillation dies down (considerably, but not completely) only to give me a stable quadrotor tilted at 90 - 100 degrees with horizontal. 
I'm using only the gyros for calculating the error. The gyros were self calibrated, hence I do expect a fair amount of noise and inaccuracy associated with the error values. Do you think that is the primary reason for the high amplitude oscillation? One other probable reason might be the low update frequency of the errors. I'm updating the errors 6 times a second. Could that be a probable reason it is taking longer to stabilise the error? And, for the steady state error after the wild oscillations dampen, is it necessary to fine tune the integral gain to get rid of that? Please help. Edit 3: I cranked up the frequency of operation to 150+ Hz and what I get now is a very controlled oscillation (within +/- 10 degrees). I'm yet to tune the derivative gain, following which I plan to recompute the errors for the integral gain using a combination of gyro and accelerometer data. Edit 4: I've tuned the P and D gain, resulting in +/- 5 degrees oscillation(approx). I can't get it to any lower than this, no matter how much I try. There are two challenges about which I'm deeply concerned: After 5 to 8 seconds of flight, the quadcopter is leaning into one side, albeit slowly. A) Can this drift be controlled by tuning the integral gain? B) Can the drift be controlled by using accelerometer + gyro fused data? C) Given that my drone still shows +/- 5 degrees oscillation, can I consider this the optimal set point for the proportional and derivative gains? Or do I need to search more? (In which case, I'm really at my wits end here!)
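Regarding question B, here is the kind of accelerometer + gyro fusion I have in mind, written as a minimal complementary-filter sketch (Python, with an arbitrary 0.98 blending factor and made-up sensor values) to remove the slow gyro-only drift in roll and pitch.

```python
import math

ALPHA = 0.98          # blend factor: mostly gyro, slowly pulled toward the accelerometer
DT = 1.0 / 150.0      # loop period, matching a ~150 Hz update rate

roll = pitch = 0.0    # current angle estimates in radians

def update(gyro_x, gyro_y, acc_x, acc_y, acc_z):
    """One complementary-filter step. Gyro rates in rad/s, accel in any common unit (e.g. g)."""
    global roll, pitch
    # angles implied by gravity alone (noisy but drift-free)
    acc_roll  = math.atan2(acc_y, acc_z)
    acc_pitch = math.atan2(-acc_x, math.sqrt(acc_y**2 + acc_z**2))
    # integrate the gyro (smooth but drifting) and blend in the accelerometer
    roll  = ALPHA * (roll  + gyro_x * DT) + (1 - ALPHA) * acc_roll
    pitch = ALPHA * (pitch + gyro_y * DT) + (1 - ALPHA) * acc_pitch
    return roll, pitch

# example step with made-up sensor values
print(update(0.01, -0.02, 0.05, -0.03, 1.0))
```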
I am a beginner with ROS, the Kinect and Ubuntu. What I want is to visualize the Kinect's data in rviz and then run object recognition on it. I've tried a few tutorials but had no luck; all I got was an empty rviz world. Since I am a beginner, I would appreciate step-by-step instructions (preferably for Hydro or Groovy). I would also like to note that I've managed to get video from the Kinect, so the device is working fine.
Most, if not all, robotic lawn mowers are rotary mowers. I presume[1] that a reel mower is more efficient, and it is said to leave the lawn healthier and better cut. So why does the industry go with the other option? [1] I'm assuming the efficiency, as electrical rotary mowers have at least 900 W universal or induction motors, while a manual reel mower is capable of nearly the same cutting speed.
I was wondering: my team and I are working on a communication-oriented robot, and we want to add speech recognition to it. What technology should I use?
This is what I came to understand while reading here and there about flashing a new bootloader, understanding what a bootloader is, etc. The bootloader is supposed to be the first thing that runs when I power up my Arduino Duemilanove (or microcontrollers in general). It does some setup, then runs my app. It also listens to the USB cable, so that if I upload some code it erases the old one and runs the new one. There are 2 sections in the memory, one for the bootloader (S1) and one for the app (S2). Code in S1 can write to S2 but not to S1 (or it is strongly discouraged, I don't remember). There are things that I don't understand though: If I upload some new code while my app is running, the upload works. What happened? I thought the bootloader had handed control to my app. How can we flash a new bootloader? If the bootloader is the thing that runs in section 1 (S1) and can only write to S2, and if the bootloader is the only thing that listens for new code uploads, ... Can you help me correct my thoughts and answer my questions? Many thanks!
I'm a newbie in UAV stuff; your advice would be very helpful. I want to start mapping using a fixed-wing UAV. My main choice was the APM 2.6, but after some research I found that the APM 2.6 won't be actively maintained in the future because future releases will target the Pixhawk. I wonder if I should choose the APM 2.6 for its stability; on the other hand, I don't see the benefits of the Pixhawk apart from its long-term support. Or, being a newbie, should I start with something experimental like the APM 2.5.2 (a cheap Chinese version of the APM)? Thanks in advance.
Everybody here is probably aware of the Sharp distance sensors (GP2Y0 series, e.g. GP2Y0A02YK0F). They use a diode to emit infrared light and measure the angle of the reflected light with a PSD (i.e. they do triangulation). They seem to be the only producers of this technology. I am only aware of a few similar but incomparable devices (sensors of ambient light and distance or proximity like Si114x). Which other comparable products are out there? Another way to ask this question: "What are the different ways to build a 10cm - 200cm range low-cost IR range sensor, and what is an example of each of those ways?"
We are planning to recalibrate an ABB IRB 1410 robot and conduct a series of accuracy and repeatability tests using a FaroArm. My questions are: i) Is there any physical identification marker on the robot which can be used to identify the location of the base coordinate frame? ii) If locating the base frame is not possible, can accuracy be measured from a fixed arbitrary point in space?
BEAM robotics seems to be a good approach to teach learners about electronics in robotics. But can these robots be like regular programmed "cognitive" robots? Can these robots, with just analog circuits, take us to the level of robotic assistants, worker robots and other kinds of self-sufficient autonomous robots? Specifically, for mission-critical robots, I want to know: 1) What are the areas in robotics which are practically impossible without a real-time software system? 2) What areas of the field can be done without programming, and are those areas feasible without an onboard software system? 3) Could an intelligent space rover work without a CPU in the future?
I'm entering a boat race event. A simple boat has to be made, and all I have is 5 days. The restriction is a 24 V motor with not more than 1000 rpm. What material and shape would you suggest for the boat? I know basic circuits, and we have to make a boat with a wired circuit. The circuitry I can do, but what would be an ideal hull shape for the boat to achieve maximum speed?
I'm having a problem controlling my BLDC motor when starting up and when running at low rpm. I have a custom board that measures the rotation of the motor using an optical sensor and sends servo PWM commands to an ESC. The problem is that I can't start the motor smoothly. When I slowly increase the control signal, it starts stuttering and then jumps directly up to about 1500 rpm. Is there a way to improve this situation without using a sensored motor/ESC combo?
I am beginning to learn about the hardware aspect of robotics, and in order for a lot of this new information to be useful to me (whether on this site or elsewhere) I will need a basic understanding of the terminology. One thing that comes up repeatedly is different electric motors: servo, DC motor, brushless motor, step motor, gear motor... etc Is there a comprehensive list? Or at least a list of the most common ones, and their descriptions / differences?
In software engineering startups, you generally go to a room with a computer or bring your own laptop, and write code. I'm interested in how robotics startups work: Is there a separate location for designing the robots? Take for example, Anki. Do they have separate research labs for designing robots? How does a robot get from a single design to being manufactured? I couldn't find a better place on SE to post this (the startups business section is defunct): Please link me to another SE site if there is a better place to ask this question.
I have a BOE Bot (BASIC Stamp 2 based) robot that came with rubber band "tires" for the wheels. However, they are very tight and I can't figure out how to get them onto the plastic hubs. The furthest I've gotten was mostly covering the outside, but when trying to make it less crooked it came off again. Is there some trick to getting those pesky tires to stay on?
I'm programming a PIC16F77 with a ProPic 2, which communicates via the serial port. As I don't have this port on my PC, I used a serial-to-USB adapter. I'm using IC-Prog on Windows 8. I've programmed it before, but that was on Windows XP using the driver specified at http://www.ic-prog.com/index1.htm, and it worked perfectly. On this OS the only difference is the adapter, and the program gives some errors while loading the driver: "Error occured (Access is denied) while loading the driver!" "Privileged instruction"
I have a Micro Magician v2 micro controller. It has a A3906 Dual FET “H” bridge motor driver built in. In the manual it states "Electronic braking is possible by driving both inputs high." My first question is, what is the purpose of these brakes? If I set the left/right motor speed to 0, the robot stops immediately anyway. What advantage is there to using these brakes, or am I taking the word "brake" too literally? My second question is, the driver has "motor stall flags that are normally held high by pullup resistors and will go low when a motor draws more than the 910mA current limit. Connect these to spare digital inputs so your program will know if your robot gets stuck." But when my robot hits a wall, the wheels just keep on spinning (slipping if you will), I take it these stall flags can be used on a rough surface where the wheels have more friction?
Over the last couple of years I've had good success with my technology startups and now looking to enter into robotics. I was interested in robotics and automation ever since I was a kid (yes, that sounds nerdy). So my question is: Where to get started, what to build? and how to sell? And lastly, how difficult it is to sell in this industry?
I am wondering if it would be possible to get a Kinect to work with a Udoo board (Quad). I have found that there is now support for ROS + Udoo. I also saw a question asked about Xtion + Udoo, which shows some more interest. It would really be great if Kinect + Udoo were possible. I was hoping to implement perhaps a miniature version of a TurtleBot. I wish someone could give some insights on this matter. Thanks.
I'm building a quadcopter. It will be controlled by a BeagleBone Black with several sensors and a cam. I'm new to quadcopters, therefore it would be nice if someone could have a look at my setup before I buy the parts. Frame: X650F - 550mm Battery: Turnigy nano-tech 5000mah 4S 25~50C Lipo Pack Motor: NTM Prop Drive 28-30S 800KV / 300W Brushless Motor ESC: Skywalker 4x 25A Brushless This sums up to ~2 kg, still giving me room for about 700 g of payload. What do you think? Did I miss something important? Any better ideas for some of the parts?
I would like to ask a question about the zero crossing event in trapezoidal commutation of a brushless DC motor. Here is a waveform that shows that the zero crossing event occurs every 180 electrical degrees in sinusoidal commutation: But what about trapezoidal commutation? Here is the waveform that I found for trapezoidal commutation: So as you see, the zero crossing occurs 30 electrical degrees after the previous commutation and 30 electrical degrees before the next commutation. In a motor with one pole pair, we would have 30 electrical degrees = 30 mechanical degrees, so we would have this waveform: You see that the zero crossing in phase A occurs when the magnet faces phase C, or in other words, 30 electrical degrees after the last commutation. My question is why the zero crossing happens at that moment, and not after 60 electrical degrees, or 15 electrical degrees. Is it related to some law of induction? What is that law, and how does it appear in this motor? Can someone explain this to me with some pictures?
I'm programming in Lua to control computers and robots in-game in the Minecraft mod ComputerCraft. ComputerCraft has robots called Turtles that are able to move around in the grid-based(?) world of Minecraft. They are also equipped with sensors, making them able to detect blocks (obstacles) adjacent to them. Turtles execute Lua programs written by a player. As a hobby project I would like to program a goto(x, y, z) function for my Turtles. Some Turtles actually have equipment to remove obstacles, but I would like to make them avoid obstacles and thus prevent the destruction of the in-game environment. I have no prior experience in robotics, but I have a B.Sc. in Computer Science and am now a lead web developer. I did some research and found some basic strategies, namely grid-based and quadtree-based. As I have no experience in this area, these strategies might be old school. Note that Turtles are able to move in three dimensions (they can even hover at any height). I could share the obstacles, as well as obstacle-free coordinates, in a common database as they are discovered, if that would help, as most obstacles are stationary once they are placed. What are my best options in this matter? Are there any easy fixes? Where do I look for additional resources? Thank you very much in advance! :-)

EDIT: Thank you for your feedback! I started reading the book Artificial Intelligence: A Modern Approach, 3rd Edition to get up to speed on basic theory as suggested by Ian. Pointers to other educational resources are appreciated. Also, I started developing a basic navigation algorithm for moving in unexplored areas, similar to what Cube suggested. The priority for me is as few moves as possible, as each additional move costs time and fuel cells (approx. 0.8 seconds and 1 fuel cell per move in any direction). I plan on using the Euclidean heuristic in a greedy best-first search for computing a path that is expected to be fairly good at reducing the number of moves to reach the goal, if enough data is available in the shared database from previous exploration. Each time an obstacle is reached, I plan to use the following very basic algorithm, exploiting the fact that Turtles are able to move vertically:

1. Calculate the direct horizontal path to the goal.
2. Turn to the direction of the next step of the path.
3. If an obstacle is detected in front of the Turtle, go to 5. If this is the 4th time that an obstacle is detected in front of the Turtle after moving up, go to 6.
4. Move forward, go to 2.
5. If no obstacle is detected above the Turtle, move up and go to 3, else go to 7.
6. Backtrack to the coordinates the Turtle was at before moving upwards.
7. Turn left, go to 3.

When using this algorithm, records are kept of the explored coordinates and uploaded to a shared database. However, there are some cases that I did not consider:

- When should it move down?
- What if the goal is not reachable from a coordinate directly above it?
- If no horizontal move in any direction is possible, how long should it backtrack?
- How do I detect unreachable goals (obstacles can then be removed if requested)?

Maybe, if enough exploration data of the area is available, a Jump Point Search could be performed to calculate an optimal path. However, this assumes a 2D map. How can I take the 3rd dimension into account? Also, what would be a good data structure to store the exploration data?
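As an illustration of the kind of search I could run over the shared database (sketched in Python for readability; the actual Turtle program would be Lua), here is a minimal A* over a 3D grid with Manhattan distance, treating known obstacle coordinates as blocked and everything else as free. The coordinates and obstacle set are invented for the example.

```python
import heapq

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def manhattan(a, b):
    return sum(abs(p - q) for p, q in zip(a, b))

def astar(start, goal, blocked):
    """A* on an unbounded 3D grid; 'blocked' is a set of known obstacle coordinates."""
    open_set = [(manhattan(start, goal), 0, start)]
    came_from = {}
    g = {start: 0}
    while open_set:
        _, cost, node = heapq.heappop(open_set)
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return list(reversed(path))
        for dx, dy, dz in MOVES:
            nxt = (node[0] + dx, node[1] + dy, node[2] + dz)
            if nxt in blocked:
                continue
            new_cost = cost + 1
            if new_cost < g.get(nxt, float('inf')):
                g[nxt] = new_cost
                came_from[nxt] = node
                heapq.heappush(open_set, (new_cost + manhattan(nxt, goal), new_cost, nxt))
    return None  # goal unreachable with what is currently known

blocked = {(1, 0, 0), (1, 1, 0), (1, -1, 0)}      # a small wall
print(astar((0, 0, 0), (3, 0, 0), blocked))
```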
I am trying to simulate a quadcopter model in Simulink. I want to implement a PID controller for each of X, Y, Z and the phi, theta, psi angles. The PID takes the error, which is to be minimized, as input. For X, Y and Z, the desired values are entered by the user and the actual values are calculated from the accelerometer data; hence, the error is the desired value minus the actual value. For phi, theta and psi, the actual values may be obtained from the gyroscope and accelerometer (sensor fusion), but I don't actually know how to calculate the desired values for each of them, since the user is usually interested in giving the position values X, Y and Z as the setpoint, not the angle values! The absence of the desired values prevents me from calculating the angular error, which is needed for the PID controller.
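To show what I am missing, here is a rough sketch (my guess at the usual cascaded structure, with made-up gains and a small-angle assumption) of how the outer X/Y position loop could generate the desired pitch and roll for the inner angle loop, by rotating the world-frame position errors into the body frame through the yaw angle. The signs of the mapping depend on the chosen frame conventions and would need to be checked against the actual model.

```python
import math

KP_POS, KD_POS = 0.8, 0.4          # outer-loop gains (placeholders, need tuning)
MAX_TILT = math.radians(20)        # never ask for more than 20 degrees of tilt

def desired_angles(err_x, err_y, vel_x, vel_y, yaw):
    """World-frame position errors/velocities -> desired pitch and roll (radians)."""
    # desired accelerations in the world frame from a PD law
    ax = KP_POS * err_x - KD_POS * vel_x
    ay = KP_POS * err_y - KD_POS * vel_y
    # rotate into the body frame using the current yaw
    ax_b =  math.cos(yaw) * ax + math.sin(yaw) * ay
    ay_b = -math.sin(yaw) * ax + math.cos(yaw) * ay
    # small-angle mapping: tilt is proportional to the acceleration we want, divided by g
    pitch_des = max(-MAX_TILT, min(MAX_TILT,  ax_b / 9.81))
    roll_des  = max(-MAX_TILT, min(MAX_TILT, -ay_b / 9.81))
    return pitch_des, roll_des

print(desired_angles(1.0, -0.5, 0.0, 0.0, math.radians(30)))
```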
I'm reading Probabilistic Robotics by Thrun. In the Kalman filter section, they state that $$ x_{t} =A_{t}x_{t-1} + B_{t}u_{t} + \epsilon_{t} $$ where $\epsilon_{t}$ is the state noise vector, and in $$ z_{t} = C_{t}x_{t} + \delta_{t} $$ $\delta_{t}$ is the measurement noise. Now, I want to simulate a system in Matlab. Everything is straightforward to me except the state noise vector $\epsilon_{t}$. Unfortunately, the majority of authors don't care much about the technical details. My question is: what is the state noise vector, and what are its sources? I need to know because I want my simulation to be reasonably realistic. The measurement noise is evident: the specification sheet states that the sensor has uncertainty ${\pm} e$.
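For the simulation itself, this is how I currently imagine generating $\epsilon_t$, written as a Python sketch with an invented process covariance (in the book the process noise is zero-mean Gaussian, lumping together unmodelled effects such as wheel slip, motor variation and discretisation error):

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1
A = np.array([[1, dt], [0, 1]])            # simple position/velocity system
B = np.array([[0.5 * dt**2], [dt]])

Qcov = np.diag([1e-4, 1e-3])               # process noise covariance (made-up values)
Rvar = 0.05**2                             # from the datasheet uncertainty +/- e

x = np.array([0.0, 0.0])
u = 1.0                                    # constant test input
for _ in range(100):
    eps = rng.multivariate_normal(np.zeros(2), Qcov)   # the state noise vector
    x = A @ x + B.ravel() * u + eps                    # propagate the true state
    z = x[0] + rng.normal(0.0, np.sqrt(Rvar))          # noisy measurement of position
```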
I need to find a way to solve invrese kinematics for Comau SMART-3 robot. Could you give me a few hints where to start looking? I have no idea about robotics and I couldn't find an algorithm for this specific robot.
From what I've read so far, it seems that a Rao-Blackwellized particle filter is just a normal particle filter used after marginalizing a variable from: $$p(r_t,s_t | y^t)$$ I'm not really sure about that conclusion, so I would like to know the precise differences between these two types of filters. Thanks in advance.
I have an old audio amplifier that has one of those switches to turn it on. I'm looking for the simplest motor/robotic arm (or any other relevant component) to control this switch, eventually via a Raspberry Pi. Are there any options?
Can a gyroscopic sensor (comparable to the type typically used in smartphones) that is embedded in this black object, which is rotating around the X axis, measure the number of rotations around the X axis if the object may or may not also be rotating at the same time in random ways (partial or full rotations, at various speeds and directions) around the Z axis? If so, is the Z-axis rotation irrelevant, or is there special mathematics involved in filtering out the effects of the Z rotation on the measurement of the X-axis rotation? Or does another measurement, such as acceleration or magnetism, need to be used to solve the problem? Is there any impact in using a 2-axis vs. a 3-axis gyroscopic sensor for this measurement scenario?
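My current (possibly naive) understanding, written out as a tiny sketch with invented sample data: if the gyro is rigidly mounted so that its X axis stays aligned with the object's spin axis, the X-rate channel measures that spin directly, and counting turns is just integration of that one channel. If the sensor were not aligned with the spin axis, the rotation would leak into the other channels and a full orientation estimate would be needed instead.

```python
def count_turns(x_rates_dps, dt):
    """Integrate the body-frame X gyro rate (deg/s) and return full rotations."""
    total_deg = 0.0
    for rate in x_rates_dps:
        total_deg += rate * dt
    return total_deg / 360.0

# 2 seconds of samples at 100 Hz, spinning at ~540 deg/s about X
samples = [540.0] * 200
print(count_turns(samples, 0.01))   # ~3 full turns
```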
We want to create a robot that will localize itself using the signals of Wi-Fi routers. Which sensors should we buy to detect the strength of 3 Wi-Fi signals? Which of the following is suitable for us: http://www.dfrobot.com/index.php?route=product/category&path=45_80 or are there other, more suitable options? We are using an Arduino as the platform.
I previously thought that an accelerometer on a quadcopter is used to find the position by integrating the data obtained from it. After I read a lot and watched this YouTube video (specifically at time 23:20) about sensor fusion on Android devices, I think I now understand its use a little better. I realized that it's hard to filter out the considerable noise, and the error that accumulates when integrating it, to get useful information about the position. I also realized that it is used along with the gyroscope and magnetometer to provide fused information about orientation, not linear translation. For outdoor flight, I thought of using GPS data to get the relative position, but is it accurate enough to enable position measurement with good precision? How do commercial quadcopters measure position (X, Y and Z)? Is the GPS data fused with the accelerometer data?
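To make the fusion idea concrete for myself, here is a 1-D sketch (made-up noise levels and rates) of a tiny Kalman filter that predicts position and velocity from the accelerometer at a high rate and corrects with GPS at a low rate, which is my understanding of roughly how an X/Y/Z estimate could be obtained.

```python
import numpy as np

dt = 0.01                                  # accelerometer prediction rate: 100 Hz
F = np.array([[1, dt], [0, 1]])
B = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])                 # GPS measures position only
Q = np.diag([1e-5, 1e-3])                  # trust in the accel-driven model (made up)
R = np.array([[2.5**2]])                   # GPS std-dev ~2.5 m (made up)

x = np.zeros((2, 1))
P = np.eye(2)
rng = np.random.default_rng(2)

for k in range(1000):                      # 10 seconds of flight along one axis
    accel = 0.2 + rng.normal(0, 0.05)      # measured acceleration (true 0.2 m/s^2 + noise)
    x = F @ x + B * accel                  # predict with the accelerometer
    P = F @ P @ F.T + Q
    if k % 100 == 0:                       # GPS fix once per second
        z = 0.5 * 0.2 * (k * dt)**2 + rng.normal(0, 2.5)
        y = z - (H @ x)[0, 0]
        S = H @ P @ H.T + R
        K = P @ H.T / S[0, 0]
        x = x + K * y
        P = (np.eye(2) - K @ H) @ P

print("estimated position and velocity:", x.ravel())
```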
I need help differentiating between AI and robotics. Are AI and robotics two different fields, or is robotics a subject within AI? I want to pursue a career in AI and robotics, so I need your valuable suggestions. I searched the web and also looked at some universities that I want to apply to, and I cannot find what I am searching for.
I am having difficulty sustaining a connection between my Raspberry Pi (Model B running Raspbian) and my Arduino (Uno) while sending signals from the Raspberry Pi to a continuously rotating servo (PowerHD AR- 3606HB Robot Servo) via Python. I'm not sure if there is a more efficent way of sending servo instructions via Python to the Arduino to rotate the servo. I'm attempting to communicate signals from the Raspberry Pi to the Arduino via USB using what I believe is considered a "digital Serial connection". My current connection: Wireless Xbox 360 Controller -> Wireless Xbox 360 Controller Receiver -> Raspberry Pi -> Externally Powered USB Hub -> Arduino -> Servo Servo connection to Arduino: Signal (Orange) - pin 9 Power (Red) - +5 V Ground (Black) - GND On the Raspberry Pi I have installed the following (although not all needed for addressing this problem): xboxdrv pyserial Python-Arduino-Command-API PyGame lego-pi Arduino The sketch I've uploaded to the Arduino Uno is the corresponding sketch provided with the Python-Arduino-Command-API. *Again, I'm not positive that this is the best method means of driving my servo from Python to Arduino (to the servo). From the Raspberry Pi, I can see the Arduino is initially correctly connected via USB: pi@raspberrypi ~/Python-Arduino-Command-API $ dir /dev/ttyA* /dev/ttyACM0 /dev/ttyAMA0 and pi@raspberrypi ~/Python-Arduino-Command-API $ lsusb Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. Bus 001 Device 004: ID 045e:0719 Microsoft Corp. Xbox 360 Wireless Adapter Bus 001 Device 005: ID 1a40:0201 Terminus Technology Inc. FE 2.1 7-port Hub Bus 001 Device 006: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter Bus 001 Device 007: ID 046d:c52b Logitech, Inc. Unifying Receiver Bus 001 Device 008: ID 2341:0043 Arduino SA Uno R3 (CDC ACM) From the Raspberry Pi, I'm able to rotate the servo as a test clockwise for one second, counter-clockwise for one second, then stop the servo, with the following Python script: #!/usr/bin/env python from Arduino import Arduino import time board = Arduino(9600, port='/dev/ttyACM0') board.Servos.attach(9) # Declare servo on pin 9 board.Servos.write(9, 0) # Move servo to full speed, clockwise time.sleep(1) # Sleep for 1 second print board.Servos.read(9) # Speed check (should read "0") board.Servos.write(9, 180) time.sleep(1) print board.Servos.read(9) # (Should read "180") board.Servos.write(9, 90) print board.Servos.read(9) board.Servos.detach(9) The output via the Raspberry Pi terminal reads: 0 180 90 Although this only performs full-speed in both direction (as well as the calibrated "stop" speed of 90), I have successfully alternated from a full-speed to slower speeds, for example, going from 0 up to 90 in increments of 10. 
From the Raspberry Pi, I'm able to send input from my Xbox controller to drive the servo with a small custom Python script I've created, together with xboxdrv (which works flawlessly with my other projects):

#!/usr/bin/python
from legopi.lib import xbox_read
from Arduino import Arduino

# To catch Ctrl+C
import signal
import sys

# The deadzone within which we ignore inputs, approximately 1/3 of total possible input
DEADZONE = 12000

def signal_handler(signal, frame):
    print "Stopping Wrapper"
    sys.exit(0)

# Capture Ctrl+C so we can shut down nicely
signal.signal(signal.SIGINT, signal_handler)
print "Starting Wrapper"
print "Press Ctrl+C at any time to quit"

board = Arduino(9600, port='/dev/ttyACM0')
board.Servos.attach(9)
board.Servos.write(9, 90)

for event in xbox_read.event_stream(deadzone=DEADZONE):
    print "Xbox event: %s" % (event)
    # If the RB button is being held, rotate the servo counter-clockwise at full speed.
    # When the RB button is released, stop the servo.
    if(event.key=='RB'):
        if(event.value>0):
            board.Servos.write(9, 180)
            print board.Servos.read(9)
        else:
            board.Servos.write(9, 90)
            print board.Servos.read(9)
        continue

This script runs, and I'm able to control the servo using the RB button on my controller. However, it eventually fails - sometimes after minutes, sometimes after seconds (rapid and intermittent input seems to have no influence on how quickly it crashes). Input is no longer read by the script, the terminal comes to a halt, the servo freezes on whatever the last command was (either spinning endlessly or stopped), and I'm forced to Ctrl+C out of the script. If I check whether the Arduino is still connected to the Raspberry Pi, it shows that it has reconnected itself as "ttyACM1" (from /dev/ttyACM0 to /dev/ttyACM1):

pi@raspberrypi ~/robotarm $ dir /dev/ttyA*
/dev/ttyACM1  /dev/ttyAMA0

Why does the Arduino reconnect itself? Is there some other way I should be processing this information? Distance to the wireless Xbox receiver is not a factor, as all of these pieces are adjacent to one another for testing purposes. It will prove impossible to use this servo as a wheel for my robot if I'm constantly tending to this issue.
I've spent quite some time researching this, but most of my Google search results have turned up academic research papers that are interesting but not very practical. I'm working on a target/pattern recognition project where a robot with a small wireless camera attached to it will attempt to locate targets as it moves around a room. The targets should ideally be as small as possible (something like the size of a business card or smaller), but could be (less ideally) as large as 8x10 inches. The targets will be something easily printable. The pattern recognition software needs to recognize whether a target (only one at a time) is in the field of vision, and needs to accurately differentiate between at least 12 different target patterns, ideally from roughly a 50x50 pixel portion of a 640x480 image. Before playing with the camera, I had envisioned using somewhat small printed barcodes and the excellent zxing library to recognize them. As it turns out, the camera's resolution is terrible - 640x480, grainy, and not well focused. Here is an example still image. It's not well suited to capturing barcodes, especially while moving. I think it could work with 8x10 barcodes, but that's really larger than I'm looking for. (I'm using this particular camera because it is tiny, light, cheap, and includes a battery and Wi-Fi.) I'm looking for two things: a suggestion or pointer to an optimal pattern that I could use for my targets, and a software library and/or algorithm that can help me identify these patterns in images. I have NO idea where to start with the right type of pattern, so suggestions there would really help, especially if there is a project out there that does something resembling this. I've found OpenCV and OpenSIFT, which both seem like potential candidates for software libraries, but neither seemed to have examples of the type of recognition I'm talking about. I'm thinking that picking the right type of pattern is the big hurdle to overcome here, so any pointers to the optimal type of pattern would be great. Being able to recognize the pattern from all different angles is a must. So far, my idea is to use patterns that perhaps look something like this, where the three concentric color rings are simply either red, green, or blue - allowing for up to 27 unique targets, or 81 if I use 4 rings. From about 2 feet, the capture of a 3x3 inch target (from my computer screen) looks like this, which seems like it would be suitable for analysis, but I feel like there should be a better type of pattern that would be more compact and easier to recognize - maybe just a plain black-and-white pattern of some sort with shapes on it? Pointers to an optimal approach for this are greatly appreciated.
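For what it's worth, one family of printable black-and-white patterns I have not yet tried is OpenCV's fiducial markers (ArUco). Below is only a sketch of the test I was planning to run; it assumes opencv-contrib-python is installed, the exact aruco function names vary a little between OpenCV versions, and "still.jpg" is just a placeholder for a grab from the wireless camera:

import cv2

# Each dictionary entry is one distinct marker id, so 12+ unique targets are easy.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def find_targets(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    # ids is None when nothing was detected; otherwise it is an Nx1 array of marker ids.
    return [] if ids is None else ids.flatten().tolist()

frame = cv2.imread("still.jpg")   # placeholder file name
print(find_targets(frame))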
I've gone through tutorials on how to build circuits and control DC, stepper, and servo motors. I may not understand everything about them internally, but I have a good basic foundation. Now I'm at a loss for where to go from here. I'm more interested in learning how to make mechanical devices with them than in just the electronics behind the devices. While I know that they go hand in hand, I want to learn more about the mechanical aspects of using motors. I have in mind several ultimate goal projects that I want to work toward, like home automation, model RC vehicles, autonomous robots, etc. But I'm sure that there is more to mechanics that I need to learn before I can jump into a project like that. He who would learn to fly one day must first learn to stand and walk. Are there hobbyist mechanical starter kits or starter projects for learning how to make effective use of electric motors? I don't necessarily need a specific product endorsement, but rather a general idea of what important concepts to learn and materials/projects to help me learn them. My apologies if this question is too broad. I can refine it if necessary.
At the moment I am creating an Android program that will steer my simple 3-wheel robot (2 motors, plus 1 wheel for balance) in real time along a path drawn by the user on the screen. The robot is operated over WiFi and has 2 motors that react to any input signals. Imagine the user drawing a path for this robot on a smartphone screen. The app has acquired all the points on the XY axes, always starting at (0,0). Still, I have no idea how to convert bare points into voltage inputs for both motors. Commands will be sent at approximately 60 Hz, so quite fast. Maybe not every single point will be taken into consideration - there will surely be some skips - but that is irrelevant, since the path does not have to be followed perfectly by the robot, just within a reasonable error margin. Do you have any idea how to make the robot follow defined points that together form a path? Edit 10.01: The voltage will be computed by the robot, so the input to each motor is between -255 and 255, and the velocity should increase or decrease linearly within those bounds. Additionally, I would like to solve this as if conditions were perfect; I don't need any elaborate feedback models. Let's assume that all the data is accurate - no sensors or additional devices, just the XY path and the required input (ignore wheel slip too).
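To make the question concrete, this is the kind of point-to-motor conversion I have been imagining, written as a sketch under my "perfect conditions" assumption: the robot's pose (x, y, heading) is known from dead reckoning, the gain and cruise speed below are made-up placeholders, and -255/255 is full reverse/forward on each motor:

import math

MAX_PWM = 255
K_TURN = 300.0      # how aggressively to steer toward the next point (placeholder)
BASE_SPEED = 150    # cruising command (placeholder)

def motor_commands(pose, waypoint):
    x, y, heading = pose
    wx, wy = waypoint
    desired = math.atan2(wy - y, wx - x)
    # wrap the heading error into [-pi, pi]
    error = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    turn = K_TURN * error
    left = max(-MAX_PWM, min(MAX_PWM, BASE_SPEED - turn))
    right = max(-MAX_PWM, min(MAX_PWM, BASE_SPEED + turn))
    return left, right

print(motor_commands((0.0, 0.0, 0.0), (1.0, 0.5)))  # steer toward the next drawn point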
An orrery is a clockwork model of the solar system. I am trying to emulate one in 2D. To emulate it, I need to know what goes on inside. Can someone please explain the basic principle behind the clockwork, or direct me to a resource that explains all the machinery inside a simple orrery?
My task is to apply forces to control a 3-DOF parallel manipulator. Forces are applied by linear actuators, and friction is neglected. The end-effector of the robot is supposed to follow a generated path; for this example, let it be a simple circle. So far I have made a simplified 3D model of the robot and calculated its inverse kinematics. The supervisor of my engineering project doesn't really know how to do this, but he said that calculating the forward dynamics is too complex and that I shouldn't go that way. Could you tell me what the easiest approach would be?
I'm running a KK2.0 board + 4 20 A Multistar ESCs + 4 EMax GF 2215-20 motors + 4 slow-fly props. Once it gets about a foot off the ground, the entire quadcopter starts wobbling like crazy (no auto-level). Any ideas? I'll add some video if needed.
I am in the process of creating a power prediction model for the Hubo robot. The robot has 38 degrees of freedom, an on-board computer, some sensors, and motor boards. The motors are powered through the motor boards, and all of these boards are powered through a main power board located at the robot's chest. My model should be able to predict the power draw for any trajectory of the robot. For instance, if the robot raises its hand from 0 degrees to 180 degrees, my model should be able to predict the power. Here's the idea I came across: equate the electrical torque to the mechanical torque at each joint. For instance, if the right arm pitch moves from 0 to 180 degrees, can I do the following? $mg\sin(\theta) = K_t I$ However, I am not getting a proper prediction; the current value is way off from what we can read from software installed on the robot. I know there are losses, but even so it's off. I was wondering if there are other approaches, or whether there is a fault in my approach. Once this works, I can add up all the joint currents for a specific trajectory and give an estimate of the total power consumption.
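For reference, this is the kind of static, gravity-only calculation I am doing right now, written out as a small script. The mass, arm length, torque constant, gear ratio, and bus voltage below are placeholders rather than Hubo's real values, and the model ignores link inertia, friction, and gearbox losses:

import math

def predicted_current(theta_deg, m=2.0, L=0.3, Kt=0.05, G=100.0):
    # gravity torque of a point mass m at distance L from the joint [Nm]
    tau_joint = m * 9.81 * L * math.sin(math.radians(theta_deg))
    tau_motor = tau_joint / G          # torque the motor itself must supply
    return tau_motor / Kt              # I = tau / Kt

V_bus = 48.0                           # placeholder bus voltage [V]
trajectory = range(0, 181, 10)         # arm pitch from 0 to 180 degrees
currents = [predicted_current(a) for a in trajectory]
avg_power = V_bus * sum(currents) / len(currents)
print(avg_power)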
In continuation of the question I asked here: Quadcopter instability with simple takeoff in autonomous mode ... I'd like to ask a few questions about implementing a basic PID for a quadrotor controlled by an APM 2.6 module. (I'm using a frame from 3DRobotics.) I've stripped down the entire control system to just two PID blocks, one for controlling roll and another for controlling pitch (yaw and everything else I'll think about later). I'm testing this setup on a rig which consists of a freely rotating beam, to which I've tied down two of the arms of the quadrotor. The other two are free to move. So I'm actually testing one degree of freedom (roll or pitch) at a time. Check the image below: here A, B mark the freely rotating beam on which the setup is mounted.

With careful tuning of the P and D parameters, I've managed to attain a sustained flight of about 30 seconds. But by 'sustained', I simply mean a test where the drone isn't toppling over to one side. Rock-steady flight is still nowhere in sight, and more than 30 seconds of flight also looks quite difficult. It wobbles from the beginning. By the time it reaches 20-25 seconds, it starts tilting to one side. Within 30 seconds, it has tilted to one side by an unacceptable margin. Soon enough, I find it resting upside down.

As for the PID code itself, I'm calculating the proportional error from a complementary filter of gyro + accelerometer data. The integral term is set to zero. The P term comes to about 0.39 and the D term is 0.0012. (I'm not using the Arduino PID library on purpose; I just want to get a PID of my own implemented here.) Check this video if you want to see how it works: http://www.youtube.com/watch?v=LpsNBL8ydBA&feature=youtu.be [Yes, the setup is pretty ancient! I agree. :)] Please let me know what I could possibly do to improve stability at this stage.

@Ian: Of the many tests I did with my setup, I plotted graphs for some of them using the readings from the serial monitor. Here is a sample reading of roll vs. 'Motor1 & Motor2 PWM input' (the two motors controlling the roll). As for the input/output: Input: roll and pitch values (in degrees), obtained from a combination of accelerometer + gyro. Output: PWM values for the motors, delivered using the Servo library's motor.write() function.

Resolution
I resolved the problem. Here's how:
1. The crux of the issue lay in the way I implemented the Arduino program. I was using the write() function to update the servo angles, which happens to accept only integer steps in its argument (or at least responds only to integer input - 100 and 100.2 produce the same result). I changed it to writeMicroseconds() and that made the copter considerably steadier.
2. I was adding RPM on one motor while keeping the other at a steady value. I changed this to increase RPM on one motor while decreasing it on the opposing motor. That keeps the total horizontal thrust roughly unchanged, which might help me when I'm trying to get vertical altitude hold on this thing. (A small sketch of this mixing arithmetic is included at the end of this post.)
3. I was pushing the RPM up to the maximum limit, which is why the quadcopter kept losing control at full throttle. There was no room for the RPM to increase when it sensed a tilt.
4. I observed that one of the motors was inherently weaker than the other one; I do not know why. I hardcoded an offset into that motor's PWM input.
Thanks for all the support.
Source code: if you're interested, here's the source code of my bare-bones PID implementation: PID Source Code. Please feel free to test it on your hardware.
Any contributions to the project would be welcome.
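To make points 1 and 2 of the resolution concrete, here is the mixing/clamping arithmetic I ended up with, shown in Python just for illustration; on the board it lives in the Arduino loop and ends in Servo.writeMicroseconds(). The 1100-1900 microsecond clamp and the offset for the weak motor are assumptions specific to my ESCs and motors:

def mix(base_us, correction_us, weak_motor_offset_us=60):
    # differential thrust: one motor goes up by the PID correction, the other goes down
    m1 = base_us + correction_us + weak_motor_offset_us   # the inherently weaker motor
    m2 = base_us - correction_us                          # the opposing motor
    clamp = lambda v: max(1100, min(1900, v))             # leave headroom below full throttle
    return clamp(m1), clamp(m2)

print(mix(1500, 35))   # e.g. the PID asked for +35 us of differential thrust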
Given a 12' x 12' field (about 4 m x 4 m), a reasonably cheap 3-axis gyro and accelerometer, and a compass, I plan to design a device capable of tracking its position to sub-centimeter accuracy for a minute of motion or so. The device has a holonomic drive system, capable of moving in any direction at a maximum of about 8 mph (3.6 m/s), with a maximum acceleration of about 2 g. However, there are some simplifying constraints. For one, the field is nearly flat. The floor is made of a tough foam, so there is slight sinking, but the floor is flat except for a ramp of known angle (to within a few degrees). The device will, barring collisions, not rise above the floor. Accuracy is preferred over simplicity, so any mathematics required on the software side to improve the system would be welcome. Before I definitively choose accelerometers as the method of position tracking, though, I would like some idea of how much accuracy I could get, and the best ways of achieving it.
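As a rough illustration of the scale of the problem (and of why I am asking), here is the back-of-the-envelope calculation I keep coming back to: a constant residual accelerometer bias b integrates into 0.5·b·t² of position error. The 0.05 m/s² figure below is only my guess for a cheap MEMS part after calibration:

bias = 0.05                      # m/s^2, assumed residual bias after calibration
for t in (1, 5, 10, 30, 60):     # seconds of dead reckoning
    print(t, 0.5 * bias * t**2)  # accumulated position error in metres

Even under that optimistic assumption the error passes a centimetre within a couple of seconds, which is why I am asking about the best ways of doing this rather than relying on the accelerometer alone.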
I am a Computer Science student entering my last year of college. I'm pretty sure robotics is what I want to be doing eventually, based on my interests in AI and embedded systems. I've seen a lot of topics that cover robotics, such as control theory, signal processing, kinematics, dynamics, 3D simulators, physics engines, AI, and big data with machine learning. I'm hoping someone can point me in the right direction as to what I should be studying given my interest in robotics; I am not sure what other relevant topics I have missed. I would like to work on the software side of robotics, both AI and non-AI. My other question is about machine learning. I've seen researchers applying machine learning (deep learning/unsupervised learning specifically) to robotics, but how do they do this? Is information and data transferred from the robot's internals to an external computer that does the data processing? Machine learning requires a lot of data to make predictions. Is this the only way machine learning can be used in robotics (through an external computer)? I hope someone can touch on some of the things I've mentioned. Thank you.
I need an app that can do live monitoring of whether each seat in an auditorium is occupied, so visitors can load the app and see where to sit. The auditorium has a relatively flat ceiling 4 m high, and the seats are 0.5 m wide. The hardware cost per seat needs to be $5. I'm looking for all solutions: web cams, pressure sensors, sonars, lasers, Arduino, Pi, Intel Edison, anything. Obviously there cannot be wires that people could trip over. Sensors on the ceiling could have wired networking; sensors on the seat or floor would need wireless communication. Sensors on the ceiling would need to deal with occlusion by people sitting in the seats (think: if there is an empty spot between 2 people, can the sensor see that it is empty?). In the end, the data needs to be collected as a simple list of which chairs are occupied/open.
Possible solutions:
- Raspberry Pis on the ceiling, one per 8 seats, with a camera
- pressure sensors under the chair legs wired to a Pi's GPIO
- drones flying around the auditorium :)
Any ideas?
Update (more constraints):
- auditorium size is 400 seats
- installation should average 10 chairs per hour (400/10 = 40 hours)
- as the picture shows, the chairs are cushioned
- regular maintenance should take no longer than 30 min per 2-hour event (e.g. batteries)
- hardware should last 100 sessions
- for auditorium cleaning, it should be possible to "disconnect" and "reconnect" the chairs with 4 hours of labor
I'm trying to get an extended Kalman filter to work. My system model is: $ x = \begin{bmatrix} lat \\ long \\ \theta \end{bmatrix}$ where lat and long are latitude and longitude (in degrees) and $\theta$ is the current orientation of my vehicle (also in degrees). In my prediction step I get a reading for the current speed $v$, yaw rate $\omega$ and inclination angle $\alpha$: $z = \begin{bmatrix} v \\ \alpha \\ \omega \end{bmatrix}$ I use the standard EKF prediction with $f()$ being: $ \vec{f}(\vec{x}_{u,t}, \vec{z}_t) = \vec{x}_{u,t} + \begin{bmatrix} \frac{v}{f} \cdot \cos(\theta) \cdot \cos(\alpha) \cdot \frac{180°}{\pi R_0} \\ \frac{v}{f} \cdot \sin(\theta) \cdot \cos(\alpha) \cdot \frac{180°}{\pi R_0} \cdot \frac{1}{\cos(lat)} \\ \frac{\omega}{f} \end{bmatrix} $ with $f$ being the prediction frequency and $R_0$ the radius of the earth (modelling the earth as a sphere). My Jacobian matrix looks like this: $ C = v \cdot \Delta t \cdot \cos(\alpha) \cdot \frac{180}{\pi R_0} $ $ F_J = \begin{pmatrix} 1 & 0 & -C \cdot \sin(\phi) \cdot \frac{1}{\cos(lat)} \\ -C \cdot \sin(\phi) \cdot \frac{\sin(lat)}{{\cos(lat)}^2} & 1 & C \cdot \cos(\phi) \cdot \frac{1}{\cos(lat)} \\ 0 & 0 & 1 \end{pmatrix} $ As I have a far higher frequency on my sensors for the prediction step, I run about 10 predictions followed by one update. In the update step I get a reading for the current GPS position and calculate an orientation from the current GPS position and the previous one. Thus my update step is just the standard EKF update with $h(x) = x$, so the Jacobian matrix of $h()$, $H$, is the identity. Trying my implementation on test data where the GPS track heads constantly north and the yaw rate constantly turns west, I expect the filter to correct my position close to the track and the orientation to 355 degrees or so. What actually happens can be seen in the attached image (red: GPS position measurements, green/blue: predicted positions): I have no idea what to do about this. I'm not very experienced with the Kalman filter, so it might just be me misunderstanding something, but nothing I tried seemed to work… What I think: I poked around a bit. If I set the Jacobian matrix in the prediction to the identity, it works really well. The problem seems to be that $P$ (the covariance matrix of the system model) is not zero in $P(3,1)$ and $P(3,2)$. My interpretation would be that in the prediction step the orientation depends on the position, which does not seem to make sense. This is due to $F_J(2,1)$ not being zero, which in turn makes sense. Can anyone give me a hint where the overcorrection may come from, or what I should look at / google for?
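In case it helps to see how I am debugging this: below is a small, stand-alone sketch of how I am checking the analytic Jacobian against a finite-difference Jacobian of $f()$. The pose, speed, prediction frequency, and $R_0$ values are placeholders, and eps is deliberately not tiny because the per-step position increments are very small:

import numpy as np

R0, freq = 6371000.0, 10.0   # earth radius [m] and prediction frequency [Hz] (placeholders)

def f_predict(x, v, alpha, omega):
    lat, lon, theta = x
    # np.radians because lat/theta/alpha are stored in degrees
    scale = (v / freq) * np.cos(np.radians(alpha)) * 180.0 / (np.pi * R0)
    dlat = scale * np.cos(np.radians(theta))
    dlon = scale * np.sin(np.radians(theta)) / np.cos(np.radians(lat))
    return np.array([lat + dlat, lon + dlon, theta + omega / freq])

def numeric_jacobian(x, v, alpha, omega, eps=1e-3):
    J = np.zeros((3, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        J[:, i] = (f_predict(x + d, v, alpha, omega) - f_predict(x - d, v, alpha, omega)) / (2 * eps)
    return J

x = np.array([48.0, 11.0, 10.0])            # lat, long, heading in degrees (made up)
print(numeric_jacobian(x, 5.0, 2.0, 1.0))   # compare element-wise with the analytic F_J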
I am using a miniature car and I want to estimate its position. We cannot use GPS modules, and most of the tracking systems I have seen use an IMU sensor together with a GPS module. In our car we are able to find our exact location with image processing, but in some areas that don't have enough markings we cannot do this. So we want to use the IMU as a backup for our positioning; as long as the estimated position stays close, that is good enough for us. We are only interested in our 2D position, since the car is on flat ground. I am using a 9-DOF IMU sensor and I want to calculate my movement. I have seen some amazing work using IMUs for tracking body movements, but there is no code or simple explanation anywhere about how it is done. So basically I have the readings from the accelerometer, gyro, and magnetometer. I also have the orientation as a quaternion. The device also outputs linear acceleration, but even when I am not moving it in any direction the values are not 0, which is really confusing. Can you please help me figure out how to approach this? Thanks in advance.
I want to build a quadcopter for my final-year project and I am planning to use DC motors for the four rotors. Can anyone guide me on the ratings needed for proper motor selection for this job?
I have a 4-wheeled differential drive robot, like the Pioneer 3-AT. There are only two motors, one for the left wheels and one for the right wheels. I want to send velocity commands to the robot. I'm using ROS, and the standard command is [linear_velocity, angular_velocity]. I need to convert this into left and right velocities; from the literature, if I had 2 wheels I should do this: $v_l = v_{linear} - \omega \cdot |r|$ and $v_r = v_{linear} + \omega \cdot |r|$, where $|r|$ is the absolute value of the distance from the wheels to the robot "center". How should I take into account that I have 4 wheels?
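For concreteness, this is the conversion I currently have in mind: treat the 4-wheel skid-steer base as if it were a 2-wheel differential drive and send the same speed to both wheels on a side, since each side shares one motor. The 0.4 m track width is a placeholder, not the Pioneer's real value:

def twist_to_wheels(linear, angular, track_width=0.4):
    # half the left-right wheel separation plays the role of |r| in the formulas above
    half = track_width / 2.0
    v_left = linear - angular * half    # command for the single left-side motor [m/s]
    v_right = linear + angular * half   # command for the single right-side motor [m/s]
    return v_left, v_right

print(twist_to_wheels(0.5, 0.2))   # e.g. 0.5 m/s forward while turning at 0.2 rad/s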
I'm simulating a sensor in 3D. The sensor should determine ($p, \theta, \phi$) from the origin, where $\theta$ is the rotation about the z-axis and $\phi$ is the rotation about the x-axis. The sensor is given the position of a point ($x, y, z$). This is what I did:

p = sqrt(x^2 + y^2 + z^2);
theta = acos(z/p);    <---- I'm guessing the problem is here
phi = atan2(y,x);

Now I need to get the Cartesian coordinates ($x',y',z'$). This is what I did:

[p theta phi] = getmeasurement(x, y, z);
x' = p*cos(theta)*sin(phi);
y' = p*sin(theta)*sin(phi);
z' = p*cos(phi);

The sensor works fine at the beginning, but at a particular point it behaves strangely. I have the state vector to compare it with the measurement. I'm guessing that $\theta$ might be the problem. Edit: I'm sorry for this mistake. The aforementioned calculations are based on the following picture. So the point rotates first about the z-axis ($\theta$) and then about the x-axis ($\phi$).
I have a quadcopter equipped with a PX4FMU board. You may download its datasheet from HERE. I wonder whether it is possible to program the quadcopter to autonomously follow a path, such as a circular trajectory, without any human intervention. Are the built-in sensors enough for this task? I also wonder how accurate the built-in GPS is; I read that it reports coordinates with an error radius of about 5 m.
My question is very broad. However, I would like a complete description, down to the very last detail, in a way that a foreign exchange student would understand. I want to try my best to master the way the Kalman filter works. Please be as thorough as you possibly can, and more.
I am an aerospace engineer (currently in grad school) and I really want to get into (embedded) electronics. But I have this problem: I understand the theory fairly well (I took an edX course in circuits and had no problem) and I can build projects from the internet. However, I have a very hard time connecting the theory with the practical part, understanding why projects are done the way they are done, and designing my own projects! Please help! I'd appreciate the following:
- General tips: how did you learn it? What is your workflow? What should I do? Which steps should I take?
- Books: which hands-on books and websites can you recommend? I am looking for books and websites that are practical but also explain the why.
- Kits: what kits can you recommend that combine the theory with the practical?
- Anything else you think is important.
Thank you for your time!
I'm implementing Monte Carlo localization for my robot, which is given a map of the environment and its starting location and orientation. My approach is as follows: uniformly create 500 particles around the given position, then at each step:
- motion-update all the particles with odometry (my current approach is newX = oldX + odometryX * (1 + standardGaussianRandom), etc.)
- assign a weight to each particle using the sonar data (the formula is, for each sensor, probability *= gaussianPDF(realReading), where the Gaussian has mean predictedReading)
- return the particle with the biggest probability as the location at this step
- then 9/10 of the new particles are resampled from the old ones according to the weights, and 1/10 are sampled uniformly around the predicted position
Now, I wrote a simulator for the robot's environment, and here is how this localization behaves: http://www.youtube.com/watch?v=q7q3cqktwZI I'm worried that over a longer period of time the robot may get lost. If I spread particles over a wider area, the robot gets lost even more easily. I expect better performance. Any advice?
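In case it clarifies what I mean by the weighting and resampling steps, here is a stripped-down sketch of them. gaussian() is the sonar noise model, predict_reading is passed in as a stand-in for my ray-casting against the map, and the systematic (low-variance) resampler shown here is just one common way of resampling "according to weights":

import math
import random

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def weight(particle, sonar_readings, predict_reading, sigma=5.0):
    # predict_reading(particle, sensor_id) is a placeholder for ray-casting into the map
    w = 1.0
    for sensor_id, real in sonar_readings:
        w *= gaussian(real, predict_reading(particle, sensor_id), sigma)
    return w

def systematic_resample(particles, weights):
    # low-variance resampling: one random offset, then equally spaced selection points
    n = len(particles)
    step = sum(weights) / n
    r = random.uniform(0.0, step)
    out, cumulative, i = [], weights[0], 0
    for m in range(n):
        u = r + m * step
        while u > cumulative:
            i += 1
            cumulative += weights[i]
        out.append(particles[i])
    return out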
As far as I can tell, an ultrasonic rangefinder works by reflecting inaudible soundwaves off of objects and timing their return. But if the object has a flat surface and is angled with respect to the line to the rangefinder, how does it detect that object? Under what circumstances might it give a false distance or otherwise fail to detect the object?
This is actually a very simple question, but I'm lost at the moment. I am using a BeagleBone Black for a school project. It controls a bunch of motors, actuators, etc. We wrote everything in C++ and made libraries of functions. When a main program calls them, the functions run just fine. Recently we were told to demo our progress so far. The main program is nowhere near done, so we were thinking of some sort of web interface that can execute the compiled C++ program on command. We were hoping to host the server on the board and access it via LAN from other PCs. But I've never done this before and have no idea where to start. Is node.js (with the 'bonescript' library) going to be of any help? Or is there a simpler way with basic HTML? I only have a few days to figure it out, so I don't want to waste time looking at the wrong methods.
I would like to use a KUKA/ABB 6-axis robot and a machine vision system to pick and place a variety of metal drill bits, ranging in size from 0.5 mm (ascending in 0.5 mm steps per cylinder) to 13 mm in metric, and then from 1/16 of an inch up to 9/64 of an inch. The machine would not have to differentiate between the bits, and no drill bit would weigh more than 1 kg. Crucially, at the beginning and end of the picking and placing I would like the very tip of each bit to be inspected for a 118-degree chamfer on one end, which should be present regardless of drill diameter or length. I am led to believe that if the drill bits are placed tip-up on a conveyor belt and always in the same place, this is relatively low cost, but that if the 6-axis robot has to find the drill bits itself, the cost increases dramatically. Is this true?
I want to make a copy of this machine, the Fisher Price Soothing Motions™ Glider, and I'm wondering what motor to use: a simple DC motor with an appropriate gearbox (slow RPM), or a stepper motor? Here is another instance of this idea.
I have very limited experience with sensors or robotic components at all, and I hope you will excuse the lack of detail in this question. I want to set up posts around my yard with electronic noses that detect dog urine. I want to use this information to make a map of my yard from a dog's perspective. Is it possible with today's technology? What would it cost? There may be information that is very relevant to me but that I'm not requesting, because I lack insight into the field. If there is something you think I should consider or research, please say so.
I am working on a line-follower robot as part of my microelectronics project, and I am confused about what sort of code to use to program the PIC18F microcontroller I'm using. Can someone give me source code, or a layout of the code and what should be in it?
Hello, robotics enthusiasts! I'm a member of a team which has to develop a mobile rescue robot to cooperate with firemen (e.g. on earthquake sites). The problem we have is the connection between a commander post and the robot. The robot has to enter buildings, so it is desirable that the connection can go through several decimeters of walls and have a range of 50-100 meters. On the other hand, we need to send a lot of data (camera images, point clouds, maps) which can easily eat 10 Mbps or more. At this time we use 2.4 GHz WiFi for this connection. As for the speed, with direct visibility it seems to be sufficient, but only when a single robot is operating (we can use up to 3 non-overlapping channels, so in theory 3 robots can work together; but usually the environment is messed up with home routers). We need at least 5 robots operating simultaneously. We have tried 5 GHz WiFi, but it has problems penetrating walls, so it can only be used for UAVs. My idea was to use a mobile connection technology like LTE. I found that LTE can run on 800 MHz, which could be great for wall penetration. I also found that LTE's theoretical upload speeds (for clients) are 70 Mbps, but nobody says whether that is on 2.6 GHz or how it would change when running LTE on 800 MHz. Moreover, we cannot rely on some provider's coverage. I have found that you can build your own LTE transmitter for about €2000, which seems interesting to us; maybe it is possible to build one even cheaper. But we think both 2.6 GHz and 800 MHz are regulated frequencies. However, the cooperation with firefighters could persuade local regulators to grant us an exception to set up our own small LTE base station. And now to the question: do you think such a setup would give better results than using WiFi? Or do you know of any other technologies that would help us either increase the bandwidth or improve the wall penetration? What are their pros and cons?
I am a beginner in robotics, and I am learning about the Kalman filter. I do not seem to get it, though. I am a mathematician, so it would be helpful if the Kalman filter could be explained in mathematical terms.
I have a steam radiator at home and it has a valve similar to the one in the picture below. Please note that the valve doesn't have grooves on top to attach things to. I want to build something to turn it on and off depending on the temperature at certain points in the room. I have that part taken care of, but I cannot find a way to attach an actuator (actuator is the right word in this context, I guess?) to turn the valve in both directions. Also, it is a rented apartment, so I would like to avoid making any modifications to the radiator itself.
I have zero experience with robotics, but I need to build a mobile platform for a streaming camera. The idea is that I'll plug in my Android phone into the pan/tilt unit on my wheeled robot and then drive and look around via WiFi. I have already solved all of the software, interface and controller issues, but I would appreciate some advice on how to build the wheeled platform. My initial idea was to buy a cheap RC car, remove all electronics and replace them with my own. This approach almost worked. I purchased this New Bright F-150 Truck. The size is good and there is plenty of storage space: However, I quickly ran into a problem with this thing. I assumed that the front wheel would be turned by some kind of servo. Instead I found this nonsense: That small gear shaft is not driven by a servo - it's a conventional motor, which spins until it is jammed at the extremes of travel. The wheels are straightened when power is removed by a small spring on the other side. This means that there is only one angle at which the wheels can be turned, and that angle is way too small for what I need. So using this RC car will not work. Before I start buying more things, I would like to hear some opinions from more experienced people. Am I on the right track? Do I simply need to get a better RC car, or are they all designed like this? Perhaps there are other options that would be more suitable for what I am doing?
I'm struggling with the concept of the covariance matrix. $$ \Sigma = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{x \theta} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{y \theta} \\ \sigma_{\theta x} & \sigma_{\theta y} & \sigma_{\theta \theta} \\ \end{bmatrix} $$ Now, my understanding of $\sigma_{xx}$, $\sigma_{yy}$, and $\sigma_{\theta \theta}$ is that they describe the uncertainty; for example, $\sigma_{xx}$ describes the uncertainty in the value of x. My question is about the remaining sigmas: what do they represent? What does it mean if they are zero? I can see that if $\sigma_{xx}$ is zero, it means I don't have any uncertainty about the value of x. Note, I'm reading Principles of Robot Motion - Theory, Algorithms, and Implementations by Howie Choset et al., which states that: By this definition $\sigma_{ii}$ is the same as $\sigma_{i}^{2}$ the variance of $X_{i}$. For $i ≠ j$, if $\sigma_{ij} = 0$, then $X_{i}$ and $X_{j}$ are independent of each other. This may answer my question for the case where the remaining sigmas are zero; however, I'm still confused about the relationship between these variables, for example $x$ and $y$. When does correlation between them happen? Or, in other words, can I simply assume them to be zero? Another book, namely FastSLAM: A Scalable Method ... by Michael Montemerlo and Sebastian Thrun, states: The off-diagonal elements of the covariance matrix of this multivariate Gaussian encode the correlations between pairs of state variables. They don't mention when the correlation might arise or what it means.
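Here is a tiny numerical illustration of the kind of situation I am asking about: below, the x and y errors are deliberately generated with a shared error source (think of a single disturbance that pushes the robot off in both x and y together), while the heading error is independent. The specific numbers are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(0.0, 1.0, 10000)        # shared error source
x = common + rng.normal(0.0, 0.3, 10000)    # x error mostly follows it
y = common + rng.normal(0.0, 0.3, 10000)    # so does y -> x and y are correlated
theta = rng.normal(0.0, 0.1, 10000)         # independent heading error

# The (x, y) off-diagonal term comes out clearly non-zero, while the
# (x, theta) and (y, theta) terms are close to zero.
print(np.cov(np.vstack([x, y, theta])))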
I'm trying to make a quadcopter move laterally at a certain angle. I've been able to find the proper roll and pitch angles for this (that work with a yaw of 0°); how would I adjust these values to compensate for a different yaw?
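For reference, here is the kind of adjustment I have been experimenting with: treating the yaw-0 roll/pitch pair as a tilt command in the world frame and rotating it by the yaw angle into the body frame. This is only a small-angle sketch, and the sign conventions are assumptions that may need flipping depending on the flight controller:

import math

def compensate_for_yaw(pitch_0, roll_0, yaw):
    # rotate the yaw = 0 tilt command from the world frame into the body frame
    c, s = math.cos(yaw), math.sin(yaw)
    pitch = c * pitch_0 + s * roll_0
    roll = -s * pitch_0 + c * roll_0
    return pitch, roll

# a pure pitch command becomes a pure roll command at 90 degrees of yaw
print(compensate_for_yaw(5.0, 0.0, math.radians(90)))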
From each step of my vision code I am able to get around 400 coordinates of where the robot thinks the walls are. I want to integrate this into the Monte Carlo observation step. I'm storing the map of the maze as a set of line segments. What would be a nice way to implement the sensor update, i.e. given the position (x, y) of the robot, what is the probability that it is there, given the wall coordinates described above? The main idea I currently have: transform the points into polar coordinates; then, for each point from the vision output, cast a ray at that angle and find its first intersection with the maze. Now we have the predicted distance and the real distance, and we can compute the probability that this measurement is right. The main drawback is that this is slow: for each point from the vision output I have to iterate over all line segments to find the closest intersection. There are around 50 line segments, so the cost is O(400 · 50 · number of particles).
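To make the expensive inner loop concrete, here is a small sketch of the ray-versus-segment test and the brute-force minimum I am describing (plain 2D cross products; the ray direction (dx, dy) is assumed to be a unit vector so that t is a distance):

def cross(ax, ay, bx, by):
    return ax * by - ay * bx

def ray_segment(ox, oy, dx, dy, x1, y1, x2, y2):
    # solve (ox, oy) + t*(dx, dy) = (x1, y1) + s*(x2 - x1, y2 - y1)
    ex, ey = x2 - x1, y2 - y1
    denom = cross(dx, dy, ex, ey)
    if abs(denom) < 1e-12:              # ray parallel to the wall
        return None
    t = cross(x1 - ox, y1 - oy, ex, ey) / denom
    s = cross(x1 - ox, y1 - oy, dx, dy) / denom
    return t if t >= 0 and 0 <= s <= 1 else None

def predicted_distance(ox, oy, dx, dy, segments):
    # brute force over all wall segments; this is the O(#segments) part
    best = float("inf")
    for x1, y1, x2, y2 in segments:
        t = ray_segment(ox, oy, dx, dy, x1, y1, x2, y2)
        if t is not None and t < best:
            best = t
    return best

print(predicted_distance(0.0, 0.0, 1.0, 0.0, [(2.0, -1.0, 2.0, 1.0)]))  # -> 2.0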
I want to capture two views of the same scene. The scene consists of a set of objects on a table. From the two views, I wish to calculate a homography for image matching. I want to know the maximum angle between the two views for which the homography can still be accurately calculated. Right now, I am capturing the images with roughly 60 degrees between the views, but I am unable to compute the homography accurately.
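For reference, this is roughly the pipeline I am using to estimate the homography. The detector choice here (ORB features with brute-force matching and RANSAC) is mine and probably not essential to the question, and the filenames are placeholders:

import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)
print(int(mask.sum()), "inliers")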