I am going to start a new project consisting of implementing an autonomous driving RC car. The car, as it is now, has a camera installed on each side, i.e. 4 cameras in total. They are connected to a board which is able to read and process the video input. I have been researching obstacle detection using a single camera (without stereo cameras, e.g. Single camera vision and mapping system) and although it seems possible it also seems quite complex. Modifying the camera set-up is not an option. I already have some video processing algorithms, like dense optical flow, which might help me, but I am not sure whether I would be able to implement the system in the time I have (4 months). I also don't know how reliable the final solution would be. If the first approach is not feasible, as an alternative I could also install distance sensors on the car to detect obstacles. It seems that the most common choice is ultrasonic sensors. I would need to install them and I would not take advantage of the cameras, but it seems that the final complexity would be lower. Is the first approach feasible? What are the pros and cons of each approach? If I implemented the second option, how many sensors would I need?
I have a project where I need a motor that can turn some number of rotations which will spool up a cable attached to a spring-closed device to open it up. When power is disconnected, the spring closure will cause the spool to unwind and the device to close. In the closed position, no power is available. (i.e. The closure mechanism needs to be 100% passive.) In order to keep this open for some time, I will need a motor that is capable of being stalled for long periods without having the windings burn up. I know some motors can do this, such as the motors they use on spring closed HVAC dampers, but I don't know how to find them or if there's a particular name I should be using to find them. I know I could probably do this with a stepper motor, but that seems overkill for the application. The only requirements are higher torque to open this mechanism, no gearing that prevents the motor from spinning when power is disconnected, and the ability to be stalled.
We implemented a PID controller for our quadcopter which allows us to fly from point A to B. The precise position of the quadcopter is measured using an external tracking system. Now that we can fly from A to B, we would like to implement a controller to fly more complex trajectories with multiple set points, e.g. from A to B to C, or flying in a circle using sample points. We tried to use our regular PID controller for this, but this of course doesn't work well since the PID controller forces the quadcopter to stabilize at every set point. We would like to have a controller that allows the quadcopter to fly a trajectory fairly smoothly. I think this has to be a controller that takes into account multiple set points in the trajectory at the same time, so that it can already slow down/speed up based on the trajectory that is ahead. Can someone point me to some controllers/algorithms that I can look at to realize this? Do I need a completely different controller to do this, or will it be an adapted version of the PID controller that I have now?
In order to identify the dynamics of my DC motor, I am trying to command it with Xcos using the Arduino toolbox. The problem that I am facing is how to give the motor an input command such that I get a given angular position as output. I can only control the input voltage to the motor via PWM. I have been thinking about converting the angle to a voltage, but I can't figure it out. Can somebody help me?
Apologies if this is a stupid question, but if I have a 3-axis magnetometer, and I calculate the vector magnitude as sqrt(magX * magX + magY * magY + magZ * magZ) ...then should I not always get the same value, regardless of the sensor's orientation? Mine is all over the place, and I feel as though I'm missing something obvious.
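For reference, with a calibrated sensor the magnitude should indeed stay roughly constant at the local geomagnetic field strength, so one quick sanity check is to log samples while slowly rotating the sensor and compare the spread of magnitudes before and after subtracting hard-iron offsets. The sample values and offsets in the sketch below are placeholders, not data from the question:

    import math

    # Placeholder samples in uT; in practice these would be logged while slowly
    # rotating the magnetometer through many orientations.
    samples = [(22.1, -5.3, 41.0), (30.5, 10.2, 30.9), (-12.4, 25.0, 35.5)]

    # Hypothetical hard-iron offsets; a real calibration would estimate these,
    # e.g. as the midpoints of the per-axis min/max over a full rotation.
    offset = (4.0, -2.5, 1.0)

    def magnitude(x, y, z):
        return math.sqrt(x * x + y * y + z * z)

    for (x, y, z) in samples:
        raw = magnitude(x, y, z)
        corrected = magnitude(x - offset[0], y - offset[1], z - offset[2])
        print("raw: %.1f uT  corrected: %.1f uT" % (raw, corrected))

    # With a well-calibrated sensor the corrected magnitudes should cluster
    # around the local geomagnetic field strength (roughly 25-65 uT).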
I am designing a pan-tilt camera mount using standard hobby servos. Many existing designs use the servo shaft as a revolute joint, as opposed to simply a torque producing element. As a revolute joint the servo mechanism is subject to different torques and forces. Is using a servo shaft as a revolute joint recommended practice or should a bearing be used?
I am a beginner in robotics. I enrolled in Electronics Engineering a year ago, as we don't have a dedicated Robotics Engineering branch in my country. Now I am struggling with questions like: what is the scope of Electronics (not Electrical) Engineering in robotics/automation? I am unable to distinguish between the role of an electronics engineer and a computer engineer in robotics, since programming is required in both cases. Also, if I don't like programming (coding), are there any other options for staying in the robotics/automation field as far as my branch (Electronics Engineering) is concerned?
I'm not entirely sure if this is the right area to post this question, but looking at the other subjects on StackExchange, this seems to be the best fit. I am a complete beginner to hydraulic systems, and I've wanted to learn more about this area. I'm designing a hydraulic system that uses pistons to push/pull objects. I have looked at what the basic requirements are for a hydraulic system, but there is one thing that escapes me. I come from an electronics background, and I noticed that hydraulic pumps (for example, this one) seem to lack a motor to drive the fluid. Am I wrong? If not, I've been looking everywhere for a motor that can/should be attached to said pump, but I cannot seem to find anywhere that sells them. Is it just a simple DC motor (with the correct specs), or should there be a specific motor designed for hydraulic pumps? Looking around, I came across this, but looking through the specs, I don't see a power requirement, and being used to seeing power consumption in datasheets, I'm not even sure it is a motor!
I will be using at least one programmable real-time unit (PRU) to send pulses to a stepper motor driver, but before I begin I am trying to lay out the structure of my programs. I am using this library, PRU Linux API, for loading assembly code into the PRU instruction memory, but there doesn't seem to be much documentation other than what's on that wiki and in the source: github-pru-packageh. My C program will be calculating the position of the sun using an algorithm and executing the assembly/writing a pulse count to the PRU(s) data memory so they can just switch a GPIO on/off at my desired frequency and for the number of pulses required to turn the stepper the appropriate number of steps. I am not even sure if this is an acceptable method, but I am pretty new at this and it seems like a simple way to accomplish my task. My questions regarding the library functions are: Is there a significant performance difference between using prussdrv_map_prumem or prussdrv_pru_write_memory to give the PRU(s) access to the pulse count? Would it be better to halt the PRU assembly program after it has completed the tasks for each pulse count and then re-execute it with new values, or to keep the PRU program running and poll for a new pulse count to be written in? I plan to send a pulse count every 10 seconds or so. Any suggestions on revisiting the whole structure and logic are welcome as well.
For example, I have a brushless outrunner with 14 poles and 12 stator windings. It has three connectors. Can this motor be controlled in a way that it performs a single step of 30 degrees (360/12)?
I know some languages like PHP, C/C++ and Java, but I'm not an expert in these languages. I want to create an artificially intelligent robot that can do these tasks: communicate with a computer (USB, Bluetooth or other); perform some specific tasks; present a visual interface (finding a path, speed and others); access its microcontroller and attached devices; and so on (editor note: solve world hunger?). Can anyone please suggest which programming language would be good for programming this type of robot? I have heard about C/C++, Assembler, ROBOTC and LabVIEW, but I am unable to decide which language to use for my project. Sorry for my bad English!
I'm trying to understand how an electronic musical instrument (called an e-chanter) works (imagine a recorder or other wind instrument, but with the holes replaced by metal contacts, and the sound played electronically, so no air is needed). Basically, there are several metal contacts, as shown in this link: http://www.echanter.com/home/howto-build#TOC-WIRES-SCREW-SENSORS They each appear to be wired directly to just one pin of the Arduino. I can't figure out for the life of me how this works. Can anyone explain it? Are the fingers being used as some kind of ground, or what on earth is going on? I have a physics background so I can understand some technical information, but I just can't fathom how this magic works. Thank you very much.
I am trying to derive the analytical Jacobian for a system that is essentially the equations of motion of a body (6 degrees of freedom) with gyro and accelerometer measurements. This is part of an Extended Kalman Filter. The system state is given by: $ \mathbf{x} = \left( \begin{array}{c} \mathbf{q}\\ \mathbf{b_\omega}\\ \mathbf{v}\\ \mathbf{b_a}\\ \mathbf{p}\\ \end{array} \right) $ where $q$ is the quaternion orientation of the body expressed in the global frame, $b_\omega$ and $b_a$ are the biases in the gyro and accelerometer respectively (expressed in the body frame) and $v$ and $p$ are the velocity and position of the body expressed in the global frame. All vectors are [3x1] except $q$ which is [4x1] in $[w,x,y,z]^\top$ format, and $R$ (below) which is [3x3]. The equations of motion $\frac{dx}{dt}=\dot{x}$ (t is time) are: $$ \dot{\mathbf{q}} = \frac{1}{2}\mathbf{q} \otimes \left( \begin{array}{c} 0\\ \hat{\omega}\\ \end{array} \right) \\ \dot{\mathbf{b_\omega}} = 0 \\ \dot{\mathbf{v}} = R^\top (\hat{\mathbf{a}} + [\hat{\mathbf{\omega}}\times]R \mathbf{v})+ g \\ \dot{\mathbf{b_a}} = 0 \\ \dot{\mathbf{p}} = \mathbf{v} $$ Second-order terms are ignored. $\hat{a} = a - b_a$ and $\hat{\omega} = \omega - b_\omega$ are the bias-corrected accelerometer and gyro measurements ($a$ and $\omega$ are known) and are expressed in the body frame. $R$ is the rotation matrix (DCM) formed from $q$ and $g$ is the gravity vector $[0,0,9.81]^\top$. These equations have been validated against an aerospace engineering software library. I need the Jacobian $F = \frac{d\dot{x}}{dx}$ but I cannot find this Jacobian in any texts (I do find the error-state Jacobian, e.g. in this paper). I am struggling with doing this myself because I don't know how to handle the quaternion norm constraints. I am also concerned about the validity of a solution obtained through numerical differentiation. Any help or explanation would be greatly appreciated. This is going towards an open-source robot localisation project.
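For what it's worth, the rows of $F$ coming from the quaternion kinematics can be written in closed form, because $\mathbf{q} \otimes [0,\hat{\omega}]^\top$ is linear in each argument. A sketch of those two blocks, under the stated $[w,x,y,z]^\top$ convention (the blocks for $\dot{\mathbf{v}}$ and $\dot{\mathbf{p}}$ follow the same pattern but are not written out here):
$$
\frac{\partial \dot{\mathbf{q}}}{\partial \mathbf{q}} = \frac{1}{2}
\begin{pmatrix}
0 & -\hat{\omega}_x & -\hat{\omega}_y & -\hat{\omega}_z \\
\hat{\omega}_x & 0 & \hat{\omega}_z & -\hat{\omega}_y \\
\hat{\omega}_y & -\hat{\omega}_z & 0 & \hat{\omega}_x \\
\hat{\omega}_z & \hat{\omega}_y & -\hat{\omega}_x & 0
\end{pmatrix},
\qquad
\frac{\partial \dot{\mathbf{q}}}{\partial \mathbf{b}_\omega} = -\frac{1}{2}
\begin{pmatrix}
-q_x & -q_y & -q_z \\
q_w & -q_z & q_y \\
q_z & q_w & -q_x \\
-q_y & q_x & q_w
\end{pmatrix}.
$$
The unit-norm constraint is not enforced by these derivatives themselves; it is commonly handled by renormalizing $\mathbf{q}$ after each update or by moving to an error-state (minimal) attitude parameterization, which is one reason most texts only give the error-state Jacobian.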
I am a complete newbie and recently joined a robot team at my school in order to gain some experience. I have been assigned a task of driving a servo using a Pololu Mini Maestro USB Servo Controller. I am using the BeagleBone Black (BBB) with the Python adafruit library. How do I make the BBB communicate with the Servo Controller? If you guys could point me in the right direction, I'd really appreciate that. Right now, I don't even know where to start. Not sure if it matters, but this is the servo I am using: https://www.pololu.com/product/1053
In my project, I've successfully analyzed the arena and have detected the obstacles using an overhead webcam. I have also computed the shortest path. The path data is transmitted to the robot via ZigBee, based on which it moves to its destination. The problem is: my robot is not taking accurate turns, which causes errors in the rest of the path it follows. Could anyone please suggest any methods/techniques for feedback from the robot so that the path is corrected and the robot follows the originally computed path without deviation? (Basically, a tracking mechanism to avoid deviation from the original computed path.)
Bosch, Freescale, InvenSense, ST and maybe others are releasing 9-DOF AHRS platforms containing their own fusion software and outputting filtered/sane/fused data (attitude as a quaternion and linear acceleration). I would like to use these for the quality of each company's fusion algorithm, and would like to merge GNSS position and velocity data with it. I have found multiple examples of heavy (more than 20 states) Kalman filters merging raw 9-DOF IMU data and GNSS position/velocity, but I have a hard time finding a computationally lighter version of GPS+AHRS fusion, as these new 9-DOF AHRS units already fuse the raw IMU data themselves and this process shouldn't be done twice. Would you have pointers on the algorithm(s) or type of filter to use? Thank you.
Currently I am building a robot with 2 incremental encoders and an optical mouse sensor. The reason for installing an optical mouse sensor is to provide better feedback when slippage happens at the encoders. I wonder if I could apply a Kalman filter to get better distance feedback from these 2 kinds of sensors, especially when the control input is unknown (for example, I push the car with my hand rather than applying a voltage to the motors). I have read some examples of using a Kalman filter (gyro+accel / encoder+GPS); in those, at least one of the variables used is an absolute measurement, while in my case both feedbacks are dead-reckoned. Any help is appreciated!
Good day! I am helping my little ones, 6 and 7, to develop a robot that can pick up and stack cubes three high as well as gather blocks. They came up with a design that enables them to pick up three cubes at a time when the cubes are lined up, then pull up the claw, turn, drive to another cube, place the cubes on the stationary cube and release. They got the claw made with two rods connected by gears to the motor, with the rods reinforced; then they made insect-like legs, 3 pairs of two on the rods, with gripper feet pads on the ends of the legs. All of this works, as it opens and closes! The problem is that when they try to close the claw on the cubes and pick up all three, the first pair of feet, closest to the motor, has a nice tight grip; the second, middle pair has a lighter grip and can just barely lift its block; and the third pair, farthest from the motor, doesn't even grip its block. I think it's because the second and third sets of feet are farther from the motor. How can they evenly distribute the tension load so the claw can pick up all three blocks? I tried putting elastics on the feet for better grip, but unless we put ten on each foot of the third set and maybe five on the second set it won't work. Even though that's a quick fix, I would like to help them figure out the proper way of spreading the load, so to speak. We also tried putting a small band on the third set of legs: the robot could still open and close, and that worked for the third set but not the second. We tried putting a band on the second and third sets, but then the legs wouldn't open anymore. I could use a lighter band, but is there another way? We only have one little motor to run it, so we can't give each leg set its own motor, and even if we did there would be weight issues. Thank you in advance!
As a non-native speaker I have a (maybe trivial, but to me not clear) question concerning the verb 'to teach'. Of course, from school (and online dictionaries) I know the past tense of 'teach' is 'taught', not 'teached'. But in robotics 'to teach' has a special meaning (like: 'to make special points/orientations known to the (arm) robot', e.g. by guiding the robot to those points/orientations). Does it make sense to have a different past tense for 'teach' (i.e. 'teached') in this case? Maybe a reference where it is used/explained? (I would say 'No. The past of teach is taught, and that's it.', but some of my colleagues, also not native speakers, have a different opinion.)
Can anyone advise me on the ideal perception sensors for a pick-and-place application using a robotic manipulator with ROS support? I have generally looked at things like the Kinect and stereo cameras (Bumblebee2), which provide depth that can be used with PCL for object recognition and gripper positioning. Are there any other sensors that would be preferred for such an application, and if not, what are the drawbacks of stereo cameras in comparison to the Kinect or other sensing capabilities? Thanks, Alan
We are building a hobby drone (quadcopter) with a camera for footage. To control the quad, I have been advised (on the web and here) to use a minimum of four channels: for power, for turning, etc. That means I need one channel for every separate task; for example, if I want to rotate the camera, then I suppose I need a 5th channel, and so on. Now my question: I have seen a lot of drones (AR.Drone, Walkera) which are controlled by an Android or iPhone app. Is the Wi-Fi used to connect to those drones single-channel or multi-channel? If single-channel, then how can I control different tasks, like turning the quad or moving the camera on different axes? Also, if I want the GPS location from the quad, do I need another transmitter? I am planning to use a Raspberry Pi 2 and an OpenPilot CC3D for flight control. P.S. This is my first drone, so kindly show some mercy if I ask about or don't understand your comments.
Though Denavit-Hartenberg notation is commonly used to describe the kinematics of a robot manipulator, some researchers prefer the product of exponentials instead, and some even claim that it's better. Which one should I use, and which one is generally better? Is the final solution the same for both kinematics and dynamics? Any suggestions? Reference: A Mathematical Introduction to Robotic Manipulation.
The first time after importing a project into the Eclipse workspace, we find that Eclipse cannot find WPILibJ. On any import line, e.g. import edu.wpi.first.wpilibj.*, Eclipse says "unresolved import edu.
In this picture, a sketch of a quadcopter is displayed with rotor's direction of motion. The magnitude of the rotational velocity is depicted by the thickness of the lines (thicker lines are higher velocity, thinner lines are lower velocity). I'm told this is the correct way to produce turning motion, but my intuition (which is usually wrong) tells me that the two pictures should be reversed. My argument is as follows: For the picture on the left, the two rotors of higher velocity are spinning clockwise. If the motion of the rotors of greater velocity are clockwise, shouldn't the quadcopter also rotate clockwise? What am I missing here?
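For reference, in such diagrams yaw is usually reasoned about through the rotors' reaction (drag) torques on the airframe rather than the rotor spin directions alone. A common simplified model of the net yaw moment (the rotor numbering and sign convention below are assumptions for illustration, not taken from the figure) is
$$
\tau_z = k_\tau\left(\omega_1^2 + \omega_3^2\right) - k_\tau\left(\omega_2^2 + \omega_4^2\right),
$$
where rotors 1 and 3 spin in one direction and 2 and 4 in the other, $k_\tau$ is a drag-torque coefficient, and each rotor's contribution to $\tau_z$ has the opposite sign to its own spin direction.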
I'm an electronics newbie. The part I'm talking about is this: DC 10-50v 12V 24V 48V 3000W 60A amps DC Motor Speed Control PWM HHO Controller My question is, can this type of part be used to control brushless DC motors?
I'm new to the forum, and since I've done some research over the past days I'd like to get some guidance about constructing and programming a quadcopter from scratch, since I'm completely new to a project like this. Quadcopter frame: I'm thinking about constructing an aluminum frame, 70 cm in diameter, which will weigh around 500 g. What kind of motors should I get so that the frame with the board, motors, etc. will be able to lift off? Board: I'm thinking of using an Arduino Uno or a Raspberry Pi 2 (from the little research I've done, I conclude that the Raspberry Pi could make my life a little easier since you can add Wi-Fi to it; the quadcopter will be controlled via a PC/laptop over Wi-Fi). What can you suggest, and why? ESC: As far as I've seen, in most similar projects people use ESCs to control the motors' throttle. Can I avoid that by programming PIDs that do the same job, so as not to use more hardware? About PIDs and code in general: I'm thinking about simulating the whole project in Simulink/MATLAB and somehow (if it's possible) converting the MATLAB code into C++ and downloading it onto the chip. What do you think about that? About the whole project: I'm trying to minimize the hardware as much as possible (use only 4 motors, the board with the chip on it, cables and probably some sensors) in order to minimize the total weight of the construction and of course the price. That's all for a start. I'm gladly waiting for your answers and ideas.
I am a beginner in robotics. I want to make a serious start from scratch, but I am confused about where to begin. Can anyone give some suggestions on the following?
1. As a beginner in robotics, are there some simple and basic robots or circuit designs which I can build myself at home (so that I can gain practical knowledge of robots)?
2. Or should I first read books (can anyone suggest some good reference book names, articles, links, or free online video lecture series)?
I have recently been studying Kalman filters. I was wondering: if the sensor model of a robot gives a unimodal Gaussian (as is assumed for the LKF) and the environment is pre-mapped, then can the sensor reading be completely trusted (i.e. maximum value of the Kalman gain), removing the need for odometry for localization or target tracking purposes, and hence the need for the Kalman filter? Please clarify.
In robot kinematics, we have $e^{(\theta \cdot twist)}$, where $twist$ is a 6x1 vector. How do I get the 4x4 homogeneous transformation matrix using the product of exponentials?
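For context, a commonly used closed form (standard in screw-theory texts) for a twist $\xi = (v, \omega)$ with $\|\omega\| = 1$ is
$$
e^{\hat{\xi}\theta} =
\begin{pmatrix}
e^{[\omega]_\times \theta} & \left(I - e^{[\omega]_\times \theta}\right)(\omega \times v) + \omega\omega^{\top} v\, \theta \\
0_{1\times3} & 1
\end{pmatrix},
\qquad
e^{[\omega]_\times \theta} = I + [\omega]_\times \sin\theta + [\omega]_\times^{2}\,(1-\cos\theta),
$$
while for a pure translation ($\omega = 0$) it reduces to $e^{\hat{\xi}\theta} = \begin{pmatrix} I & v\theta \\ 0 & 1 \end{pmatrix}$. The full product-of-exponentials map of an $n$-joint arm is then the matrix product $e^{\hat{\xi}_1\theta_1}\cdots e^{\hat{\xi}_n\theta_n}\,g(0)$, with $g(0)$ the home configuration.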
I posted a similar question before, as I was just getting started with the project, but I wasn't specific enough, so I got a weak response from the SE community. Now I am at a point where I have Python code which is supposed to rotate a servo through Pololu's Maestro serial servo controller (https://www.pololu.com/product/207). Based on the "Serial Interface" section of the user's guide (https://www.pololu.com/product/207/resources), I sent a sequence of numbers starting with decimal 170 and 12, which are the "first command byte" and the "device number data byte", respectively. The user guide says that 12 is the default device number, so I tried changing it to 18 because that's how many servos my servo controller can drive. But that doesn't make much difference, because the servo doesn't rotate at all. The numbers after that are the same as the example from the user's guide. I am not sure what the 4, 112 and 46 are doing, but the 0 targets servo port "0" on the servo controller (the port to which my servo is connected). The servo doesn't move, regardless of what sequence of numbers I put in. I have very little experience, so if you guys could point me in the right direction or at least point to some useful resources on the web, I'd be very grateful.

    import serial
    import struct
    import time
    import Adafruit_BBIO.UART as UART

    drive_Motor_Port = serial.Serial(port="/dev/ttyO1", baudrate=9600)
    drive_Motor_Port.close()
    drive_Motor_Port.open()
    drive_Motor_Port.write(chr(170));  # user guide says we must start with 0xAA = 170
    drive_Motor_Port.write(chr(18));   # device number (12 is the default; changed to 18 here)
    drive_Motor_Port.write(chr(04));   # remaining bytes copied from the user guide example
    drive_Motor_Port.write(chr(00));   # the 0 targets servo channel 0
    drive_Motor_Port.write(chr(112));
    drive_Motor_Port.write(chr(46));
    time.sleep(5);
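As a point of comparison, here is a minimal sketch of the same "Set Target" command using the Maestro's compact protocol, which does not need a device number; in the Pololu protocol form, the same data bytes are simply preceded by 0xAA and the device number, with the command byte's most-significant bit cleared (0x04 instead of 0x84). The 0x84 command byte, the quarter-microsecond target units, and the UART name below are taken from the publicly documented protocol and my assumptions about the wiring, so verify them against the user's guide for your board:

    import serial

    # Assumed wiring: Maestro RX on BeagleBone UART1 (/dev/ttyO1).
    # If Adafruit_BBIO is used, the UART may need to be enabled first, e.g.:
    #   import Adafruit_BBIO.UART as UART
    #   UART.setup("UART1")
    port = serial.Serial("/dev/ttyO1", baudrate=9600, timeout=1)

    def set_target(channel, target_us):
        # Compact-protocol Set Target command (0x84).
        # The Maestro expects the target in quarter-microseconds, split into
        # two 7-bit data bytes, low bits first.
        quarter_us = int(target_us * 4)
        command = bytearray([0x84,
                             channel & 0x7F,
                             quarter_us & 0x7F,
                             (quarter_us >> 7) & 0x7F])
        port.write(command)

    set_target(0, 1500)   # center a typical servo on channel 0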
What are the most common ZigBee IP modules for creating a full wireless mesh? I know that it takes (1) a coordinator and (2) routers to create a full mesh, but I am interested in which modules would be better to buy, comparing price, quality and tutorial material. It would be great if you know some ZigBee modules based on ARM (Cortex) or Atmel MCUs, and if you have some additional tutorial materials for controlling and understanding those modules. I am looking for ZigBee modules only, NOT XBee, because they differ in how the network is organized. For example: first, a ZigBee mesh can create its own zones to control devices, but an XBee mesh (from Digi) can create only a full mesh, one big or huge zone to control devices. Secondly, ZigBee modules have AES encryption and can lock down the network and prevent other nodes from joining; for XBee, AES encryption is coming soon...
I am looking for a cheap wheeled robot that can be controlled remotely. I do not really care how (RF, Bluetooth, Wi-Fi, IR, other?), as long as I can control around 10 robots without interference in a small arena (they are always in line of sight). I would like to emphasize that I do not need them to be programmable, and it is important that they are cheap.
I have been asked to write code to implement serial communications with a camera in order to control its pedestal (movable base) as well as set a few dozen other camera options. The catch is that I have to make it usable by ROS. What would be the best practice to implement this functionality in ROS? I understand the concept of services, but I think that there should be a better way than creating a different service/file for each option. Thanks, Daniel.
Note: I'm a firmware developer experienced with sensors and networks, but not much with motors. I am trying to build a small hobby robot, like a cat-sized spider. I am thinking of using servo motors with position control, so I don't have to use encoders to know where the motor is. Assuming six legs (I know, spiders have eight), with each leg being able to move up-down and left-right, that already translates to 12 motors. If you want to bend a knee, that gets the number to 18. 18 motors on such a small robot is overkill, isn't it? I have thought of a couple of ideas, but not having a strong mechanical background, I cannot tell whether they are doable/sane. One of my ideas is to use a magnet on the end of the limb (the end inside the chassis) and a small permanent magnet above it. The magnets attract each other and this keeps the limb firm under the weight of the robot. A stronger controllable magnet (a coil) would attract the limb even more and let it lift in the air. The following drawing may help: This would allow the up-down movement of the leg, and a servo would control its left-right movement. However, I fear that such a system would not be strong enough to hold under the weight of the robot, or whether a reasonable coil would be compact enough. In short, my question is, how can I control six legs each with two or three degrees of freedom with a reasonable number of motors? Is having one motor per degree of freedom the only possibility?
I would like to control my 7-DOF robot arm to move along a Cartesian trajectory in the world frame. I can do this just fine for translation, but I am struggling with how to implement something similar for rotation. So far, all my attempts seem to go unstable. The trajectory is described as a translational and rotational velocity, plus a distance and/or timeout stopping criterion. Basically, I want the end-effector to move a short distance relative to its current location. Because of numerical errors, controller errors, compliance, etc., the arm won't be exactly where you wanted it from the previous iteration, so I don't simply do $J^{-1}v_e$. Instead, I store the pose of the end-effector at the start, then at every iteration I compute where the end-effector should be at the current time, take the difference between that and the current location, and feed that into the Jacobian. I'll first describe my translation implementation. Here is some pseudo OpenRave Python:

    # velocity_transform specified in m/s as relative motion
    def move(velocity_transform):
        t_start = time.time()
        pose_start = effector.GetTransform()
        while True:
            t_now = time.time()
            t_elapsed = t_now - t_start
            pose_current = effector.GetTransform()
            translation_target = pose_start[:3,3] + velocity_transform[:3,3] * t_elapsed
            v_trans = translation_target - pose_current[:3,3]
            vels = J_plus.dot(v_trans)  # some math simplified here

The rotation is a little trickier. To determine the desired rotation at the current time, I use spherical linear interpolation (SLERP). OpenRave provides a quatSlerp() function which I use. (It requires conversion into quaternions, but it seems to work.) Then I calculate the relative rotation between the current pose and the target rotation. Finally, I convert to Euler angles, which is what I must pass into my AngularVelocityJacobian. Here is the pseudo code for it. These lines are inside the while loop:

    rot_t1 = np.dot(pose_start[:3,:3], velocity_transform[:3,:3])  # desired rotation of end-effector 1 second from start
    quat_start = quatFromRotationMatrix(pose_start)   # start pose as quaternion
    quat_t1 = quatFromRotationMatrix(rot_t1)          # rot_t1 as quaternion
    # use SLERP to compute proper rotation at this time
    quat_target = quatSlerp(quat_start, quat_t1, t_elapsed)  # world_to_target
    rot_target = rotationMatrixFromQuat(quat_target)         # world_to_target
    v_rot = np.dot(np.linalg.inv(pose_current[:3,:3]), rot_target)  # current_to_target
    v_euler = eulerFromRotationMatrix(v_rot)  # get rotation about world axes

Then v_euler is fed into the Jacobian along with v_trans. I am pretty sure my Jacobian code is fine, because I have given it (constant) rotational velocities and it tracks them OK. Note, I am not asking you to debug my code. I only posted code because I figured it would be clearer than converting this all to math. I am more interested in why this might go unstable. Specifically, is the math wrong? And if this is completely off base, please let me know. I'm sure people must go about this somehow. So far, I have been giving it a slow linear velocity (0.01 m/s) and zero target rotational velocity. The arm is in a good spot in the workspace and can easily achieve the desired motion. The code runs at 200 Hz, which should be sufficiently fast. I can hard-code the angular velocity fed into the Jacobian instead of using the computed v_euler, and then there is no instability. So there is something wrong in my math. This works for both zero and non-zero target angular velocities.
Interestingly, when I feed it an angular velocity of 0.01 rad/s, the end-effector rotates at a rate of 90 deg/s. Update: If I put the end-effector at a different place in the workspace so that its axes are aligned with the world axes, then everything seems to work fine. If the end-effector is 45 degrees off from the world axes, then some motions seem to work, while others don't move exactly as they should, although I don't think I've seen it go unstable. At 90 degrees or more off from the world axes, it goes unstable.
I'm wondering about a good software package to draw a robot manipulator and indicate the DH parameters and the different axes. Any suggestions?
A professor at my university is asking me to study robotics with him. By robotics I understand programming a robot to move around, avoid obstacles, figure out a maze, etc. He sent me some manuals for the Khepera II. When I first read the specs, I was surprised by how low they are: Motorola 68331 CPU @ 25 MHz, 512 KB RAM, 512 KB Flash. But then I looked at some of the new Arduino boards and they had similar specs, so maybe that's OK; I guess the CPU speed and RAM aren't that important if I'm going to control the robot from a normal computer that can handle the real-time computation. What about the software? I glanced at the manuals and saw only C and assembly code. Khepera I is from 1995 and Khepera II is from 2001, and I think robots have advanced a lot since 2001. Is using the Khepera II adequate for university-level learning, considering I can probably spend $200-300 for a newer one? I ask in terms of the hardware of the board, motors and sensors, as well as programmability. This question might seem vague; I'm ready to improve it by giving more detail upon request.
I am looking for sensors to give me the position of a ball on a plate, in order to build a ball-and-plate system. What came to mind is to use image processing, but since I have never done any serious image processing I don't know if it is a good idea. Can you please help me find some 'cheap' sensors to get the position of the ball on the plate? Thank you for your attention.
Here is what I did on Ubuntu 14.04 LTS running on a Toshiba Satellite (Intel i7, NVIDIA, with USB 3.0 and 2.0 ports). Steps 1, 2, 3 refer to scripts found here:
1. Set up ROS by running this install-ros.sh script
2. Set up OpenCV by running this install-opencv.sh script
3. Set up PCL by running this install-pcl.sh script
4. Installed libfreenect2 via the instructions at the master branch
5. Made the changes and installed
6. Cloned the repository into an empty catkin workspace
7. Sourced the respective setup.zsh files from my /opt/ros/... and from the devel/ folders
At this point, I have encountered no issues. I tried running:
    rosrun kinect2_bridge kinect2_bridge
and I get the following message:
    [ERROR] [1424391698.413758209]: [registerPublisher] Failed to contact master at [localhost:11311]. Retrying..
So I assume I need to run roscore or something like that. So if I run roscore in one terminal and rosrun kinect2_bridge kinect2_bridge in another terminal, I get the following segmentation fault:
    [ERROR] [1424393345.496836066]: [registerPublisher] Failed to contact master at [localhost:11311]. Retrying...
    [ INFO] [1424393446.243884489]: Connected to master at [localhost:11311]
    parameter:
      base_name: kinect2
      sensor:
      fps_limit: -1
      calib_path: /home/parthmehrotra/catkin_ws/src/iai_kinect2/kinect2_bridge/data/
      use_png: false
      jpeg_quality: 90
      png_level: 1
      depth_method: opengl
      depth_device: -1
      reg_method: default
      reg_devive: -1
      max_depth: 12
      min_depth: 0.1
      queue_size: 2
      bilateral_filter: true
      edge_aware_filter: true
      publish_tf: false
      base_name_tf: kinect2
      worker_threads: 4
    [1] 2679 segmentation fault (core dumped)  rosrun kinect2_bridge kinect2_bridge
I must be overlooking something really trivial. Thanks for taking the time to help me.
I have implemented 2D SLAM using an EKF. The map is feature-based, with only one landmark for the sake of simplicity. I've read some papers regarding this matter; they plot the $\pm3\sigma$ bounds together with the error. I would like to make sure that I'm doing the right thing. In my project, I have the estimate of the landmark's position and its true values. The true values here are the ones that the sensor measures, not the ideal case. For example, the ideal case of the landmark position is (30,60), but this value is not accessible by any means, therefore I will consider the values coming from the sensor to be the true values. Now the error in the landmark's position in the x-axis is formulated as follows $$ \text{error}_{x} = \hat{x} - x $$ The picture below shows the error in blue. The red represents the error bounds, which are $\pm 3 \sigma_{x}$. My question now is: is this the way people plot the errors in academic papers? I've seen some papers in which the bounds don't look like mine. Mine decrease monotonically, whereas in some papers they are more curved, which seems more reasonable to me. Any suggestions?
Is it possible to distinguish the properties "time-varying" and "nonautonomous" in dynamical systems with regard to Lyapunov stability analysis? Does it make a difference if the system depends explicitly on $t$ or indirectly on $t$ through a time-varying parameter? I want to explain the problem in detail. Let a dynamical system be denoted by $\dot x = f$, with state $x$. We say that a dynamical system is nonautonomous if the dynamics $f$ depend on time $t$, i.e. $$\dot x = f(t,x).$$ For instance the systems $$ \dot x = - t x^2 $$ and $$ \dot x = -a(t)x,$$ are nonautonomous. Let $a(t)$ be a bounded time-varying parameter, i.e. $||a(t)||<a^+$, and strictly positive, i.e. $a(t) > 0$. In particular, the second example would more likely be called a time-varying linear system, but of course it is nonautonomous. In Lyapunov stability analysis, autonomous and nonautonomous systems must be strongly distinguished to make assertions about the stability of the system, and the Lyapunov analysis for nonautonomous systems is much more difficult. And here, for me, some questions arise. When I want to analyze the stability of the second example, must I really use the Lyapunov theory for nonautonomous systems? For the candidate $V = 1/2 x^2$ it follows that $$\dot V = -a(t)x^2,$$ which is negative definite. Is the origin really asymptotically stable, as I suppose, or must I take the nonautonomous characteristic into account in this case? I would suppose it makes a difference whether a system depends explicitly on $t$ as in the first example or only indirectly through a time-varying parameter, since $t$ approaches infinity but a parameter does not.
Is there a way to make a quadcopter maintain a steady hover (no lateral movement, constant altitude) while tilted left or right? If so, how can I accomplish this?
I want to build an automatic agricultural robot for my final-year diploma project. The basic idea is to program an 8051 to drive the robot along a fixed path in a farm for ploughing, which I am planning to do by setting a particular distance up to which it will go straight, then take a U-turn and plough the next lane. The width of the farm will also be set, so when it completes the full farm it will stop and go back to the starting point. The only catch is reprogramming it for the size of the farm of the person who uses it. So I want to add a number pad with which the user can set the length and width of the farm, as well as the width of each lane, as per his needs without professional help. Can this be done using an 8051, or should I go for AVR or PIC microcontrollers? I have just started studying the programming and interfacing of the 8051, so I am not that good at programming. If it's possible, how do I do it? Can someone please help me with a circuit diagram for this project? After everything I said I need in my project, if I still have a spare port on the microcontroller I would love to add a fertilizer sprayer or water irrigation system and a GSM module, so that a farmer can simply ask the robot to start working using his mobile phone. As I am making just a prototype, I want it to be as small as possible. Suggestions are welcome.
Consider a tank-like robot with a motor driver channel for each side (two motors on the left and two motors on the right) and an IMU. I'm interested in driving the robot in a straight line using the yaw data from the gyro and magnetometer of the IMU, compensating for the noise caused by slightly differently behaving motors, and having the possibility to change the desired heading angle. For example, some event comes and I want the car to switch the desired heading to +120 degrees and turn while driving. I'm using an Arduino Uno, a MinIMU-9 v3 and two DRV8838 single brushed DC motor drivers from Pololu. Can you please give me some hints and a short pseudo-code example? Thanks!
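Since the question asks for pseudo-code, here is a minimal Python-style sketch of a heading-hold loop; the function names (read_yaw_deg, set_motor), the gains, and the PWM range are placeholders rather than references to a specific Arduino or Pololu library:

    import time

    def read_yaw_deg():
        # Stub: replace with the fused gyro/magnetometer yaw from the IMU.
        return 0.0

    def set_motor(side, pwm):
        # Stub: replace with the PWM/PHASE outputs driving the motor drivers.
        pass

    def wrap_180(angle):
        # Wrap a heading error into [-180, 180) so the robot turns the short way.
        return (angle + 180.0) % 360.0 - 180.0

    desired_heading = 0.0            # degrees; change this (e.g. += 120) on an event
    kp, ki, kd = 2.0, 0.0, 0.1       # placeholder gains, tune on the robot
    base_speed = 150                 # placeholder forward PWM (0-255)
    integral, last_error = 0.0, 0.0
    last_time = time.time()

    while True:
        now = time.time()
        dt = max(now - last_time, 1e-3)
        last_time = now

        error = wrap_180(desired_heading - read_yaw_deg())
        integral += error * dt
        derivative = (error - last_error) / dt
        last_error = error

        correction = kp * error + ki * integral + kd * derivative

        # Differential drive: steer by speeding up one side and slowing the other.
        set_motor("left", base_speed - correction)
        set_motor("right", base_speed + correction)
        time.sleep(0.01)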
I'm a beginner in Robotics. I'd like to ask what are the minimum/recommended specs for a microcontroller to run a real-time system such as Linux RTAI? What is the popular microcontroller for running Linux RTAI? Thank you.
With the bee hive collapses, growers are desperate for pollination options. Is anyone working on swarms of tiny flying robots to augment the bees? They could look for a certain color, poke around inside the flower for a moment, and move on to the next. When they need recharging, they fly back to their hive (for the same reason bees fly back). Of course, replacing germinators that run the seeds through their digestive systems would be a different problem.
How can a ROS node written in Python subscribe to multiple topics and publish to multiple topics? All the examples I found were for a single topic. Is this an event-driven model, so that subscribing to multiple "events" is allowed, or is it more like a loop, so that it can listen to only one "source" at a time?
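For what it's worth, rospy's subscription model is callback-based (event-driven): each rospy.Subscriber registers its own callback, and all of them are serviced while rospy.spin() keeps the node alive, so one node can listen to many topics at once. A minimal sketch (the topic names and message types below are made up for illustration):

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String, Int32

    class Relay(object):
        def __init__(self):
            # Two publishers on different topics.
            self.pub_text = rospy.Publisher("text_out", String, queue_size=10)
            self.pub_count = rospy.Publisher("count_out", Int32, queue_size=10)
            # Two subscribers, each with its own callback.
            rospy.Subscriber("text_in", String, self.on_text)
            rospy.Subscriber("count_in", Int32, self.on_count)

        def on_text(self, msg):
            # Called whenever a message arrives on text_in.
            self.pub_text.publish(String(data=msg.data.upper()))

        def on_count(self, msg):
            # Called whenever a message arrives on count_in.
            self.pub_count.publish(Int32(data=msg.data + 1))

    if __name__ == "__main__":
        rospy.init_node("multi_topic_relay")
        Relay()
        rospy.spin()   # process callbacks for all subscriptions until shutdown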
I have equations of a dynamic system. I need to figure out what this physical system is. The equations are: \begin{align} \dot{x}_1&=bx_1+kx_2+x_3\\ \dot{x}_2&=x_1\\ \dot{x}_3&=\alpha (u-x_2)-\beta x_3 \end{align} All I can figure out is that it is maybe a mass-spring-damper system, plus a feedback control, but I am not quite sure about the terms $x_3$ and $\dot{x}_3$. What do these two terms mean?
I was watching Sebastian Thrun's video course on AI for robotics (freely available on udacity.com). In his final chapter on GraphSLAM, he illustrates how to set up the system of equations for the mean path locations $x_i$ and landmark locations $L_j$. To set up the matrix system, he imposes each robot motion and landmark measurement constraint twice. For example, if a robot motion command is to move from $x_1$ by 5 units to the right (reaching $x_2$), I understand this constraint as $$-x_2+x_1= -5$$ However, he also imposes the negative of this equation $$x_2-x_1=5$$ as a constraint and superimposes it onto a different equation, and I'm not sure why. In his video course, he briefly mentions that the matrix we're assembling is known as the "information matrix", but I have no idea why the information matrix is assembled in this specific way. So, I tried to read his book Probabilistic Robotics, and all I can gather is that these equations come from obtaining the minimizer of the negative log posterior probability incorporating the motion commands, measurements, and map correspondences, which results in a quadratic function of the unknown variables $L_j$ and $x_i$. Since it is quadratic (and the motion/measurement models are also linear), the minimum is obviously obtained by solving a linear system of equations. But why is each of the constraints imposed twice, once as a positive quantity and again as the negative of the same equation? It's not immediately obvious to me from the form of the negative log posterior probability (i.e. the quadratic function) that the constraints must be imposed twice. Why is the "information matrix" assembled this way? Does it also hold true when the motion and measurement models are nonlinear? Any help would be greatly appreciated.
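To make the bookkeeping concrete for the single motion constraint above (this is the generic least-squares view, not a claim about the exact derivation in the course): the constraint enters the negative log posterior as a quadratic penalty, and setting its gradient to zero yields one equation per variable it touches,
$$
J(x_1, x_2) = \frac{1}{2\sigma^2}\left(x_2 - x_1 - 5\right)^2
\;\Longrightarrow\;
\frac{\partial J}{\partial x_1} = -\frac{1}{\sigma^2}\left(x_2 - x_1 - 5\right) = 0,
\qquad
\frac{\partial J}{\partial x_2} = \frac{1}{\sigma^2}\left(x_2 - x_1 - 5\right) = 0,
$$
so the same constraint appears once with each sign: one copy in the row belonging to $x_1$ and one in the row belonging to $x_2$. Equivalently, with constraint Jacobian $[-1,\ 1]$, the contribution to the information matrix is $\frac{1}{\sigma^2}[-1,\ 1]^\top[-1,\ 1]$ (and the right-hand side picks up the corresponding $\pm 5/\sigma^2$ entries).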
I have been using the FreeIMU library successfully, but now I want to add an external magnetometer that I can mount away from my motors. I've figured out how to modify the FreeIMU library to use an external magnetometer and I am getting data. What I can't figure out is what I need to change now that my magnetometer orientation has changed. On the FreeIMU it is mounted like this. The external compass is mounted like this: upside down, rotated 180° around x. I am changing the values inside the

    void HMC58X3::getRaw(int16_t *x, int16_t *y, int16_t *z) {
        *x = cache_x;
        *y = cache_y;
        *z = cache_z;
    }

function, as all the other code calls this to get the magnetometer data. So far I have tried:
- Changing the sign of the y and z values after I have got them from the magnetometer.
- Changing the sign of the y value only.
- Changing the sign of the z value only.
- Adding 180 to both the y and z values.
- Subtracting 180 from both the y and z values.
- Subtracting 180 from y and adding 180 to z.
- Adding 180 to y and subtracting 180 from z.
- Changing nothing.
The calibration GUI always gives me strange results, and the changes above just rotate/mirror the magnetometer's red and green graphs. I am unable to get rid of the keyhole shape. The red is XY, the green YZ, the blue ZX. Does the fact that ZX works mean that my issue is with the Y value? This is how it looks using the on-board magnetometer. What should I try next? Thanks, Joe
EDIT: I tried rotating the external magnetometer so it is in the same orientation as the FreeIMU magnetometer and I still get the same result, so I don't think the difference in orientation is causing the problem. So then I thought maybe it's because the FreeIMU is mounted central to the rotation axis and the external magnetometer is mounted about 20 cm above it. I tested this by rotating only the external magnetometer around itself and I still got the same result. This all seems strange; do you think it's possible that the external magnetometer I bought is faulty? Any way to confirm it is working properly on its own? Thanks
EDIT: Managed to get circles plotting by changing the gain from 0 to 1. It seems my new magnetometer was being saturated. Now I just need to work out how to change my values around so the orientation is correct.
I'm a software researcher who, in my spare time, mentors a robotics team, helping on the software side of things. For years, I keep coming back to the same question: how do we determine the robot's position and heading during our competitions? We have tried a number of things with varying degrees of success/failure: encoders on the drive wheels, accelerometers, gyroscopes, etc. I recently bought an IMU with a 3-axis accelerometer, 3-axis gyro, and 3-axis magnetometer, all preprocessed by an Arduino and output to a serial port. I thought surely there must be a way to take all these measurements and get a composite view of position and heading. We are using Mecanum wheels on this particular robot, so wheel encoders are not particularly useful. I've looked around and there's a lot of talk about estimating orientation as a quaternion using sensor fusion on similar boards, but it is very unclear to me how to take the quaternion and the estimate and come up with the x, y distance from the starting position. Now, my time window for these measurements is small, about 15 seconds, but I need it to be pretty accurate within that window. I'm about ready to abandon the IMU and try something else. One idea is to use a USB ball mouse to try and track robot motion, but I'm certain that the mouse is going to get banged around way too much, leading to noise and invalid results. As a side note: the robot is about 2 ft x 3 ft at the base, weighing in at 120 lbs. Any thoughts or suggestions appreciated.
I'm planning to build an omnidirectional platform that will support an approximately 180 kg robotic arm. The platform will be equipped with Mecanum wheels. I would like to have some kind of suspension to avoid the wheels losing contact with the floor on small bumps (let's say 2 cm). The first suspension type I thought about was the rocker-bogie type, but I'm afraid that changes of the arm's center of mass during its movement will introduce too much stress on the rocker-bogie mechanism. What other choices would you recommend? Or will a rocker-bogie be fine after all?
I have a 2D sensor which provides a range $r$ and a bearing $\phi$ to a landmark. In my 2D EKF-SLAM simulation, the sensor has the following specifications $$ \sigma_{r} = 0.01 \text{m} \ \ ,\sigma_{\phi} = 0.5 \ \text{deg} $$ The location of the landmark on the x-axis is 30. The EKF assumes Gaussian noise, therefore the location of the landmark is represented via two quantities, namely the mean $\mu_{x}$ and the variance $\sigma_{x}$. In the following graph, the green is the mean $\mu_{x}$, which is very close to the true location (i.e. 30). The black is the measurements and the red is $\mu_{x} \pm 3 \sigma_{x}$. I don't understand why the uncertainty is so big while I'm using a rather accurate sensor. The process noise for the robot's pose is $\sigma_{v} = 0.001$, which is a small noise. I'm using C++. Edit: for people who ask about the measurements, this is my code $$ r = \sqrt{ (m_{j,y} - y)^{2} + (m_{j,x} - x)^{2}} + \mathcal{N}(0, \sigma_{r}^{2}) \\ \phi = \text{atan2} \left( m_{j,y} - y, \ m_{j,x} - x \right) + \mathcal{N}(0, \sigma_{\phi}^{2}) $$

    std::vector<double> Robot::observe( const std::vector<Beacon>& map )
    {
        std::vector<double> Zobs;
        for (unsigned int i(0); i < map.size(); ++i)
        {
            double range, bearing;
            range = sqrt( pow(map[i].getX() - x,2) + pow(map[i].getY() - y,2) );
            // add noise to range
            range += sigma_r*Normalized_Gaussain_Noise_Generator();
            bearing = atan2( map[i].getY() - y, map[i].getX() - x) - a;
            // add noise to bearing
            bearing += sigma_p*Normalized_Gaussain_Noise_Generator();
            bearing = this->wrapAngle(bearing);
            if ( range < 1000 ){
                // store measurements (range, angle) for each landmark.
                Zobs.push_back(range);
                Zobs.push_back(bearing);
                //std::cout << range << " " << bearing << std::endl;
            }
        }
        return Zobs;
    }

where Normalized_Gaussain_Noise_Generator() draws from $\mathcal{N}(0, 1)$:

    double Robot::Normalized_Gaussain_Noise_Generator()
    {
        double noise;
        std::normal_distribution<double> distribution;
        noise = distribution(generator);
        return noise;
    }

For the measurements (i.e. the black color), I'm using the inverse measurement function, given the estimate of the robot's pose and the true measurement in polar coordinates, to get the location of a landmark. The actual approach is as follows $$ \bar{\mu}_{j,x} = \bar{\mu}_{x} + r \cos(\phi + \bar{\mu}_{\theta}) \\ \bar{\mu}_{j,y} = \bar{\mu}_{y} + r \sin(\phi + \bar{\mu}_{\theta}) $$ This is how it is stated in the Probabilistic Robotics book. This means that the measurements in the above graph are indeed the predicted measurements, not the true ones. Now under the same conditions, the true measurements can be obtained as follows $$ \text{m}_{j,x} = x + r \cos(\phi + \theta) \\ \text{m}_{j,y} = y + r \sin(\phi + \theta) $$ The result is in the graph below, which means there are no correlations between the true measurements and the robot's estimate. This leads me to the same question: why does the uncertainty behave like that?
Most academic papers characterise the rotational dynamics about the x axis as $\ddot{\phi} = \frac{1}{J_x}\tau_\phi$. As far as I can tell, this characterises the angular acceleration and not the actual angle $\phi$ itself, and yet the PID controllers academics use to control this take $\phi_{setpoint}-\phi_{measured}$ as the error signal. Should the error signal not be $\ddot{\phi}_{setpoint}-\ddot{\phi}_{measured}$ (using gyro values) instead? Why are they using the Euler angle instead of its second derivative to control the rotation? Is it possible to stabilise a quadcopter using Euler angles only?
I salvaged some parts off my dead Roomba 650, and I'm trying to use the drive motor assembly. I got the pinout of the connector, but I don't know what voltage, PWM, or other specifications apply to this motor. I've attached a picture of the drive motor assembly. Any help would be appreciated! Thank you, Pratik Edit: The image is here: Drive Motor Module
Can anyone help me? I am doing a project on robotic surgery and I would like someone to help and advise me. I wonder if anyone could give me some data on tests he or she has run on a surgical robot... Thank you for your attention! Anything else will be much appreciated!
What are the major differences between motion planning and trajectory generation in robotics? Can the terms be used interchangeably?
I have the Create 2 and have it hooked up to an Arduino. Almost all the commands work fine except retrieving sensor information. If I send a request for packet 18, I get back values that, while consistent, don't match up, unless I am missing something. So if I press the Clean button I get 127, or 11111110, and if I then press Spot I get something like 11111010. I might be messing up my endianness, but regardless the data isn't formatted how I expected it to be according to the spec sheet. I have three Create 2s and they all do the same thing. Any ideas? I am using a 2N7000 along with the tutorial from the site, but I don't think that has anything to do with the formatting of the byte. This is the library I am using: https://github.com/DomAmato/Create2 Sorry to take so long to get back on this; anyway, the data we get is always formatted this way. It is not a baud rate issue, since it understands the commands properly.

                 day  hour  minute  schedule  clock  dock  spot  clean
    day            3     x       x         x      x     x     x      x
    hour           6     7       x         x      x     x     x      x
    minute        13    14      15         x      x     x     x      x
    schedule       x     x       x         x      x     x     x      x
    clock          x     x       x         x      x     x     x      x
    dock          27    29      30         x      x    31     x      x
    spot          55    59      61         x      x    62    63      x
    clean        111   119     123         x      x   125   126    127

Note that the Schedule and Clock buttons return nothing.
The link twists can be obtained, and thus the spatial manipulator Jacobian can be computed, but when it comes to the body Jacobian, it becomes difficult. Moreover, the adjoint transformation relates both Jacobians; however, that is 4x4 while the Jacobian is 6xn, so how does it work? As in the picture, he is getting a body Jacobian for each link, not one Jacobian matrix for the whole robot, and I don't understand why. Any help is highly appreciated. Like this example, or here for full details.
Currently I am building an omnidirectional robot with 4 DC motors with embedded incremental encoders. However, with a constant PWM input, I am not able to get the motors to rotate at a "relatively stable" speed; referring to the figure, it can be observed that the linear speed of the motors varies within a 10 cm/s range. I believe one possible reason is that the PWM signal generated by my Arduino Mega controller is not good enough. My problem is: how can I implement a stable PID controller in this case? As the speed of the motor varies even with the same input, I believe extra work like adding a filter is needed? Any advice is appreciated >.< Thank you
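For illustration, a minimal discrete PI(D) velocity loop with a simple low-pass filter on the measured speed might look like the sketch below; the encoder/PWM helper functions and the gains are placeholders rather than Arduino Mega specifics, since the question does not include the existing code:

    import time

    kp, ki, kd = 2.0, 5.0, 0.0       # placeholder gains; tune per motor
    alpha = 0.8                      # low-pass factor for the noisy speed estimate
    target_speed = 0.30              # m/s

    def read_encoder_speed():
        # Stub: return wheel speed in m/s computed from encoder counts.
        return 0.0

    def set_pwm(duty):
        # Stub: write a duty cycle (0..255) to the motor driver.
        pass

    integral, last_error = 0.0, 0.0
    filtered_speed = 0.0
    last_time = time.time()

    while True:
        now = time.time()
        dt = max(now - last_time, 1e-3)
        last_time = now

        raw_speed = read_encoder_speed()
        # First-order low-pass filter to smooth encoder quantization noise.
        filtered_speed = alpha * filtered_speed + (1.0 - alpha) * raw_speed

        error = target_speed - filtered_speed
        integral += error * dt
        derivative = (error - last_error) / dt
        last_error = error

        duty = kp * error + ki * integral + kd * derivative
        set_pwm(min(max(duty, 0), 255))   # clamp to the PWM range
        time.sleep(0.02)                  # ~50 Hz control loop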
How do you stream video feed from a camera on a drone? I would think that at high altitudes Wi-Fi won't work. So what would you usually do, and how?
How do you select the following two angles in the design of a rocker-bogie system: the angle between the two arms of the main rocker, and the angle between the two arms of the bogie?
I'm looking for ways to detect human presence behind walls in close proximity (around 10 feet) in whatever way possible! The problem is I can't code! (I hope it's OK that I'm posting here.) I know there are different sensors, but they all seem to detect target humans by motion. How do you detect still persons? Is there a sound amplification device that magnifies human breathing by 20x? Or detects body heat? Or picks up radiation or something off humans?
I'm constructing a 2-wheeled balancing robot which uses a PID controller. I've tuned my parameters in numerical simulations based on a continuous inverted pendulum system, so that the simulated inverted pendulum balances by controlling the horizontal (linear) cart acceleration $\ddot{x}$. Now that I've done this, I want to take the next step and turn my PID control commands into electrical commands to a DC motor to produce the desired linear acceleration $\ddot{x}$. However, I'm not sure exactly how to do this for my specific robot's motors. Are there experimental tests I should run to determine how to convert PID commands into DC motor acceleration commands? Or is there a formula to do this based on the motor's specifications? Update: The non-linear dynamic equation I'm using is $$L\ddot{\theta}=g\sin(\theta)+\ddot{x}(t)\cos(\theta)+Ld(t)$$ where $\ddot{x}(t)$ is the linear acceleration, $g$ is the acceleration due to gravity, $\ddot{\theta}$ is the angular acceleration, and $d(t)$ is an external disturbance to the system. To simplify things, I've linearized the equations around $\theta\approx0$, yielding $$L\ddot{\theta}=g\theta+\ddot{x}(t)+Ld(t)$$ I've assumed that the only control input is the cart's linear acceleration $\ddot{x}(t)$, and chose this control command as $\ddot{x}(t)=K_1\theta(t) + K_2\int_0^t\theta(t) dt + K_3\dot{\theta}$, where $K_i$ are the PID gains.
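For reference, one common way to relate a commanded acceleration to a motor voltage (a sketch under the usual permanent-magnet DC motor assumptions, with all symbols below introduced here rather than taken from the question) is to go through wheel torque and the steady-state motor model:
$$
\tau_{\text{wheel}} \approx m\,\ddot{x}\,r,
\qquad
\omega_{\text{wheel}} = \frac{\dot{x}}{r},
\qquad
V \approx \frac{R_a}{K_t}\,\frac{\tau_{\text{wheel}}}{N} + K_e\,N\,\omega_{\text{wheel}},
$$
where $r$ is the wheel radius, $m$ the approximate mass being accelerated, $N$ the gearbox ratio, $R_a$ the armature resistance, and $K_t \approx K_e$ the torque and back-EMF constants from the motor datasheet. The first relation ignores wheel inertia and the pendulum's reaction torque, which is why many builds instead identify the voltage-to-acceleration gain experimentally: command a voltage step, log the encoder, and fit the response.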
I need a single-board computer like a Raspberry Pi for a vending machine (I want to replace the original controller). This is a list of some requirements:
1) It should have pins to connect to the MDB protocol and other things through GPIO.
2) Good performance. There will be a display with a browser showing a running Rails application. I've tried a Raspberry Pi B+, but it's too slow (it can't even run a browser at laptop-like speed). So I want to choose a more powerful system like an ODROID, Wandboard, etc.
3) Custom video output. Sometimes I need to display Full HD (1920x1080), sometimes I need to show 768x1024 (yes, the computer should simply rotate the video output).
4) I don't want to connect the microcomputer to the display directly through HDMI, DVI or something like that. (This is not required, but very desirable.)
Please help me choose. Currently I am trying to choose between ODROID, Wandboard and PandaBoard. Are there any other computers? Which version of the computer is advisable?
There is always a way to do this using an Arduino, Raspberry Pi, etc. However, in many forum discussions I've come across cases where the whole 'logic' can be uploaded to a $0.50 chip instead of a $50 part, which is a drastic change. This draws the line between a one-time thing that you made as a hobby and something you can sell. So basically, I want an LED to be brightest at loud sound and almost off in silence, or, with a button, to switch to 100% on all the time.
I am a total newbie in robotics, so please bear with me. I have a school project where my team has to design a robot that is capable of picking up 3 golf balls of different sizes at predefined locations. Then it has to drop these balls into their respective holes. We are using an Arduino chip in our robot. I thought I could perhaps define a path for the robot, an invisible virtual path, you might call it. So, imagining the platform as a Cartesian plane, can I tell the robot to go where I want it to go? For example, go to (5,12)? Or do I need some sort of sensors so the robot figures it out by itself? Thanks for your time!
I am attempting to build a Raspberry Pi based quadcopter. So far I have succeeded in interfacing with all the hardware, and I have written a PID controller that is fairly stable at low throttle. The problem is that at higher throttle the quadcopter starts thrashing and jerking. I have not even been able to get it off the ground yet, all my testing has been done on a test bench. I have ruled out bad sensors by testing each sensor individually, and they seem to be giving correct values. Is this a problem with my code, or perhaps a bad motor? Any suggestions are greatly appreciated. Here is my code so far: QuadServer.java: package com.zachary.quadserver; import java.net.*; import java.io.*; import java.util.*; import se.hirt.pi.adafruit.pwm.PWMDevice; import se.hirt.pi.adafruit.pwm.PWMDevice.PWMChannel; public class QuadServer { private static Sensor sensor = new Sensor(); private final static int FREQUENCY = 490; private static double PX = 0; private static double PY = 0; private static double IX = 0; private static double IY = 0; private static double DX = 0; private static double DY = 0; private static double kP = 1.3; private static double kI = 2; private static double kD = 0; private static long time = System.currentTimeMillis(); private static double last_errorX = 0; private static double last_errorY = 0; private static double outputX; private static double outputY; private static int val[] = new int[4]; private static int throttle; static double setpointX = 0; static double setpointY = 0; static long receivedTime = System.currentTimeMillis(); public static void main(String[] args) throws IOException, NullPointerException { PWMDevice device = new PWMDevice(); device.setPWMFreqency(FREQUENCY); PWMChannel BR = device.getChannel(12); PWMChannel TR = device.getChannel(13); PWMChannel TL = device.getChannel(14); PWMChannel BL = device.getChannel(15); DatagramSocket serverSocket = new DatagramSocket(8080); Thread read = new Thread(){ public void run(){ while(true) { try { byte receiveData[] = new byte[1024]; DatagramPacket receivePacket = new DatagramPacket(receiveData, receiveData.length); serverSocket.receive(receivePacket); String message = new String(receivePacket.getData()); throttle = (int)(Integer.parseInt((message.split("\\s+")[4]))*12.96)+733; setpointX = Integer.parseInt((message.split("\\s+")[3]))-50; setpointY = Integer.parseInt((message.split("\\s+")[3]))-50; receivedTime = System.currentTimeMillis(); } catch (IOException e) { e.printStackTrace(); } } } }; read.start(); while(true) { Arrays.fill(val, calculatePulseWidth((double)throttle/1000, FREQUENCY)); double errorX = -sensor.readGyro(0)-setpointX; double errorY = sensor.readGyro(1)-setpointY; double dt = (double)(System.currentTimeMillis()-time)/1000; double accelX = sensor.readAccel(0); double accelY = sensor.readAccel(1); double accelZ = sensor.readAccel(2); double hypotX = Math.sqrt(Math.pow(accelX, 2)+Math.pow(accelZ, 2)); double hypotY = Math.sqrt(Math.pow(accelY, 2)+Math.pow(accelZ, 2)); double accelAngleX = Math.toDegrees(Math.asin(accelY/hypotY)); double accelAngleY = Math.toDegrees(Math.asin(accelX/hypotX)); if(dt > 0.01) { PX = errorX; PY = errorY; IX += errorX*dt; IY += errorY*dt; IX = 0.95*IX+0.05*accelAngleX; IY = 0.95*IY+0.05*accelAngleY; DX = (errorX - last_errorX)/dt; DY = (errorY - last_errorY)/dt; outputX = kP*PX+kI*IX+kD*DX; outputY = kP*PY+kI*IY+kD*DY; time = System.currentTimeMillis(); } System.out.println(setpointX); add(-outputX+outputY, 0); add(-outputX-outputY, 1); add(outputX-outputY, 2); 
add(outputX+outputY, 3); //System.out.println(val[0]+", "+val[1]+", "+val[2]+", "+val[3]); if(System.currentTimeMillis()-receivedTime < 1000) { BR.setPWM(0, val[0]); TR.setPWM(0, val[1]); TL.setPWM(0, val[2]); BL.setPWM(0, val[3]); } else { BR.setPWM(0, 1471); TR.setPWM(0, 1471); TL.setPWM(0, 1471); BL.setPWM(0, 1471); } } } private static void add(double value, int i) { value = calculatePulseWidth(value/1000, FREQUENCY); if(val[i]+value > 1471 && val[i]+value < 4071) { val[i] += value; }else if(val[i]+value < 1471) { //System.out.println("low"); val[i] = 1471; }else if(val[i]+value > 4071) { //System.out.println("low"); val[i] = 4071; } } private static int calculatePulseWidth(double millis, int frequency) { return (int) (Math.round(4096 * millis * frequency/1000)); } } Sensor.java: package com.zachary.quadserver; import com.pi4j.io.gpio.GpioController; import com.pi4j.io.gpio.GpioFactory; import com.pi4j.io.gpio.GpioPinDigitalOutput; import com.pi4j.io.gpio.PinState; import com.pi4j.io.gpio.RaspiPin; import com.pi4j.io.i2c.*; import com.pi4j.io.gpio.GpioController; import com.pi4j.io.gpio.GpioFactory; import com.pi4j.io.gpio.GpioPinDigitalOutput; import com.pi4j.io.gpio.PinState; import com.pi4j.io.gpio.RaspiPin; import com.pi4j.io.i2c.*; import java.net.*; import java.io.*; public class Sensor { static I2CDevice sensor; static I2CBus bus; static byte[] accelData, gyroData; static long accelCalib[] = new long[3]; static long gyroCalib[] = new long[3]; static double gyroX = 0; static double gyroY = 0; static double gyroZ = 0; static double accelX; static double accelY; static double accelZ; static double angleX; static double angleY; static double angleZ; public Sensor() { //System.out.println("Hello, Raspberry Pi!"); try { bus = I2CFactory.getInstance(I2CBus.BUS_1); sensor = bus.getDevice(0x68); sensor.write(0x6B, (byte) 0x0); sensor.write(0x6C, (byte) 0x0); System.out.println("Calibrating..."); calibrate(); Thread sensors = new Thread(){ public void run(){ try { readSensors(); } catch (IOException e) { System.out.println(e.getMessage()); } } }; sensors.start(); } catch (IOException e) { System.out.println(e.getMessage()); } } private static void readSensors() throws IOException { long time = System.currentTimeMillis(); long sendTime = System.currentTimeMillis(); while (true) { accelData = new byte[6]; gyroData = new byte[6]; int r = sensor.read(0x3B, accelData, 0, 6); accelX = (((accelData[0] << 8)+accelData[1]-accelCalib[0])/16384.0)*9.8; accelY = (((accelData[2] << 8)+accelData[3]-accelCalib[1])/16384.0)*9.8; accelZ = ((((accelData[4] << 8)+accelData[5]-accelCalib[2])/16384.0)*9.8)+9.8; accelZ = 9.8-Math.abs(accelZ-9.8); double hypotX = Math.sqrt(Math.pow(accelX, 2)+Math.pow(accelZ, 2)); double hypotY = Math.sqrt(Math.pow(accelY, 2)+Math.pow(accelZ, 2)); double accelAngleX = Math.toDegrees(Math.asin(accelY/hypotY)); double accelAngleY = Math.toDegrees(Math.asin(accelX/hypotX)); //System.out.println((int)gyroX+", "+(int)gyroY); //System.out.println("accelX: " + accelX+" accelY: " + accelY+" accelZ: " + accelZ); r = sensor.read(0x43, gyroData, 0, 6); if(System.currentTimeMillis()-time >= 5) { gyroX = (((gyroData[0] << 8)+gyroData[1]-gyroCalib[0])/131.0); gyroY = (((gyroData[2] << 8)+gyroData[3]-gyroCalib[1])/131.0); gyroZ = (((gyroData[4] << 8)+gyroData[5]-gyroCalib[2])/131.0); angleX += gyroX*(System.currentTimeMillis()-time)/1000; angleY += gyroY*(System.currentTimeMillis()-time)/1000; angleZ += gyroZ; angleX = 0.95*angleX + 0.05*accelAngleX; angleY = 0.95*angleY + 0.05*accelAngleY; 
time = System.currentTimeMillis(); } //System.out.println((int)angleX+", "+(int)angleY); //System.out.println((int)accelAngleX+", "+(int)accelAngleY); } } public static void calibrate() throws IOException { int i; for(i = 0; i < 3000; i++) { accelData = new byte[6]; gyroData = new byte[6]; int r = sensor.read(0x3B, accelData, 0, 6); accelCalib[0] += (accelData[0] << 8)+accelData[1]; accelCalib[1] += (accelData[2] << 8)+accelData[3]; accelCalib[2] += (accelData[4] << 8)+accelData[5]; r = sensor.read(0x43, gyroData, 0, 6); gyroCalib[0] += (gyroData[0] << 8)+gyroData[1]; gyroCalib[1] += (gyroData[2] << 8)+gyroData[3]; gyroCalib[2] += (gyroData[4] << 8)+gyroData[5]; try { Thread.sleep(1); } catch (Exception e){ e.printStackTrace(); } } gyroCalib[0] /= i; gyroCalib[1] /= i; gyroCalib[2] /= i; accelCalib[0] /= i; accelCalib[1] /= i; accelCalib[2] /= i; System.out.println(gyroCalib[0]+", "+gyroCalib[1]+", "+gyroCalib[2]); } public double readAngle(int axis) { switch (axis) { case 0: return angleX; case 1: return angleY; case 2: return angleZ; } return 0; } public double readGyro(int axis) { switch (axis) { case 0: return gyroX; case 1: return gyroY; case 2: return gyroZ; } return 0; } public double readAccel(int axis) { switch (axis) { case 0: return accelX; case 1: return accelY; case 2: return accelZ; } return 0; } } Edit: I have re-written my code in C++ to see if it will run faster but it's still running at about the same speed(about 15 ms per cycle or about 66 Hz). This is my new code in C++: #include <wiringPi.h> #include <wiringPiI2C.h> #include <sys/socket.h> #include <netinet/in.h> #include <string.h> #include <string> #include <iostream> #include <unistd.h> #include <boost/thread.hpp> #include <time.h> #include <cmath> #define axisX 0 #define axisY 1 #define axisZ 2 #define kP 20 #define kI 0 #define kD 0 #define FREQUENCY 490 #define MODE1 0x00 #define MODE2 0x01 #define SUBADR1 0x02 #define SUBADR2 0x03 #define SUBADR13 0x04 #define PRESCALE 0xFE #define LED0_ON_L 0x06 #define LED0_ON_H 0x07 #define LED0_OFF_L 0x08 #define LED0_OFF_H 0x09 #define ALL_LED_ON_L 0xFA #define ALL_LED_ON_H 0xFB #define ALL_LED_OFF_L 0xFC #define ALL_LED_OFF_H 0xFD // Bits #define RESTART 0x80 #define SLEEP 0x10 #define ALLCALL 0x01 #define INVRT 0x10 #define OUTDRV 0x04 #define BILLION 1000000000L using namespace std; double accelCalX = 0; double accelCalY = 0; double accelCalZ = 0; double gyroCalX = 0; double gyroCalY = 0; double gyroCalZ = 0; double PX; double PY; double IX = 0; double IY = 0; double DX; double DY; double lastErrorX; double lastErrorY; int throttle = 1471; int sensor = wiringPiI2CSetup(0x68); int pwm = wiringPiI2CSetup(0x40); array<int,4> motorVal; struct timespec now, then; int toSigned(int unsignedVal) { int signedVal = unsignedVal; if(unsignedVal > 32768) { signedVal = -(32768-(unsignedVal-32768)); } return signedVal; } double getAccel(int axis) { double X = (toSigned((wiringPiI2CReadReg8(sensor, 0x3B) << 8)+wiringPiI2CReadReg8(sensor, 0x3C)))/1671.8; double Y = (toSigned((wiringPiI2CReadReg8(sensor, 0x3D) << 8)+wiringPiI2CReadReg8(sensor, 0x3E)))/1671.8; double Z = (toSigned((wiringPiI2CReadReg8(sensor, 0x3F) << 8)+wiringPiI2CReadReg8(sensor, 0x40)))/1671.8; X -= accelCalX; Y -= accelCalY; Z -= accelCalZ; Z = 9.8-abs(Z-9.8); switch(axis) { case axisX: return X; case axisY: return Y; case axisZ: return Z; } } double getGyro(int axis) { double X = (toSigned((wiringPiI2CReadReg8(sensor, 0x43) << 8)+wiringPiI2CReadReg8(sensor, 0x44)))/1671.8; double Y = 
(toSigned((wiringPiI2CReadReg8(sensor, 0x45) << 8)+wiringPiI2CReadReg8(sensor, 0x46)))/1671.8; double Z = (toSigned((wiringPiI2CReadReg8(sensor, 0x47) << 8)+wiringPiI2CReadReg8(sensor, 0x48)))/1671.8; X -= gyroCalX; Y -= gyroCalY; Z -= gyroCalZ; switch(axis) { case axisX: return X; case axisY: return Y; case axisZ: return Z; } } void calibrate() { int i; for(i = 0; i < 1500; i++) { accelCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x3B) << 8)+wiringPiI2CReadReg8(sensor, 0x3C)))/1671.8; accelCalY += (toSigned((wiringPiI2CReadReg8(sensor, 0x3D) << 8)+wiringPiI2CReadReg8(sensor, 0x3E)))/1671.8; accelCalZ += (toSigned((wiringPiI2CReadReg8(sensor, 0x3F) << 8)+wiringPiI2CReadReg8(sensor, 0x40)))/1671.8; gyroCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x43) << 8)+wiringPiI2CReadReg8(sensor, 0x44)))/1671.8; gyroCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x45) << 8)+wiringPiI2CReadReg8(sensor, 0x46)))/1671.8; gyroCalX += (toSigned((wiringPiI2CReadReg8(sensor, 0x45) << 8)+wiringPiI2CReadReg8(sensor, 0x46)))/1671.8; usleep(1000); } accelCalX /= i; accelCalY /= i; accelCalZ /= i; accelCalZ -= 9.8; gyroCalX /= i; gyroCalY /= i; gyroCalZ /= i; cout << accelCalX << " " << accelCalY << " " << accelCalZ << "\n"; } int calculatePulseWidth(double millis, int frequency) { return (int)(floor(4096 * millis * frequency/1000)); } void add(double value, int i) { value = calculatePulseWidth(value/1000, FREQUENCY); if(motorVal[i]+value > 1471 && motorVal[i]+value < 4071) { motorVal[i] += value; }else if(motorVal[i]+value < 1471) { //System.out.println("low"); motorVal[i] = 1471; }else if(motorVal[i]+value > 4071) { //System.out.println("low"); motorVal[i] = 4071; } } void getThrottle() { int sockfd,n; struct sockaddr_in servaddr,cliaddr; socklen_t len; char mesg[1000]; sockfd=socket(AF_INET,SOCK_DGRAM,0); bzero(&servaddr,sizeof(servaddr)); servaddr.sin_family = AF_INET; servaddr.sin_addr.s_addr = htonl(INADDR_ANY); servaddr.sin_port = htons(8080); bind(sockfd,(struct sockaddr *)&servaddr,sizeof(servaddr)); while(true) { len = sizeof(cliaddr); n = recvfrom(sockfd,mesg,1000,0,(struct sockaddr *)&cliaddr,&len); mesg[n] = 0; string message(mesg); string values[5]; int valIndex = 0; int lastIndex = 0; for(int i = 0; i < message.length(); i++) { if(message[i] == ' ') { values[valIndex] = message.substr(lastIndex+1, i); lastIndex = i; valIndex++; } } values[valIndex] = message.substr(lastIndex+1, message.length()); throttle = calculatePulseWidth(((stoi(values[4])*12.96)+733)/1000, FREQUENCY); } } void setAllPWM(int on, int off) { wiringPiI2CWriteReg8(pwm, ALL_LED_ON_L, (on & 0xFF)); wiringPiI2CWriteReg8(pwm, ALL_LED_ON_H, (on >> 8)); wiringPiI2CWriteReg8(pwm, ALL_LED_OFF_L, (off & 0xFF)); wiringPiI2CWriteReg8(pwm, ALL_LED_OFF_H, (off >> 8)); } void setPWM(int on, int off, int channel) { wiringPiI2CWriteReg8(pwm, LED0_ON_L + 4 * channel, (on & 0xFF)); wiringPiI2CWriteReg8(pwm, LED0_ON_H + 4 * channel, (on >> 8)); wiringPiI2CWriteReg8(pwm, LED0_OFF_L + 4 * channel, (off & 0xFF)); wiringPiI2CWriteReg8(pwm, LED0_OFF_H + 4 * channel, (off >> 8)); } void setPWMFrequency(double frequency) { double prescaleval = 25000000.0; prescaleval /= 4096.0; prescaleval /= frequency; prescaleval -= 1.0; double prescale = floor(prescaleval + 0.5); int oldmode = wiringPiI2CReadReg8(pwm, MODE1); int newmode = (oldmode & 0x7F) | 0x10; wiringPiI2CWriteReg8(pwm, MODE1, newmode); wiringPiI2CWriteReg8(pwm, PRESCALE, (floor(prescale))); wiringPiI2CWriteReg8(pwm, MODE1, oldmode); usleep(50000); wiringPiI2CWriteReg8(pwm, MODE1, (oldmode | 
0x80)); } void initSensor() { wiringPiI2CWriteReg8(sensor, 0x6B, 0x0); wiringPiI2CWriteReg8(sensor, 0x6C, 0x0); } void initPWM() { setAllPWM(0, 0); wiringPiI2CWriteReg8(pwm, MODE2, OUTDRV); wiringPiI2CWriteReg8(pwm, MODE1, ALLCALL); usleep(50000); int mode1 = wiringPiI2CReadReg8(pwm, MODE1); mode1 = mode1 & ~SLEEP; wiringPiI2CWriteReg8(pwm, MODE1, mode1); usleep(50000); setPWMFrequency(FREQUENCY); } double millis(timespec time) { return (time.tv_sec*1000)+(time.tv_nsec/1.0e6); } double intpow( double base, int exponent ) { int i; double out = base; for( i=1 ; i < exponent ; i++ ) { out *= base; } return out; } int main (void) { initSensor(); initPWM(); cout << "Calibrating..." << "\n"; calibrate(); boost::thread server(getThrottle); clock_gettime(CLOCK_MONOTONIC, &then); while(true) { motorVal.fill(throttle); clock_gettime(CLOCK_MONOTONIC, &now); double dt = (millis(now)-millis(then))/1000; then = now; double accelX = getAccel(0); double accelY = getAccel(1); double accelZ = getAccel(2); double hypotX = sqrt(intpow(accelX, 2)+intpow(accelZ, 2)); double hypotY = sqrt(intpow(accelY, 2)+intpow(accelZ, 2)); double accelAngleX = (180/3.14)*(asin(accelY/hypotY)); double accelAngleY = (180/3.14)*(asin(accelX/hypotX)); double errorX = -getGyro(0); double errorY = getGyro(1); PX = errorX; PY = errorY; IX += errorX*dt; IY += errorY*dt; IX = 0.95*IX+0.05*accelAngleX; IY = 0.95*IY+0.05*accelAngleY; DX = (errorX-lastErrorX)*dt; DY = (errorY-lastErrorY)*dt; lastErrorX = errorX; lastErrorY = errorY; double outputX = kP*PX+kI*IX+kD*DX; double outputY = kP*PY+kI*IY+kD*DY; add(outputY, 0);//-outputX+ add(outputY, 1);//-outputX- add(outputY, 2);//outputX- add(outputY, 3);//outputX+ setPWM(0, motorVal[0], 12); setPWM(0, motorVal[1], 13); setPWM(0, motorVal[2], 14); setPWM(0, motorVal[3], 15); } } In addition two of the motors seem like they are lagging when I turn the quadcopter fast in one direction. Also for some strange reason the quadcopter seems less responsive to P gain; I have it at 20 in the C++ version and it is working about the same as when I had it at 1.5 in the java version. Edit: After doing some more testing I have determined that reading from the MPU6050 and writing to the PCA9685 board that I am using to control the ESCs is the source of the delay. Does anybody know how to speed this up? Edit: I managed to speed up my code to about 200 Hz by changing the i2c baud rate, but the quadcopter is still thrashing. I have spent hours trying to tune the pid controller, but it doesn't seem to help at all.
My team and I have to design a robot using an Arduino. The objective of the robot is to grab golf balls from a set of golf pins at different heights and pre-defined locations. We couldn't figure out any mechanism that could collect the balls and drop them into the trailer except a robot arm. However, we don't have the experience or time to design a sophisticated system for the arm, such as recognizing where the ball is and then grabbing it accordingly. What would be a feasible option, compared to a simple, non-sophisticated robot arm? Note: the robot must be autonomous.
I'm developing a stabilisation system for an 'off-the-shelf' quadcopter using an Arduino Mega and an IMU. The IMU reads the angle of the quad, and a PID controller calculates motor commands and applies them to the motors. It works well when constrained in a test bed; however, in reality, although the quad is straight and level, it drifts to one side because of the recent motor commands correcting the pitch/yaw. Is there any way I can (without using a vision system) keep the quad in one place without drifting? I've looked into obtaining velocity by integrating the acceleration value, but it's extremely noisy and doesn't give a meaningful reading.
We have an Epilog laser cutter around here, and I was wondering if it could work as a base for a 3D printer. Here is a Dropbox photo album of the laser cutter. I am thinking I will have to get a new control system, but I am unsure whether I will be able to reuse the motor controllers or whether they are embedded in the current controller board. I am also unsure if it has fine enough control on the Z axis, but if not, that can be modified. What would be a good print head to look at? Any other thoughts?
I'm trying to understand how to obtain the Kp, Ki, Kd values after finding a combination of K and a that works for me. Do I just expand the equation and take the coefficients?
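A worked example of the expand-and-match step, assuming the compensator that was found has the common form $C(s)=K\,\frac{(s+a)^2}{s}$ (the actual form in the question is not shown, so this is only an assumption):

$$C(s) = K\,\frac{(s+a)^2}{s} = K s + 2Ka + \frac{K a^2}{s}$$

Matching this against the parallel PID form $K_p + \frac{K_i}{s} + K_d s$ gives $K_d = K$, $K_p = 2Ka$, and $K_i = Ka^2$. So yes, for a compensator with PID structure, expanding and reading off the coefficients of $s$, $1$, and $1/s$ recovers the $K_p$, $K_i$, $K_d$ values.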
I am a newbie in robotics. As far as I know, there are generally two ways to acquire depth information of a scene. One is the stereo vision method, which uses two cameras. The other is RGB-D sensors such as the Kinect and PrimeSense. It seems that both methods are currently in use, but I do not know their advantages over each other. I think the Kinect is a perfect solution compared to stereo vision, ignoring its expense. So I have two questions here: Are there advantages of binocular methods over the Kinect besides cost? As far as I know, both methods are confined to a limited detection range. In real-world applications, we sometimes also need depth data at a distance. Is there a method to acquire or estimate depth information at a far distance (not considering laser-based detectors)? Furthermore, my application may be a small flight vehicle. Which method and equipment should I choose? Will a traditional binocular camera be too slow for my application?
I'm building a submersible ROV, so I need a way to navigate. Using a compass would help, but this brings up the question: does an electronic compass work underwater? My thought is that the water might act as a Faraday cage and therefore interfere with the magnetic field, so it might not even work. Maybe a gyroscope would be a better solution.
I'm working on the dynamics model of an RRRR articulated robot. I'm following the Euler-Lagrange approach and developing my code in an m-file in MATLAB. I'm looking for a dynamic model of this form: $$ D(q) \ddot{q} + C(q,\dot{q})\dot{q}+ g(q) = \tau $$ where $D$ and $C$ are $4 \times 4$ matrices and $g$ and $\tau$ (torque) are $4\times1$ vectors, obtained by formulating the kinetic and potential energies. The problem is that I'm getting very long equations, and the terms in the $D$ matrix are huge and nonlinear, involving sines and cosines; I'm talking about several pages per equation. After I published the code (7 pages) together with the output, I got around 45 pages in total. I searched around and found someone who faced the same problem before, but there was no helpful proposal. Any suggestions?
I'm currently calibrating the MPU6050 chip using an Arduino Mega 2560. I am using the Jeff Rowberg I2Cdev libraries. I can get it to print raw accelerometer and gyroscope values (very unstable, wildly changing values). With the digital motion processor (DMP) library, I can get it to print Euler angles, quaternions, real-world acceleration and actual acceleration, but there is no option to get gyroscope data. Can I use the DMP library to get gyro data, or is it only possible to get raw unprocessed gyro values?
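A minimal sketch using the base MPU6050 class from the same i2cdevlib: the raw gyro registers should stay readable whether or not the DMP is running, and getRotation() returns them directly (divide by 131.0 for deg/s at the default +/-250 deg/s range). Whether the MotionApps/DMP wrapper also exposes a packet-based gyro getter I am less sure about, so this falls back to the plain register read.

#include <Wire.h>
#include <I2Cdev.h>
#include <MPU6050.h>

MPU6050 mpu;
int16_t gx, gy, gz;

void setup() {
  Wire.begin();
  Serial.begin(115200);
  mpu.initialize();
}

void loop() {
  mpu.getRotation(&gx, &gy, &gz);          // raw gyro registers
  Serial.print(gx / 131.0); Serial.print("\t");   // deg/s at +/-250 deg/s FS
  Serial.print(gy / 131.0); Serial.print("\t");
  Serial.println(gz / 131.0);
  delay(10);
}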
My question is general, so please bear with me. I'm interested in buying a quadcopter and developing some functions for it, for example an Android app to control it, or object detection. So my question is: which available quadcopters have software (an SDK) that allows me to do such things, rather than being just a flying toy? P.S.: I'm asked to buy a kit within $600 and not build it myself.
I have to measure the rotation frequency of a small spinning circle. You can imagine this circle flying in the air, because it can't touch anything that is not rotating with it, so I can't use some simple trick to count the number of complete rotations in a given amount of time. I supposed my only options were an accelerometer, a gyroscope, or a magnetometer. The accelerometer can detect the centripetal acceleration, while the gyroscope and the magnetometer, with some calculation, give the frequency directly. The problem is the high rotation frequency (it can reach up to 50 Hz). Doing some simple calculation, we need a gyroscope that can measure a large angular velocity: 50 * 360°/s = 18,000°/s. The accelerometer also needs a large range of values (the radius of the circle is only 5 cm): $\omega = 2\pi \cdot 50 \approx 314\ \text{rad/s}$ and $a_c = \omega^2 R \approx 314^2 \cdot 0.05 \approx 5000\ \text{m/s}^2 \approx 500g$. Now, I have seen that there are some accelerometers and gyroscopes for industrial purposes with enough range, but my question is: how can I tell whether a magnetometer can be used in this kind of application? Considering there is no disturbing magnetic field near the circle, can a magnetometer be used to measure quick changes in inclination? In the datasheet I can read how often the sensor can communicate with my Arduino, but nothing about how fast the rotation can be. Is it the case that a magnetometer doesn't have the limits of a gyroscope or an accelerometer?
I am currently doing a project for school, and we are told that we must use a microcontroller that ends up controlling some external hardware. Now, I know the Crazyflie controls its motors, which counts as external hardware, but is it a microcontroller? My second question: I want to purchase the kit so I can assemble it myself; however, I saw that you can use an expansion board so you need not solder. Also, I plan on not buying a remote; it's possible to control the Crazyflie via my iPhone, correct? I would appreciate it if someone could answer my questions. Thank you in advance.
I've been very passionate about robots since my childhood. I'm a Java developer and I love sci-fi movies. I have a little knowledge of embedded systems and electronics. My ambition is to build a system like Jarvis (in the Iron Man movie), which is voice controlled. I would like to implement it in my house as a home automation system: it would take voice as input and take the appropriate action. Please help me to do this. Any kind of help is appreciated.
I'm trying to send some commands to the Roomba; however, it is behaving strangely. This is the manual that I'm using: http://www.irobot.com/~/media/MainSite/PDFs/About/STEM/Create/create_2_Open_Interface_Spec.pdf First of all, I have consulted several manuals; some of them say that the default baud rate is 115200, however it works for me at 57200. I'm trying to get a response from the Roomba by sending the following command from the examples: • To turn on iRobot Create's Play LED only: 128 132 139 2 0 0 However, the Roomba goes crazy and starts driving around. Any idea what's happening or what I'm missing? Or what should I do first? Thank you.
Look at this robot here: http://www.meccanotec.com/step781b.JPG I can see rods with a lot of holes and plates that also have holes. This seems to be a way to create flexibility in how the parts are connected together to build the final robot. Is there a name for this type of equipment, metal parts with holes? Where can I get it? I am aware of people using Lego blocks to create robots, but I am not sure what these metal rods and plates with holes are. Is there a free application in which I can design a mechanical structure like the one in the image, add gears, and then simulate it to see how it would rotate and bend if a real robot like that were built? What would be the quickest way to create a robot like this? Edit: Thank you, Frank and lanyusea. If one wants to do a simulation of the mechanical model, in other words play with the robot on the computer before actually building it (with all those gears in action), which software is most suitable for that purpose?
I am talking about robots like this one: http://www.meccanotec.com/step781b.JPG How would a person know what type of motor to use in the design of such a robot? What I want to understand is this: stepper motors have different step sizes and different torques, among other things. How do we determine what type of stepper motor is most suitable for a given robot?
I want to start a robotics workshop. I have recruited 10 members to work in it. Please give me some tips about running a robotics workshop.
I recently got a LYNX Biped Scout and found that it is really hard to actually come up with a working "gait" or walking pattern. Making a servo move is easy; that's not the problem. I previously built a robotic arm from scratch (I have pictures if anyone is interested), and that one can be controlled via an Arduino and a few potentiometers, as it only has 4 degrees of freedom, so it's not too hard to keep track of the different limbs. However, the Scout is a different beast entirely. It's a purpose-built kit with 12 servos, and to control them I'm using the LYNX SSC-32 Sequencer, which is distributed freely on their website. The only problem is that making them all move in sequence to produce a convincing walking motion is actually really hard. Has anyone got any gait patterns for this robot they would be happy to share?
I have an inverted inertia-wheel pendulum. I suppose that if I have a wheel with larger inertia at its top, the system would be more stable. How can I prove or disprove my conjecture?
I was studying the basics of legged locomotion and came across the unilateral force and torque constraints at the foot-ground interface. I understand the implication of the unilateral constraint on the force (the ground can only push the foot, not pull it), but I am unable to understand what the unilateral torque constraint translates into physically in this case. Can anyone clarify it?
I am using an Arduino Mega with an MPU6050. I can get gyroscope data as well as Euler angles. The problem is that my gyro data keeps going back and forth between 0 and -1 despite me not moving it at all (it stays on -1 most of the time). What can I do to filter out what I assume is noise? I am going to use the gyro data for a quadcopter PID rate controller, so I can't really have it telling me I am rotating at -1 deg/sec. That would be catastrophic for the quadcopter.
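A minimal sketch of a first-order low-pass (exponential moving average) plus a small deadband around zero; the alpha and deadband values are guesses to be tuned, not values from the question. A constant -1 deg/s reading is usually bias rather than noise, so averaging a few hundred samples at startup (with the sensor still) and subtracting that offset often helps more than filtering. Also note that for a rate controller too much smoothing adds phase lag, so keep alpha as high as the noise allows.

// Sketch: smooth the raw gyro rate and suppress small dither around zero.
float filteredRate = 0.0f;
const float alpha = 0.2f;       // 0..1, lower = smoother but more lag (tuning value)
const float deadband = 1.5f;    // deg/s (tuning value)

float filterGyro(float rawRate) {
  filteredRate = alpha * rawRate + (1.0f - alpha) * filteredRate;
  if (filteredRate > -deadband && filteredRate < deadband) {
    return 0.0f;                // treat tiny rates as "not rotating"
  }
  return filteredRate;
}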
I am building a humanoid robot with DC-motor-actuated fingers. There are 16 brushed DC motors to be position controlled with the help of Hall effect sensors embedded at the joints of each finger. I need an already-developed driver board to control these sixteen 3 W, 12 V DC motors. Also, each motor is equipped with an incremental encoder for speed control. Thank you.
Basic question concerning sensor fusion: a standard 10-DoF IMU, I mean these cheap boards for the tinkerer at home, provides 10 values: 3 accelerometer axes, 3 gyroscope axes, 3 magnetic field measurements, and 1 pressure sensor (+ 1 temperature). I know that the accelerometer data provide long-term stability but are useless in the short term, and the gyroscope is more or less the other way around. So there are tons of strategies to "marry" these values, but how does the magnetic field measurement fit into this framework? Basically, the magnetic field measurement should provide an attitude too, like the other two sensors combined. I guess this measurement alone isn't reliable either. So how do all these sensors fit together? BR
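A sketch of the common complementary-filter arrangement, just to show where the magnetometer slots in: the accelerometer gives an absolute but noisy reference for roll/pitch, the tilt-compensated magnetometer gives an absolute but noisy reference for yaw, and the gyro is integrated for short-term accuracy on all three. Axis signs depend on how the particular board is mounted, so treat the formulas as illustrative only.

#include <cmath>

struct Attitude { float roll, pitch, yaw; };  // radians

Attitude fuse(Attitude prev,
              float gx, float gy, float gz,   // gyro [rad/s]
              float ax, float ay, float az,   // accelerometer [m/s^2]
              float mx, float my, float mz,   // magnetometer [any consistent unit]
              float dt, float k = 0.98f) {
    // Absolute references from the "slow" sensors
    float rollAcc  = std::atan2(ay, az);
    float pitchAcc = std::atan2(-ax, std::sqrt(ay * ay + az * az));

    // Tilt-compensate the magnetometer, then take the heading
    float mxh = mx * std::cos(pitchAcc) + mz * std::sin(pitchAcc);
    float myh = mx * std::sin(rollAcc) * std::sin(pitchAcc)
              + my * std::cos(rollAcc)
              - mz * std::sin(rollAcc) * std::cos(pitchAcc);
    float yawMag = std::atan2(-myh, mxh);

    // Complementary blend: gyro short-term, accel/mag long-term
    Attitude out;
    out.roll  = k * (prev.roll  + gx * dt) + (1 - k) * rollAcc;
    out.pitch = k * (prev.pitch + gy * dt) + (1 - k) * pitchAcc;
    out.yaw   = k * (prev.yaw   + gz * dt) + (1 - k) * yawMag;
    return out;
}

The pressure sensor usually feeds a separate altitude filter rather than the attitude filter. Also note that the yaw blend needs angle wrap-around handling near plus/minus pi, which is omitted here for brevity.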
I emphasize that my question is about sampling, not resampling. I'm reading the Probabilistic Robotics book by Thrun et al., Chapter 4 on Non-Parametric Filters. The section on particle filters has an algorithm which states that for each particle index $m$ (see line 4): sample $x_t^{[m]} \sim p(x_t|u_t,x_{t-1}^{[m]})$ The text's explanation of this step is quoted as: Line 4. generates a hypothetical state $x_t^{[m]}$ for time t based on the particle $x_{t-1}$ and the control $u_t$. The resulting sample is indexed by $m$, indicating that it is generated from the $m$-th particle in $\chi_{t-1}$. This step involves sampling from the state transition distribution $p(x_t|u_t,x_{t-1})$. To implement this step, one needs to be able to sample from this distribution. The set of particles obtained after $M$ iterations is the filter's representation of $\bar{bel}(x_t)$. If I understand correctly, this step says that the m-th sampled particle $x_t^{[m]}$ is obtained by advancing the previous m-th particle with control command $u_t$. I assume that the motion is not deterministic, so the result of this motion is a conditional probability, based on the particle's previous state $x_{t-1}^{[m]}$ and the control $u_t$. However, I'm confused over how exactly to construct this conditional probability $p(x_t|u_t,x_{t-1}^{[m]})$. Is this information usually given? Or is it constructed from the distribution of the other particles?
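A sketch of line 4 for a planar robot with a velocity command $u_t = (v, \omega)$, following the sampling version of the velocity motion model (Thrun et al., Table 5.3). The conditional $p(x_t \mid u_t, x_{t-1}^{[m]})$ is not built from the other particles: it comes from the robot's motion model plus an assumed noise model, and "sampling from it" means applying the control to this one particle with noise drawn from that model. The alpha noise parameters are placeholders that have to be chosen or identified for the actual robot.

#include <cmath>
#include <random>

struct State { double x, y, theta; };

std::default_random_engine rng;

double sampleNormal(double variance) {
    double sd = std::sqrt(variance);
    if (sd <= 0.0) return 0.0;                 // guard against zero-variance case
    std::normal_distribution<double> d(0.0, sd);
    return d(rng);
}

// One draw from p(x_t | u_t, x_{t-1}) for particle state prev and control (v, w).
State sampleMotionModel(const State& prev, double v, double w, double dt,
                        const double a[6]) {   // a[0..5]: motion noise parameters
    double vh = v + sampleNormal(a[0] * v * v + a[1] * w * w);
    double wh = w + sampleNormal(a[2] * v * v + a[3] * w * w);
    double gh =     sampleNormal(a[4] * v * v + a[5] * w * w);

    State next;
    if (std::fabs(wh) > 1e-6) {
        next.x = prev.x - (vh / wh) * std::sin(prev.theta)
                        + (vh / wh) * std::sin(prev.theta + wh * dt);
        next.y = prev.y + (vh / wh) * std::cos(prev.theta)
                        - (vh / wh) * std::cos(prev.theta + wh * dt);
    } else {                                   // near-zero rotation: straight-line limit
        next.x = prev.x + vh * dt * std::cos(prev.theta);
        next.y = prev.y + vh * dt * std::sin(prev.theta);
    }
    next.theta = prev.theta + wh * dt + gh * dt;
    return next;
}

Running this once per particle, each with its own noise draw, produces the particle set representing $\bar{bel}(x_t)$.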
Using the SCI messages, I would like to determine the current operating mode or state of an iRobot Roomba 780. In the end, I would like to detect and separate four states: cleaning; in the docking station; returning to the docking station; error (e.g. trapped on an obstacle). What is a fast and reliable way to detect those states using SCI data? The Roomba SCI sensor packets "Remote Control Command" and "Buttons" seem to return the currently requested commands and not the currently executed ones.
Suppose I have a mechanical system which is free to move on a given rail [-5 m, 5 m], like a motorized cart. The whole system can be mathematically expressed through linear time-invariant system equations. If I need to control only the position (for example, telling the controller "move to +2.3"), I can simply design a PID controller that, given a set point, moves the cart to that position. Now I need much more: I want to control both the position and the velocity of the cart. So I need, for example, to say "move to +2.3 with a specific velocity profile". Of course the velocity is 0 at the end position. Question: how should I design such a controller? Do I specifically need a special type of controller, or do I have a wide choice? Any help, graph, link or example is really appreciated. Regards
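A minimal sketch of the structure that is typically used for this: a trajectory generator supplies a time-varying reference (position, velocity, acceleration), and the controller combines acceleration feedforward with feedback on both the position error and the velocity error. This is ordinary full-state feedback on the cart model, so pole placement or LQR can be used to pick the gains. The gains, the mass, and the force-to-input mapping below are placeholders, not values from the question.

// Sketch: track a reference trajectory instead of a fixed set point.
struct Reference { double pos, vel, acc; };   // one sample of the desired profile

double trackingControl(double measuredPos, double measuredVel,
                       const Reference& ref,
                       double kp, double kv, double mass) {
    double ePos = ref.pos - measuredPos;
    double eVel = ref.vel - measuredVel;
    // feedforward (mass * desired acceleration) + PD-style feedback on both errors
    return mass * ref.acc + kp * ePos + kv * eVel;
}

With a trapezoidal or minimum-jerk reference profile that ends at zero velocity, the same controller covers both requirements: it arrives at +2.3 and follows the commanded velocity along the way.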
Is it possible to remote control a 'robot' relative to the driver with an angle sensor (or any other sensor)? For example, if the robot starts in this position -------------- | Front | | -------- | | |________| | [robot] | Back | -------------- and the joystick is in this configuration -------------- | Forwards | | [joystick] | | Backwards | -------------- then if the robot turns around, -------------- | Back | | -------- | | |________| | [robot] | Front | -------------- pushing the controller forwards will still make the robot go forward -------------- | ^ | | [joystick] | | Backwards | -------------- -------------- | ^ | | -------- | | |________| | [robot] | Front | -------------- even though from the robot's POV, he's going backwards. Any ideas/solutions?
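A sketch of the usual trick, often called field-oriented or headless control: the robot measures its own heading relative to the driver (gyro or compass yaw, zeroed when robot and driver initially face the same way) and rotates the joystick vector by minus that heading before handing it to the drive code. The function below is an illustration only, not code for any particular drive base; a holonomic base can use the rotated x/y directly, while a differential drive at least needs forward/turn flipped when the robot faces the driver.

#include <cmath>

// Rotate a driver-relative joystick command into the robot's own frame.
void driverRelative(double joyX, double joyY, double robotHeadingRad,
                    double& robotX, double& robotY) {
    double c = std::cos(-robotHeadingRad);
    double s = std::sin(-robotHeadingRad);
    robotX = joyX * c - joyY * s;
    robotY = joyX * s + joyY * c;
}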
I was making sure the circuit for an airboat I am working on is safe. Checking a motor, it has a 35 A max current, running at 11.1 V at 1000 (however, my ESC is rated for 30 A continuous, 40 A burst). The recommended tested prop for this motor is an 11x9 3-blade and runs the motor at 20 A. Doing some quick calculations via an online calculator, it appears to give a value of 5.4 lbs of thrust (way off from the 2.65 lbs measured, but regardless...). When I type in the prop I want (13x6), it gives a thrust value of 7.53 lbs. Now, if the motor current is proportional to the thrust, then 5.4 lbs / 20 A = 7.53 lbs / running amps, and therefore the prop would, in theory, draw a little less than 30 A if this is indeed the case. Thus it would be safe for my application. This would also make sense in physics terms, as power = current * voltage, which is proportional to thrust, but I just need to make sure. So does this thought process work for choosing a prop? My device will be doing very short runs (less than 25 seconds), so running near the limits should be safe...
I want to design a data logger for my quadcopter using the Arduino Mega board. I want to record the roll, pitch and yaw angles each second or 5 seconds, so they can be viewed later after a flight has ended. There's just one thing I don't understand, and that's how to translate the pitch/roll/yaw angles into a pulse of a specific length that the flight controller receives. For example, when I press the control for the pitch, the transmitter sends out a pulse to the receiver of the drone and the speed of the drones' motors change accordingly for it to pitch either forward or backward. I can tap into these commands between the flight controller and the transmitter, and be able to record the length of the pulse that was sent out. However, what is the link between the pitch angle and the size of the pulse? Basically, how can I convert the pulse that was recorded by the Arduino board and convert it into the pitch angle in degrees? Generally, for the transmitter I use, a 1500us-pulse means zero pitch; from 1501-2000 means pitch forward, and from 1000-1499 means pitch backwards (of course, the actual values vary slightly, but this is just a general reference for this question). So for instance, if I sent a pulse of 1400us, how would that translate into an angle in degrees? What's the formula to convert it? I hope I'm clear, and if this question sounds stupid, please excuse me, but I haven't been able to find good information on it! Thanks!
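A sketch of the conversion under the assumption that the flight controller is in a self-level ("angle") mode, where the stick pulse is mapped linearly onto a target angle. The +/-30 degree full-stick angle below is an assumption: the real value is whatever maximum angle is configured in the flight controller, so it has to be looked up or measured for the specific setup.

// Sketch: linear rescale from pulse width [us] to commanded angle [deg].
const float PULSE_CENTER = 1500.0f;   // microseconds at stick center
const float PULSE_RANGE  = 500.0f;    // +/- microseconds from center to full stick
const float MAX_ANGLE    = 30.0f;     // degrees at full stick (assumed, check your FC)

float pulseToAngle(float pulseUs) {
  return (pulseUs - PULSE_CENTER) / PULSE_RANGE * MAX_ANGLE;
}
// e.g. pulseToAngle(1400) gives -6 degrees with these assumed numbers

One caveat: if the flight controller is in a rate (acro) mode, the stick maps to an angular rate rather than an angle, so the recorded pulse alone would not give the actual pitch angle; in that case it is better to log the flight controller's own attitude estimate or an IMU reading alongside the pulses.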
How could I tune the PIV controller shown above? I am trying to get the system to have a settling time of < 1 second, P.O. < 15%, and zero steady-state error.
I'm working on building a rover and would like some advice on selecting motors. In particular, I want to understand the difference between "precision" and planetary gear motors. My robot will weigh about 10-15 lbs, I think, and I would like it to be responsive and quick. I have two Sabertooth 2x12 motor controllers (which can supply up to 12 A). I have been looking at these motors and I am not sure which is the better choice for my application. These are the two sets of motors I am thinking about: https://www.servocity.com/html/precision_robotzone_gear_motor.html https://www.servocity.com/html/3-12v_precision_planetary_gear.html Googling does provide some info on planetary gears, but the difference between these two for my application is still unclear to me. Thanks
There are several robotics datasets for SLAM, like this one. On this webpage you can see that the depth image is scaled by a factor of 5000, so that float depth images can be stored in 16-bit PNG files: The depth images are scaled by a factor of 5000, i.e., a pixel value of 5000 in the depth image corresponds to a distance of 1 meter from the camera, 10000 to 2 meter distance, etc. A pixel value of 0 means missing value/no data. I do not understand why this value is chosen. Why not simply 1000, so that the conversion is just meters to millimeters?
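For what it's worth, my understanding (an assumption, not something stated on that page) is that the factor simply trades depth resolution against maximum range inside the 16-bit container: 5000 levels per meter gives 0.2 mm quantization and a maximum representable depth of 65535 / 5000, i.e. about 13.1 m, whereas a factor of 1000 would give plain millimeters but discard any sub-millimeter precision the registered depth values may carry. Converting back is a single division either way:

#include <cstdint>
#include <limits>

const double SCALE = 5000.0;   // levels per meter, as documented by the dataset

double rawToMeters(std::uint16_t raw) {
    if (raw == 0) {            // 0 means missing value / no data in these files
        return std::numeric_limits<double>::quiet_NaN();
    }
    return raw / SCALE;
}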
I am trying to program the iRobot Create 2 using Python. There is a script called openinterface.py. Where can I download this script?