I am interested in building a quadcopter from scratch. Because I like to err on the side of caution, I'm considering adding "safety cages" around each propeller/rotor, to prevent (or at least minimize) the chance of the spinning rotor blades coming into contact with someone. Without knowing much about the physics behind how "lift" works, I would have to imagine that cages present two main problems for rotors: they add weight to the copter, making it harder to lift the same payload; and their sheer presence/surface area makes it harder for the spinning rotor to generate lift and push away from the ground. The former problem should be obvious and self-evident. For the latter problem, what I mean by "surface area" is that I imagine that the more caging there is around a spinning rotor, the more difficult it will be for it to lift effectively. For instance, a spinning rotor might be able to generate enough power to lift, say, 2 kg. But if we were to construct an entire box (not a cage) around the rotor, with 6 sides and no openings, I would imagine its lift capability would drop to 0 kg. So what I'm interested in is a cage design that provides adequate safety but doesn't "box in" the rotor so much that it becomes ineffective or incapable of providing lift. I'm looking for that optimal trade-off between safety (boxing/caging around the spinning rotor) and lift performance. I would imagine calculating and designing this is a pretty huge undertaking with a lot of math behind it. I'm just wondering if anyone has already figured all this out, or if anyone knows of a way to model this safety-vs-lift-performance trade-off in some way.
I'm planning the design of a wrist for a humanoid robot. I would like to choose a design that is sturdy while allowing for dexterity comparable to a human wrist. One option that was presented to me was to use a Stewart platform. This setup appears to correctly recreate all possible movements of the human hand. My immediate concern is that this platform will use a total of six actuators which will require additional power and computational requirements. I don't want to commit to this design until I am certain that there isn't a better alternative. Is a Stewart platform a good choice for replicating the dexterousness of the human wrist? If not, what is a better solution?
Is it possible to apply kinematic decoupling to a 7-DOF 7R manipulator with a spherical wrist? If it is possible, can anyone suggest a reference on how to apply this approach to a redundant manipulator with a spherical wrist? If not, can anyone explain why it is not possible? I'm working with the Robotics Toolbox (MATLAB), and the numeric algorithm can find the inverse kinematics solution without a problem if I don't specify the orientation. I was thinking about solving the problem a second time, taking the spherical wrist into account. Will this approach work?
I got my hands on a few Tower Pro SG90 9G servos but cannot find their schematics or datasheet anywhere (besides that link). I have the following concerns: They look like they're rated for 4.8 V, but will they tolerate a 5 V supply? How do I determine the current they require (in amps, mA, etc.)? There are 3 wires: brown, red and yellow-orange; what does each of these do? If I had to guess, I'd say that red is power, another one is direction, and another one is the position to rotate to.
I'm aiming to control a motorized joint at a specific speed. To do this, I'm planning on attaching a rotary encoder to the joint and controlling the motor with a PID controller. With this PID controller, I need to control the joints based on their velocity. Since speed = distance / time, it would make sense to do something like this: double getCurrentSpeed() { return (currentAngle - lastAngle) / samplingRate; } However, there's an issue: the encoder doesn't provide a high enough resolution to accurately calculate the speed (the sample rate is too high). I want to have updated data every 5-15 ms (somewhere in that range, as my current motors seem to be able to respond to a change in that range). Some more information: 14-bit precision (roughly 0.0219726562 degrees per "step" of the encoder); I'd like to be able to resolve as small a speed difference as possible; the motors will be going fairly fast (120+ degrees/second at highly variable speeds and directions), so the feedback has to be accurate and not delayed at all. So, a couple of ideas: I could find encoders that I can sample at a very high rate. I was thinking about sampling the time between changes of the encoder's value, but this seems finicky and likely to be noise-prone. I could do some sort of rolling average, but that would cause the data values to "lag", because the previous values would "hold back" the output of the calculations somewhat, and this would interfere with my PID loop. A noise filter of some sort might help, although I don't know if that would work given the rapidly changing values in this application. However, none of these seem ideal. Is my only option to get a 16-bit (or higher!) encoder? Or is there another method, or combination of methods, that I could use to get the data I need?
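For illustration only, here is a minimal sketch of the "time between encoder value changes" idea mentioned above, assuming an Arduino-style environment; readEncoderCount() is a hypothetical placeholder for however the 14-bit encoder is actually read, and would have to be replaced with the real driver call:

// Hypothetical stand-in for however the 14-bit absolute encoder is read
// (SPI, PWM input, etc.); replace with the real driver call.
long readEncoderCount() {
  return 0;  // placeholder
}

const long COUNTS_PER_REV = 16384;                    // 14-bit encoder
const float DEG_PER_COUNT = 360.0f / COUNTS_PER_REV;

long lastCount = 0;
unsigned long lastChangeMicros = 0;
float speedDegPerSec = 0.0f;

// Update the speed estimate whenever the count changes, using the elapsed
// time between changes instead of a fixed sampling period.
void updateSpeed() {
  long count = readEncoderCount();
  unsigned long now = micros();
  long delta = count - lastCount;
  if (delta != 0) {
    // Handle wrap-around of the absolute count.
    if (delta > COUNTS_PER_REV / 2)  delta -= COUNTS_PER_REV;
    if (delta < -COUNTS_PER_REV / 2) delta += COUNTS_PER_REV;
    float dt = (now - lastChangeMicros) * 1e-6f;      // seconds since last change
    if (dt > 0.0f) {
      speedDegPerSec = delta * DEG_PER_COUNT / dt;
    }
    lastCount = count;
    lastChangeMicros = now;
  }
}

void setup() {
  lastCount = readEncoderCount();
  lastChangeMicros = micros();
}

void loop() {
  updateSpeed();
  // speedDegPerSec can then be sampled by the PID at the 5-15 ms control rate.
}

This is only a sketch of the period-measurement approach, not a recommendation over the other options listed in the question.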
I'm trying to make a hexapod with 18 servo motors, and I'm asking how to control them with a Raspberry Pi (I've never used one). I've seen lots of material on controlling 1 servo, but not 18 or 20. Currently I'm working with an Arduino Mega and an SSC-32 board, but I found the result too slow and jerky. In the end, I want to add a camera and process the images; I know an Arduino can't handle that processing, but can a Raspberry Pi? Thanks for any information on the subject :)
I need to get the coordinates of specific points from a 2D CAD file and transform them so that I can use them to move a robotic arm to those points. The problem is that I only get x y z coordinates, while the robotic arm needs x y z Tx Ty Tz coordinates to move to a certain position. Any suggestions? Edit: My task: I need the robotic arm to go through certain points on a PCB board and heat soldering paste. I could do it manually by setting points with the pendant, but a much easier way would be to get the coordinates of those points from the CAD file and write the code on a PC. MOVL MotionSpeedType(0 - linear mm/s, 1 - angular °/s) Speed (0.1 - 1000 mm/s or max angular speed) coordinate X Y Z Tx Ty Tz ToolNo [Type] (move the robot in Cartesian coordinates in linear motion) - this is how the code for linear motion to a certain point looks. I could only find this manual. This is the pendant manual; maybe it will be helpful. I am a second-year student in "Robotics and Mechatronics", currently on an internship at a scientific research institution. I really appreciate your help!
I have the following system: $$\dot{x} = A(t)x+B(t)u$$ $$y = x$$ $A(t)$ and $B(t)$ are actually scalar, but time-dependent. If they were constant, I could simulate the system in MATLAB using: sys = ss(A,B,C,0); lsim(sys,u,t,x0); However, it would be nice to simulate the system with a dynamic state and input matrix. The matrices are based on measurement data; this means that for each discrete time step $t_i$ I have another matrix $A(t_i)$. Any suggestions on how to do that?
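One way to picture what a time-varying simulation has to do, assuming the measured $A(t_i)$, $B(t_i)$ are held constant over each sample interval $\Delta t$ (zero-order hold), is a per-step update such as

$$x_{k+1} = x_k + \Delta t\,\bigl(A(t_k)\,x_k + B(t_k)\,u_k\bigr), \qquad y_k = x_k,$$

i.e. a simple forward-Euler step with the matrices swapped out at every sample. This is only a sketch of the idea, not a statement about any particular MATLAB function.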
Obviously, robotic circuits draw different amounts of power/current. So given the same battery, say a 9 V, connecting it to 2 different circuits will deplete it at two different rates. Robot/circuit #1 might drain the battery in 5 minutes; robot/circuit #2 might drain the battery in 20 minutes. What rating do batteries have that allows us to figure out how long they will power a circuit for? Bonus points: does this same rating hold for solar panels and, indeed, all power supplies (not just batteries)?
Say I have this solar panel that outputs 6 V at 330 mA, or ~1.98 W. If I connect that to an Arduino, which expects a 5 V supply at (roughly) 50 mA, then the Arduino as a whole requires 5 V * 0.05 A = 0.25 W to power it. To me, if I understand this correctly, then in perfect weather/sunlight the solar panel will power the Arduino all day long, no problem. Now let's say we wire up 4 motors to the Arduino, each of which draws 0.25 W (250 mW). Now the Arduino + 4 motors are drawing ~1.25 W. But since the panel is still outputting 1.98 W, I would think that (again, under perfect sunlight) the panel would power the Arduino and motors all day long, no problem. Now we add 4 more motors to the Arduino circuit, for a total of 8 motors. The circuit is now drawing 1.25 W + 1 W = 2.25 W. I would expect the solar panel to no longer be capable of powering the circuit, at least not properly. My first concern here is: am I understanding these 3 scenarios correctly? If not, where is my understanding going awry? Assuming I'm more or less on track, my next question is: can solar panels be "daisy chained" together to increase total power output? In the third case above, is there a way to add a second solar panel into the mix, effectively making the two panels output 1.98 W * 2 = 3.96 W, which would then make them capable of powering the Arduino and its 8 motors (yet again, assuming perfect weather/sunlight conditions)?
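For reference, the comparison being made above is a simple power-budget check (ignoring regulator/converter losses, which in practice reduce the usable panel output):

$$P_{\text{panel}} \;\ge\; P_{\text{Arduino}} + \sum_i P_{\text{motor},i}, \qquad \text{e.g. } 1.98\ \text{W} \;\ge\; 0.25\ \text{W} + 8 \times 0.25\ \text{W} = 2.25\ \text{W} \ \text{(false)},$$

which matches the conclusion above that a single panel is not enough for the 8-motor case.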
I'm looking for my robotics project to draw its power from one of 3 rechargeable batteries; basically whichever has the most "juice" in it. From the initial research I've already done, I believe I could connect each rechargeable battery (probably LiPo) to a diode, and then wire each of the 3 diodes in series. However, being so new to robotics/electronics, I guess I wanted to bounce this off the community as a sanity check, or to see if there is a better way of achieving this. Again, what I am looking for is a way for the circuit to automagically detect that battery #1 has more power than battery #2, and so it "decides" to draw power from #1. The instant #1 is depleted or deemed "less powerful" than #2, the #2 battery takes over. Thoughts/criticisms?
I have a calibrated stereo camera system mounted in a passenger car, which means I am able to retrieve a point cloud from my stereo images. However, I need to find out how well the camera is aligned with the vehicle - read: whether the camera is facing perfectly forwards or not. I guess it will never face perfectly forwards, so I need to get the angle (or rather the 3D vector) between "perfect forwards" and the actual camera pose. What came to my mind is to drive the vehicle as close to perfectly straight forward as possible and use stereo visual odometry to detect the angle of the vehicle's movement as seen by the camera (which is the vector I am looking for). The LIBVISO library for visual odometry can output a 3D vector of the movement change from one stereo frame to another, which could be used to obtain the needed vector. The only problem may be actually being able to drive perfectly straight with a car. Maybe an RTK GPS could be used to check for this, or for correction. Does anyone have a suggestion on how to proceed? The stereo camera I use consists of 2 separate Point Grey USB cameras. Each camera is mounted on the windshield inside the car with a mount like this one. The cameras were calibrated after mounting. The stereo baseline (distance between the cameras) is about 50 cm.
I would like to build a mechanical module that acts like a spring with electronically controllable stiffness (spring rate). For instance, let's imagine a solid metallic cube, 0.5 m on each side. On the top side of the cube there is a chair sitting on top of a solid mechanical spring. When you sit on the chair, it goes down proportionally to your weight and inversely proportionally to the spring's rate. What I want is for this spring rate to be electronically adjustable in real time; for instance, a microcontroller system might increase the spring rate when it detects a larger weight. I'm using this example to best describe what I want to achieve, because I'm not a robotics specialist and I don't know the proper terms. Is there already an electro-mechanical module like the one I'm describing? (Obviously, never mind the cube and the chair; it's the spring I'm interested in.)
I would like to build a small two-wheeled robot similar to the one shown here. In order to keep the robot small, I intend to use two coreless micro motors like the one shown below. The power source would be 2 AAA or AA batteries, in order to reach 3 V. These batteries would represent the bulk of the weight of the robot; the rest of the robot would be virtually weightless. The specifications of one such motor are: motor diameter: 6 mm; motor length: 12 mm; output shaft: 0.8 mm; output shaft length: 4 mm; voltage: 3 V; current: 17 mA (stall 120 mA); speed: 22000 RPM. My question is whether small DC motors of this type have enough torque to even make the robot start moving. I have been unable to find torque information for this kind of motor, and I suspect the weight of the robot could be too much for them to handle. Do you know the typical torque of such a motor? Is there another type of (cheap) motor more appropriate for this project?
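As a rough feel for the numbers involved (all figures here are assumptions for illustration, not data from the motor or the robot): if the robot had mass $m \approx 0.06\ \text{kg}$ (two AAA cells plus structure), wheel radius $r \approx 0.015\ \text{m}$, and a target acceleration of $a \approx 0.5\ \text{m/s}^2$, the torque required at the wheels would be roughly

$$\tau_{\text{wheels}} \approx m\,a\,r \approx 0.06 \times 0.5 \times 0.015 \approx 4.5\times10^{-4}\ \text{N·m} = 0.45\ \text{mN·m},$$

split between the two motors and not counting rolling resistance or any gear reduction. Whether a 6 mm coreless motor can supply its share of that directly at the shaft is exactly the comparison the missing torque spec would answer.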
I am combining two position measurements of a ball from two sensors in real time to obtain one triangulated position in x, y, z coordinates. As the data exchange of the measurements carries some latency, the data has to be extrapolated to be able to obtain the current position. Due to this extrapolation, an error appears in the triangulated data. I know that when the ball is in the air, the velocity of the ball should be constant in the x and y directions, and the velocity in the z direction should decay with g. The velocities in x and y, however, oscillate as a function of time around a mean value, which is the actual x (respectively y) velocity. The same goes for when I compute the acceleration in the z direction: it oscillates as a function of time around g. Given that I know how the ball should behave, i.e. that vx and vy should be constant and that the acceleration in the z direction should be g, how can I impose these conditions to better estimate the triangulated position?
Can I use a bipolar stepper motor driver to drive a unipolar motor in a unipolar configuration?
I am trying to control the velocity + position of a linear actuator. At the moment I am able to control the position or the velocity, but I'm trying to control both. What the control has to do: drive the linear actuator to a position, e.g. from 0 to 100 cm, with a constant velocity of 1 cm/s. I control the actuator using a PWM signal, and I measure the velocity and position using a position sensor on the shaft. What kind of control is preferred, PID in cascade? If so, what would the code look like? Would any other kind of control work better?
EDIT: A more descriptive picture. I want a velocity-controlled position controller; hopefully this makes it clear.
EDIT: My first try is with a trapezoidal profile. Maybe there is an easy way, without too much computing power, to change it into an S-curve; then the acceleration/jerk would be a lot smoother. I let the microcontroller calculate 3 different formulas, and afterwards it evaluates them by loop iteration. This way I can use one PID for the position. The parameters in the following code are fictional:
AccelerationLoops: 5 //[Loops]
Velocity: 100 //[mm/s]
DeltaPosition: 7.5 //[mm]
Looptime: 5 //[ms]
Loopfactor: 1000 / Looptime //[-]
VelocityLoop: Velocity / Loopfactor //[mm/loop]
VelocityFactor: VelocityLoop * .5 / AccelerationLoops //[mm/loop] (.5 found by integration)
Loops: DeltaPosition / VelocityLoop / AccelerationLoops //[Loops]
Formulas (x is the loop index):
Formula1: VelocityFactor * x^2 (acceleration phase)
LastF1: last value of Formula1, i.e. Formula1(5)
Formula2: VelocityLoop * x - LastF1 (constant-velocity phase)
Formula3: DeltaPosition - VelocityFactor * (Loops - x)^2 (deceleration phase)
Using the parameters above, it generates the following setpoints (loop index: position in mm):
0: 0.00, 1: 0.05, 2: 0.20, 3: 0.45, 4: 0.80, 5: 1.25, 6: 1.75, 7: 2.25, 8: 2.75, 9: 3.25, 10: 3.75, 11: 4.25, 12: 4.75, 13: 5.25, 14: 5.75, 15: 6.25, 16: 6.70, 17: 7.05, 18: 7.30, 19: 7.45, 20: 7.50
A big problem with the code above is that the number of acceleration loops is a constant; it cannot be changed unless you already know the total number of loops the move will take. I will be using two separate Arduinos, connected via a CAN bus. They won't communicate through it unless the load becomes too high, which makes master/slave impossible. The system also has to be modular: adding another actuator to the circuit shouldn't be a problem. The actuator is speed-controlled by a PWM signal. The linear sensor delivers a 0-10 V signal, which I reduce to 0-5 V with a simple voltage divider. The loop time will be around 5 to 10 ms, depending on the maximum loop time. The Arduino has a 10-bit (0-1023) ADC, but by using oversampling I will probably try to increase it to 12-bit; to keep the reading speed up I will decrease the prescaler of the ADC. The PWM output is 8-bit (0-255), and I am trying to find a way to increase that as well, because I think 255 steps are too few for my application. Because the Arduino has limited internal memory, precalculating all the positions is impossible. Thank you all for the help so far!
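For illustration, here is a minimal sketch (plain C++, with made-up names and the same fictional parameters as above) of evaluating the three trapezoidal phases on the fly at each loop index, rather than precalculating and storing every position:

#include <math.h>

// Trapezoidal position setpoint, evaluated per loop iteration.
// All names/values are illustrative, mirroring the parameters above.
const float velocity      = 100.0f;   // mm/s
const float loopTimeMs    = 5.0f;     // ms
const float deltaPosition = 7.5f;     // mm
const int   accelLoops    = 5;        // loops spent accelerating/decelerating

const float velocityLoop   = velocity * loopTimeMs / 1000.0f;   // mm per loop
const float velocityFactor = velocityLoop * 0.5f / accelLoops;  // mm per loop^2
// Total loops: cruise distance plus the accel/decel ramps.
const int   totalLoops     = (int)roundf(deltaPosition / velocityLoop) + accelLoops;

// Returns the position setpoint (mm) for loop index x, 0 <= x <= totalLoops.
float setpoint(int x) {
  if (x <= accelLoops) {
    return velocityFactor * x * x;                                      // accelerate
  } else if (x <= totalLoops - accelLoops) {
    return velocityLoop * x - velocityFactor * accelLoops * accelLoops; // cruise
  } else {
    int remaining = totalLoops - x;
    return deltaPosition - velocityFactor * remaining * remaining;      // decelerate
  }
}

With the values above this reproduces the 0.00 ... 7.50 mm setpoint sequence listed in the question; it is only a sketch of the profile generation, not of the PID itself.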
I'm working on an robot that would be able to navigate through a maze, avoid obstacles and identify some of the objects (Boxes in which it has to pot the balls) in it. I have a monochromatic bitmap of the maze, that is supposed to be used in the robot navigation. Up till now, I have converted/read the bitmap image of the maze into a 2D array of bits. Right now I am writing a code that should convert the 2D array (that represents the maze) into a connectivity map so that I could apply a path planning algorithm on it. Mr. @Chuck has helped me by providing a code in MATLAB. i have converted that code into C++, however the code isn't providing the right output. Kindly see the code and tell me what I am doing wrong. I am sharing the link to the 2D array that has been made, the MATLAB code, and my code in C++ to convert the array into a connectivity map. Link to the 2D array:- https://drive.google.com/file/d/0BwUKS98DxycUZDZwTVYzY0lueFU/view?usp=sharing MATLAB CODE:- Map = load(map.mat); nRows = size(Map,1); nCols = size(Map,2); mapSize = size(Map); N = numel(Map); Digraph = zeros(N, N); for i = 1:nRows for j = 1:nCols currentPos = sub2ind(mapSize,i,j); % left neighbor, if it exists if (j-1)> 0 destPos = sub2ind (mapSize,i,j-1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % right neighbor, if it exists if (j+1)<=nCols destPos = sub2ind (mapSize,i,j+1); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % top neighbor, if it exists if (i-1)> 0 destPos = sub2ind (mapSize,i-1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end % bottom neighbor, if it exists if (i+1)<=nRows destPos = sub2ind (mapSize,i+1,j); Digraph(currentPos,destPos) = Map(currentPos)*Map(destPos); end end end Code in C++:- int **digraph = NULL; digraph = new int *[6144]; for (int i = 0; i < 6144; i++) { digraph[i] = new int[6144]; } for (j = 0; j < 96; j++) { for (z = 0; z < 64; z++) { currentPos = sub2ind[j][z]; digraph[currentPos][currentPos] = 0; //------NEW ADDITION----------- if ((z - 1) >= 0) { destPos = sub2ind[j][z - 1]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j][z - 1]; } if ((z + 1) < 64) { destPos = sub2ind[j][z + 1]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j][z + 1]; } if ((j - 1) >= 0) { destPos = sub2ind[j - 1][z]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j - 1][z]; } if ((j + 1) < 96) { destPos = sub2ind[j + 1][z]; digraph[currentPos][destPos] = bitarray[j][z] * bitarray[j + 1][z]; } } } ofstream connectivityMap; connectivityMap.open("diGraph.txt"); for (int l = 0; j < 100; l++) // printing only 100 elements { for (int k = 0; k < 100; k++) { connectivityMap << digraph[l][k] << " "; } }
I want to design some circuits of my own. My area of expertise is computer science engineering. I have listed the components which are essential for the circuit. I want software that can be used to design and simulate circuits for real-time projects. Please suggest the best option among them. Thank you. @AkhilRajagopal
I have a sensor reduction model which gives me a velocity estimate of a suspension system (velocity 1). This estimated suspension velocity is used to calculate another velocity (velocity 2) via a transfer function/plant model. Can I use velocity 2 to improve my velocity estimate (velocity 1) through Kalman filtering or through some feedback system? V1 is "estimated" using these two sensors. That estimate is fed into a gerotor pump (Fs in the diagram) which pumps fluid to manipulate the damper's viscous fluid, thereby applying resistance to the forces applied to the car body. There would be no problem if I had a velocity sensor on the spring; I could measure it accurately, but now I only have an estimate, and I am trying to make that estimate better. Assume I already have a model/plant or transfer function that gives me V2 given V1.
What are these frequencies used for within drone technology, and why these particular values?
35 MHz
433 MHz
868 MHz
2.4 GHz
5.8 GHz
I am trying to understand the implementation of Extended Kalman Filter for SLAM using a single, agile RGB camera. The vector describing the camera pose is $$ \begin{pmatrix} r^W \\ q^W \\ V^W \\ \omega^R \\ a^W \\ \alpha^R \end{pmatrix} $$ where: $r^W$ : 3D coordinates of camera w.r.t world $q^W$ : unit quaternion describing camera pose w.r.t world $V^W$ : linear velocity along three coordinate frames, w.r.t world $\omega$ : angular velocity w.r.t body frame of camera The feature vector set is described as $$ \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} $$ where, each feature point is described using XYZ parameters. For the EKF acting under an unknown linear and angular acceleration $[A^W,\psi^R] $ , the process model used for predicting the next state is: $$ \begin{pmatrix} r^W + V^W\Delta t + \frac{1}{2}\bigl(a^W + A^W\bigr)\Delta t^2 \\ q^W \bigotimes q^W\bigl(\omega^R\Delta t + \frac{1}{2}\bigl(\alpha^R + \psi^R\bigr)\Delta t^2\bigr) \\ V^W + \bigl(a^W + A^W\bigr)\Delta t\\ \omega^R + \bigl(\alpha^R + \psi^R\bigr)\Delta t \\ a^W + A^W \\ \alpha^R + \psi^R \end{pmatrix} $$ So far, I'm clear with the EKF steps. Post this prediction step, I'm not clear how to perform the measurement update of the system state. From this slide, I was under the impression that we need to initialize random depth particles between 0.5m to 5m from the camera. But, at this point, both the camera pose and the feature depth is unknown. I can understand running a particle filter for estimating feature depth if camera pose is known. I tried to implement such a concept in this project: where I read the camera pose from a ground truth file and keep triangulating the depth of features w.r.t world reference frame I can also comprehend running a particle filter for estimating the camera pose if feature depths are known. But both these parameters are unknown. How do I perform the measurement update? I can understand narrowing down the active search region for feature matching based on the predicted next state of the camera. But after the features are matched using RANSAC (or any other algorithm), how do I find the updated camera pose? We are not estimating homography, are we? If you have any idea regarding MonoSLAM (or RGB-D SLAM), please help me out with understanding the EKF steps. To be more specific: is there a homography estimation step in the algorithm? how do we project the epipolar line (inverse depth OR XYZ) in the next frame if we do not have any estimate of the camera motion?
I have a differential drive robot for which I'm building an EKF localization system. I would like to be able to estimate the state of the robot $\left[ x, y, \theta, v, \omega \right]$, where $x, y, \theta$ represent the pose of the robot in global coordinates and $v, \omega$ are the translational and rotational velocities. Every mobile robot Kalman filter example I've seen uses these velocities as inputs to the prediction phase and does not provide a filtered estimate of them. Q: What is the best way to structure a filter so that I can estimate my velocities and use my measured odometry, gyroscope, and possibly accelerometers (adding $\dot{v}$ and $\dot{\omega}$ to my state) as inputs? My intuition tells me to use a prediction step that is pure feedforward (i.e. it just integrates the predicted velocities into the positions), and then have separate updates for odometry, gyro, and accelerometer, but I have never seen anyone do this before. Does this seem like a reasonable approach?
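For concreteness, the "pure feedforward" prediction described above could be written with standard unicycle kinematics (this is just one way to write it, assuming $v$ and $\omega$ are modeled as constant over the step $\Delta t$):

$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \\ v_{k+1} \\ \omega_{k+1} \end{bmatrix} = \begin{bmatrix} x_k + v_k \cos\theta_k \,\Delta t \\ y_k + v_k \sin\theta_k \,\Delta t \\ \theta_k + \omega_k \,\Delta t \\ v_k \\ \omega_k \end{bmatrix},$$

with the odometry, gyro, and (optionally) accelerometer measurements then handled as separate update steps, as proposed in the question.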
I have a term project which is about controlling a two-link manipulator with a harmonic drive installed at each joint. For control, I used the computed torque method to determine the torque needed at each joint, based on the formula: $$\tau_i =M(\theta)(\ddot{\theta_i}+K_d\dot{e}+K_pe)+V+G $$ To calculate the torque that each motor needs to produce through the harmonic drive, I use: $$\tau_{motor} =(J_m+J_g)\rho\ddot{\theta_i}+\frac{\tau_i}{\rho\eta_g}$$ where $\rho$ and $\eta_g$ are the gear ratio and efficiency of the harmonic drive, and $J_m$ and $J_g$ are the motor and gear inertia, respectively. After these calculations, I can see the effect of the harmonic drive on the system by comparing the input torque from the motor in the model with the harmonic drive ($\tau_{motor}$) to that torque in the model without the harmonic drive ($\tau_i$). But my professor doesn't agree with the formula for $\tau_{motor}$ that I used; he wants me to include the stiffness $k$ of the harmonic drive. This is what I have done so far. P.S.: This model, which consists of the two-link manipulator plus a harmonic drive at each joint, is built in MATLAB. Can anyone suggest the appropriate formula? Thank you so much.
I'm attempting to control a small vehicle at relatively slow speeds (0.5 m/s - 1 m/s), but with extreme accuracy (1 mm). For the drive system, I'm considering using brushless motors, as they have a much greater power/volume ratio than I am able to find with brushed motors, especially at this small size. I will be using wheels between 1" and 2" in diameter, so the RPM I will be looking for is between 150 and 500 RPM at most. This suggests either driving the motors at a low speed directly, or driving them at a high speed and gearing them down. As I understand it, both setups will give high torque, since a brushless motor's torque decreases with speed. With brushed motors it's quite obvious that a gearbox is necessary, as otherwise there is no torque in the system, but here the choice isn't as clear, which is why I am asking. tl;dr: Use brushless motors at high speed with a gearbox, or at low speed (ungeared), for a high-torque / low-speed / high-precision application?
Aim: To use multi-threading and inter-process communication (IPC) when coding an autonomous robot. Platform: embedded Linux (Yocto). Constraints: limited CPU power. We are building an autonomous underwater vehicle to compete in the RoboSub competition. This is the first time I am doing something like this. I intend to use a middleware like ROS, MIRA, YARP, MOOS, etc. The purpose of using one is that I want to modularise tasks and divide the core components into subsystems, which should run in parallel (via multi-threading). But I have limited computational power (a dual-core OMAP SoC), so the middleware, while robust, should also be very efficient. I need to use a middleware because I don't want the program to run on a single thread. My CPU has two cores, and it would be great if I could do some multi-threading to improve the performance of the program. The middleware will provide the communication layer for me, so I don't have to worry about data races or other problems associated with parallel processing. I also have no prior experience writing multi-threaded programs, so using parallel processing libraries directly would be difficult. Hence, in my opinion, middlewares are an excellent choice. In your experience, which one is best suited for the task? I don't really want to use ROS, because it has a lot of features I won't be using. I am a computer science student (an undergraduate freshman, actually) and don't mind getting my hands dirty with one that has fewer features, as long as it takes less of a toll on the CPU.
Has anyone done this with an EKF/PID on a small microcontroller? Or does anyone know of code snippets to help implement this?
I am working on my first hobby project and I'm not very familiar with sensors yet. I am trying to build a system which detects the presence of people in a small room with a single entrance/exit door. The idea is that when the first person enters the room, the lights turn on, and any following person doesn't affect the state of the lights. After the last person leaves, the lights should turn off. In a programmatic sense, the lights should turn on when the present-person count is greater than 0. I have explored my options and found out that infrared sensors are usually used for this type of problem. What I am not sure about is how to detect whether a person has entered or left, so I would like to ask for some help with this.
From Introduction to Robotics by J.J. Craig, chapter 2, Page no. 36: Could anyone explain how that equation was derived/formed? I am stuck on this page due to failing to understand where the equation came from. Thank you.
I'm no professional. At 29, I just became seriously interested in robotics a few months ago and have been researching everything I can since. Now that I've come to understand how far robotics has truly come, I have a desire to try to make my own robot. Granted, I know nothing about coding or programming, and I have no idea where to begin. I know it'll probably, the first time at least, be something small rather than a huge life-altering project. So, if anyone could suggest good resources for a beginner, I'd massively appreciate it.
Let's say a PID is implemented and the errors are calculated using the sensor data, but the sensor data lags by a certain amount of time because of overhead, and the lag time is smaller than the sampling period. How well does the PID perform? What I am thinking is that the PID will calculate errors based on past data and use that for control. How would using a Kalman filter to estimate the actual sensor data help?
This is a question for those of you who have experience using stereo cameras/modules like the ZED, DUO M, Bumblebee cameras, etc. (not TOF cameras). I can't find any sample disparity outputs on the internet, and I can't find any information on how they perform. Basically, here are a few things I'd like to know from those of you who have used any of the cameras mentioned above (or others): What resolution and number of disparities did you work with? How was the framerate, and on what hardware? Did the camera have an ASIC of some sort to produce the disparity maps, or did it require a host? How was the quality? For those who used the ZED camera, there is a promotional video on YouTube. Are the disparity maps really that good?
I understand the concept of using a pull-up/pull-down resistor when implementing a button/switch with an Arduino to avoid a floating state, and in fact I have implemented this quite often. But I am not sure whether a pull-down resistor is necessary in chip-to-chip or chip-to-sensor communication. I am connecting a coin acceptor to the Arduino (common ground). The coin acceptor's output pin gives a short pulse each time a coin is inserted. So far I am connecting the output pin of the coin acceptor directly to an Arduino pin, and it works without any problem. Is a pull-down resistor (on this line) usually required as a precaution in this case? I also have the same question when connecting 2 pins of 2 separate Arduinos (also common ground) so that one Arduino can read pulses from the other. Thanks in advance for any experience shared! Dave
I had an RC helicopter (with video, picture and audio capabilities) that recently "died" (unrelated to a short circuit). The receiver board short-circuited, but the board that sent data to the micro-SD card and carried the camera and mic is fine. I can access the data on the micro-SD card through the circuit with a USB cable. The receiver board sent data via a 4-wire bundle to the camera board to make it take pictures/record audio. Is there any way to still do this from my computer (through the USB), and turn it into a mini spy camera? (Not remotely, just through a cable.) I got this heli a while back, so I don't have the heli model number, but the camera board number is TX6473 R1 and the receiver board number is 3319B rev. a. Receiver board image. Camera/data board image.
I recently got libfreenect running on my mac and was able to test out freenect-glpclview which uses some of the 3D capabilities of the depth sensor. I noticed that the Kinect would only respond / pick up movement that happened within a range of about 3-6 inches in front of the sensor. I thought this may be because the lights where on so I turned them off. It seemed to get a little better but it still only "works" if something is block the sensor almost completely. Does anyone know if this is something that can be solved? I know it's an old sensor but I got it for $20 so I could do some prototyping with it. Notes: laser project is ON light starts out blinking then goes solid green when not level light goes red RGB camera works but is a little choppy and sometimes shows tears in the picture. freenect-glcplview output (snippet): [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 70] Expected 1748 data bytes, but got 948 [Stream 70] Expected max 1748 data bytes, but got 1908. Dropping... [Stream 70] Expected max 1748 data bytes, but got 1908. Dropping... [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 freenect-regview output (snippet) [Stream 70] Invalid magic 2dc5 [Stream 70] Invalid magic aaf5 [Stream 70] Invalid magic dddb [Stream 70] Invalid magic 9272 [Stream 70] Invalid magic 9873 [Stream 70] Invalid magic 9b8b [Stream 70] Invalid magic 59eb [Stream 70] Invalid magic 88f1 [Stream 70] Invalid magic 75ee [Stream 70] Invalid magic ffff [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Lost 1 packets [Stream 80] Lost 15244 total packets in 514 frames (29.657587 lppf) [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Expected 1908 data bytes, but got 948 [Stream 80] Invalid magic 3b46 [Stream 80] Lost 1 packets Found this which gives me the idea that this may be a USB issue: Regular receipt of undersized packet.
I'm developing a project that consists of an IMU controlled by an Arduino, through which I can send, via a radio module, the three Euler angles and the raw sensor data to the PC. For filtering I used the code made available by SparkFun: Razor AHRS 9 DOF https://github.com/ptrbrtz/razor-9dof-ahrs/tree/master/Arduino/Razor_AHRS The code does not provide for radio transmission and is tuned for a 50 Hz sampling rate; in fact its parameters are: // DCM parameters #define Kp_ROLLPITCH (0.02f) #define Ki_ROLLPITCH (0.00002f) #define Kp_YAW (1.2f) #define Ki_YAW (0.00002f) In that project, data is read every 20 ms (50 Hz) and the sensor registers are set to a 50 Hz ODR and 25 Hz bandwidth for the accelerometer, and a 50 Hz ODR for the gyroscope. In my project I used a different gyroscope, the L3G4200D, whose ODR starts at 100 Hz, so I set its registers to 100 Hz. My overall data rate is 33 Hz at most, because of the radio: I read a complete data set at a frequency of 33 Hz. How can I tune the Ki and Kp for my setup? Since the gains depend on the sampling period, do I have to consider the ODR I set in the registers of the individual sensors, or the global system sample rate, which is limited to 33 Hz by the radio transmission?
Basically, I have a system with a sensor and an output, and I want to apply a digitally implemented feedback controller. The problem in this setup is the sensor: the specifications of the module say that the sample time of the sensor changes over a wide range depending on the use case, from 1.3 seconds to 10 seconds, but it stays constant until the system is disabled. My first approach was tuning a digital PID controller for the longest sample time. This works fine; even if I change the sample time to the shortest, the system stays stable, which was expected because I'm still inside the ROC. The problem now is that the system's response is pretty slow. If I design the controller for my fastest sampling rate, the results are satisfying, but it becomes unstable for the slowest sample rate, which again can be explained by the ROC. I could use some kind of adaptive, predefined gains which I change depending on the sample rate, but I was wondering if there are control strategies that can handle the sample-time changes. EDIT: To give a better overview, I will add some details: I'm talking about a heating system which heats by radiation. As a sensor I use a pyrometer module with a sampling rate of up to 1 kHz. The problem is that the pyrometer is not able to produce reasonable readings whenever the radiator is turned on. (Yes, there are other alternatives to the pyrometer, but they start at $50k and are too expensive.) The radiator has to be pulsed to operate it, so to maintain a decent heat-up time and steady-state temperature, the "duty cycle" has to be kept at a decent level (the target is 95%). The minimum "off time" of the radiator is 0.2 seconds before the measured values become reasonable. So in the end my sensor has an effective sample time of 1-10 seconds (set by varying the duty cycle). The hardware is hard to change; the radiator and sensor have been evaluated for months now. Therefore I am trying to improve the results by "just" changing the control algorithm.
I am building an autonomous underwater robot. It will be used in swimming pools. It should be capable of running in any normal-sized pool, not just the pool in which I test, so I cannot rely on a particular pool design or feature. It has to know its position in the pool, either with respect to its initial position or with respect to the pool. I have an IMU (a Pololu MinIMU), but finding displacement with an IMU is a near-impossible task. What sensor can I use for this task? It should not be very expensive (below 200$). Tank size: 25x20x2.5 meters.
I know that we can use algorithms like LQR, MPC, or even PID to make a robot follow trajectory references. In simulation, e.g. in MATLAB, I usually specify the trajectory reference as a function. Now, say I'm given a sequence of points generated by a path-planning algorithm, and I want to do a real experiment of trajectory tracking over that sequence of points. My question is: how do I specify the error with respect to the path in a real situation? My impression is that the path generated by the path-planning algorithm is uncertain due to errors in the robot's sensing. And unlike a line-following robot, which has a real physical line as its reference, the path generated by path planning is virtual, i.e. it does not exist in the real world. I am really confused about this.
I'm studying Introduction to Robotics and found that there are different equations for determining the position and orientation of the end effector of a robot using the DH-parameter transformation matrices; they are shown in the images below. Example: PUMA 560, all joints are revolute. Forward kinematics - given: the manipulator's geometric parameters; specify: the position and orientation of the manipulator. Solution: step 4 is shown in the image; step 3 is where I'm confused. Here we should calculate the transformation matrix for each link and then multiply them to get the position and orientation of the end effector. I've seen different articles using one or the other of these equations when they get to this step for the same robot (PUMA 560). What is the difference between them? Will the result be different? Which one should I use when calculating the position and orientation?
I am fairly new to the DH transformation and I have difficulties understanding how it works. Why aren't all coordinates (X, Y, Z) incorporated into the parameters? It seems to me that at least one piece of information is useless/gets thrown away, since there are only a and d (translational information) and alpha and theta (rotational information). Example: the transition between two coordinate systems with identical orientation (alpha=0, theta=0) but with different coordinates (x1 != x2, y1 != y2, z1 != z2). DH only makes use of at most two of these pieces of information. Please enlighten me! Greetings. EDIT: To clarify which part of the DH transform I don't understand, here is an example. Imagine a CNC mill (COS1) on a stand (COS0) without any variable length (= no motion) between COS0 and COS1. For some reason I need to incorporate the transformation from COS0 to COS1 (= T0-1) into the forward transformation of my CNC mill. The DH parameters for T0-1 would be a = 5 mm, alpha = 90°, d = 2 mm and theta = 90°. Assuming this is correct, is the dX = 10 mm of information lost in this process? If I recreate the relation between COS0 and COS1 according to the DH parameters, I end up like this: As far as I understand, on non-parallel axes the information is not lost, because the measurement of a/d would be diagonal and would therefore include either dX/dY, dX/dZ or dY/dZ (Pythagorean theorem) in one parameter. Where is the flaw in my logic?
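For reference, under the standard (classic) DH convention the link transform built from the four parameters is

$${}^{i-1}T_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

i.e. a rotation $\theta_i$ about and translation $d_i$ along the previous $z$-axis, followed by a translation $a_i$ along and rotation $\alpha_i$ about the new $x$-axis. The position column contains only $a_i\cos\theta_i$, $a_i\sin\theta_i$ and $d_i$, so a single DH link cannot encode an arbitrary 3D offset between frames whose axes were chosen without regard to the DH axis-placement rules; this is the constraint the question above is running into.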
I recently decided to build a quadcopter from scratch using an Arduino, and now I'm faced with an orientation estimation problem. I bought a cheap 10-DOF sensor with a 3-axis magnetometer, 3-axis accelerometer, 3-axis gyro and a barometer, and the complementary filter that I use to get the orientation returns usable but noisy values. I tried the Madgwick fusion filter too, but it returns unstable values that diverge from the ones I get with the complementary filter. Assuming the Madgwick filter implementation is correct, I pass acceleration values measured in g, gyro values measured in rad/s, and magnetometer values measured in uT, while the sampling time is the same as my loop cycle. Is there anything I have missed? Is there any advantage in using a Kalman filter? EDIT 1: My problem was due to a wrong choice of sampling time and it now seems to work, but convergence is very, very slow (i.e. it takes about 3 seconds to reach the right value after a quick flip of the IMU). Raising the value of Kp adds too much noise. I also tried to repeat the filter update step more than once per cycle, but it requires too much time and exceeds the sampling time. Here are some graphs, from top to bottom: complementary filter, Madgwick filter, and Madgwick filter with high Kp. EDIT 2: The differing values are probably caused by cable plugging and unplugging. Anyway, raw data examples from my sensor can be downloaded here.
I want to develop an autonomously driving RC car. For detecting obstacles, I plan to mount 3-5 ultrasonic sensors on the front and the back of the car. What is the minimum necessary combined field of view of the sensors so that the car never hits an obstacle? I.e., what is the minimum angle of detection of the combined sensors the car should have in order to detect any obstacle in its path? Some data about the car (I don't know whether all of it is relevant): separation between right and left wheels: 19.5 cm; wheelbase (distance between the front and the back wheels): 31.3 cm; steering axle: front; maximum steering angle: around 30 degrees. The car uses Ackermann steering.
I would like to start experimenting with robots. Is Lego Mindstorms a good start? Should I consider other platforms?
I am trying to use InvenSense's MPU-9250. I am using the provided library to read Euler angles. When the IMU rotates about one axis, the angles about the other two axes change too. What could be the potential cause of this?
I created a program to simple time base delay (in seconds). I have problem: How to read a interrupt flag from channel 1 etc.? When I use if(__HAL_TIM_GET_FLAG(&htim2, TIM_FLAG_CC1) != RESET) an error occurs. When the interrupt occurred , uC should clear flag and set Blue LED in Discovery Board. Here is my program: Main.c /* Includes */ #include "stm32f3xx_hal.h" /* Private variables */ TIM_HandleTypeDef htim2; /* Private function prototypes */ void SystemClock_Config(void); static void MX_GPIO_Init(void); static void MX_TIM2_Init(void); int main(void) { /* MCU Configuration----------------------------------------------------------*/ /* Reset of all peripherals, Initializes the Flash interface and the Systick. */ HAL_Init(); /* Configure the system clock */ SystemClock_Config(); /* Initialize all configured peripherals */ MX_GPIO_Init(); MX_TIM2_Init(); /* Infinite loop */ while (1) { HAL_GPIO_WritePin(GPIOE,GPIO_PIN_11,GPIO_PIN_RESET); } } /** System Clock Configuration*/ void SystemClock_Config(void) { RCC_OscInitTypeDef RCC_OscInitStruct; RCC_ClkInitTypeDef RCC_ClkInitStruct; RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSE; RCC_OscInitStruct.HSEState = RCC_HSE_ON; RCC_OscInitStruct.HSEPredivValue = RCC_HSE_PREDIV_DIV1; RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON; RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSE; RCC_OscInitStruct.PLL.PLLMUL = RCC_PLL_MUL9; HAL_RCC_OscConfig(&RCC_OscInitStruct); RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_SYSCLK|RCC_CLOCKTYPE_PCLK1; RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK; RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1; RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV2; RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV1; HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_2); HAL_SYSTICK_Config(HAL_RCC_GetHCLKFreq()/1000); HAL_SYSTICK_CLKSourceConfig(SYSTICK_CLKSOURCE_HCLK); } /* TIM2 init function */ void MX_TIM2_Init(void) { TIM_ClockConfigTypeDef sClockSourceConfig; TIM_MasterConfigTypeDef sMasterConfig; TIM_OC_InitTypeDef sConfigOC; htim2.Instance = TIM2; htim2.Init.Prescaler = 7199; //72Mhz/7200 htim2.Init.CounterMode = TIM_COUNTERMODE_UP; htim2.Init.Period = 65535; htim2.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1; HAL_TIM_Base_Init(&htim2); sClockSourceConfig.ClockSource = TIM_CLOCKSOURCE_INTERNAL; HAL_TIM_ConfigClockSource(&htim2, &sClockSourceConfig); HAL_TIM_OC_Init(&htim2); sMasterConfig.MasterOutputTrigger = TIM_TRGO_RESET; sMasterConfig.MasterSlaveMode = TIM_MASTERSLAVEMODE_DISABLE; HAL_TIMEx_MasterConfigSynchronization(&htim2, &sMasterConfig); sConfigOC.OCMode = TIM_OCMODE_TIMING; sConfigOC.Pulse = 20000; //0.0001[s] * 20000 = 2 [s] DELAY sConfigOC.OCPolarity = TIM_OCPOLARITY_HIGH; sConfigOC.OCFastMode = TIM_OCFAST_DISABLE; HAL_TIM_OC_ConfigChannel(&htim2, &sConfigOC, TIM_CHANNEL_1); sConfigOC.OCMode = TIM_OCMODE_TIMING; sConfigOC.Pulse = 30000; sConfigOC.OCPolarity = TIM_OCPOLARITY_HIGH; sConfigOC.OCFastMode = TIM_OCFAST_DISABLE; HAL_TIM_OC_ConfigChannel(&htim2, &sConfigOC, TIM_CHANNEL_2); HAL_TIM_Base_Start_IT(&htim2); HAL_TIM_OC_Start_IT(&htim2,TIM_CHANNEL_1 ); //HAL_TIM_OC_Start_IT(&htim2,TIM_CHANNEL_2 ); } /** Configure pins as * Analog * Input * Output * EVENT_OUT * EXTI PC9 ------> I2S_CKIN */ void MX_GPIO_Init(void) { GPIO_InitTypeDef GPIO_InitStruct; /* GPIO Ports Clock Enable */ __GPIOF_CLK_ENABLE(); __GPIOC_CLK_ENABLE(); __GPIOE_CLK_ENABLE(); /*Configure GPIO pin : PC9 */ GPIO_InitStruct.Pin = GPIO_PIN_9; GPIO_InitStruct.Mode = GPIO_MODE_AF_PP; GPIO_InitStruct.Pull = GPIO_NOPULL; 
GPIO_InitStruct.Speed = GPIO_SPEED_HIGH; GPIO_InitStruct.Alternate = GPIO_AF5_SPI1; HAL_GPIO_Init(GPIOC, &GPIO_InitStruct); /* * Configure GPIO pin : PE8 BLUE LED */ GPIO_InitStruct.Pin=GPIO_PIN_8; GPIO_InitStruct.Mode=GPIO_MODE_OUTPUT_PP; GPIO_InitStruct.Pull=GPIO_NOPULL; GPIO_InitStruct.Speed=GPIO_SPEED_HIGH; HAL_GPIO_Init(GPIOE,&GPIO_InitStruct); GPIO_InitStruct.Pin=GPIO_PIN_12; GPIO_InitStruct.Mode=GPIO_MODE_OUTPUT_PP; GPIO_InitStruct.Pull=GPIO_NOPULL; GPIO_InitStruct.Speed=GPIO_SPEED_HIGH; HAL_GPIO_Init(GPIOE,&GPIO_InitStruct); /* * COnfigure GPIO pin : PE11 GREEN LED */ GPIO_InitStruct.Pin=GPIO_PIN_11; GPIO_InitStruct.Mode=GPIO_MODE_OUTPUT_PP; GPIO_InitStruct.Pull=GPIO_NOPULL; GPIO_InitStruct.Speed=GPIO_SPEED_HIGH; HAL_GPIO_Init(GPIOE,&GPIO_InitStruct); } low level implementation: void HAL_TIM_Base_MspInit(TIM_HandleTypeDef* htim_base) { if(htim_base->Instance==TIM2) { /* Peripheral clock enable */ __TIM2_CLK_ENABLE(); /* Peripheral interrupt init*/ HAL_NVIC_SetPriority(TIM2_IRQn, 0, 0); HAL_NVIC_EnableIRQ(TIM2_IRQn); } } void TIM2_IRQHandler(void) { /* USER CODE BEGIN TIM2_IRQn 0 */ HAL_GPIO_WritePin(GPIOE,GPIO_PIN_8,GPIO_PIN_SET); // HAL_GPIO_TogglePin(GPIOE,GPIO_PIN_12); HAL_TIM_IRQHandler(&htim2); //THis function is implemented by StmCubeMX , WHAT IS THIS? } So how should my TIM2_IRQHandler look like? Each channel generate delay in +1 sec. When I am debugging this program, when LED is set the period is equal to 1s (time for set LED).
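As a point of comparison only (not necessarily the fix for the error mentioned above), here is a minimal sketch of one way to service the channel-1 compare event inside the handler, using HAL macros that exist in the STM32 HAL (__HAL_TIM_GET_FLAG, __HAL_TIM_GET_IT_SOURCE, __HAL_TIM_CLEAR_IT) and assuming the htim2 handle and GPIO setup from the code above:

/* Sketch only: check and clear the CC1 flag manually before handing the rest
   of the TIM2 events to the HAL. */
void TIM2_IRQHandler(void)
{
  if (__HAL_TIM_GET_FLAG(&htim2, TIM_FLAG_CC1) != RESET &&
      __HAL_TIM_GET_IT_SOURCE(&htim2, TIM_IT_CC1) != RESET)
  {
    __HAL_TIM_CLEAR_IT(&htim2, TIM_IT_CC1);             /* clear the CC1 interrupt */
    HAL_GPIO_WritePin(GPIOE, GPIO_PIN_8, GPIO_PIN_SET); /* blue LED on */
  }

  HAL_TIM_IRQHandler(&htim2); /* let the HAL handle/clear any remaining TIM2 events */
}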
I got an OWI robotic arm, but was slightly disappointed that the gripper only has a horizontal orientation. What would be the easiest way to extend it with gripper/wrist rotation, i.e. a 6th degree of freedom?
I've got a Robotics API library, a demo program and a robot, and I want to develop an app for it. The best solution would be offline development on some kind of simulator. I'm completely new to such tasks - is there any IDE for this? Or a way to deliver byte-code to the machine? Thanks in advance!
Can the ESCs in a quad be programmed in such a way that only one side has throttle and there is no throttle at all on the other? This would cause the quad to flip, I suppose. With that, is there a way to program the controller to trigger a switch when we want the quad to flip? I was thinking of building a waterproof quad. Initially it flies in the air normally with the 4 channels, and then I set it down to float on the water. After that, I was thinking of triggering a switch on the controller so that this time it just flips and does nothing else. After it flips, I would trigger the switch back to normal operation. Is that possible?
Recently I bought a hexapod kit and 18 TowerPro MG995 servos. My objective is to also add a Pi camera, sensors and perhaps a claw. I've been researching, but I haven't found a clear answer when it comes to the servo control board. Which servo controller board should I choose to complete my project?
I have been researching a cost-effective way to scan an area from a MAV (exploration) and later use the data for CAD/civil purposes (use the point cloud data for CAD), but the major sensors available each have their own problems: Kinect - can't be used outdoors, high computational load; stereo - high computational load, somewhat expensive; lidar - very expensive, not real-time, and heavy. I need a system (on the MAV/quadrotor) that can work over WiFi/wireless, can scan outdoors, is not very expensive and gives data in real time. Please suggest a system that comes as close as possible to the above requirements. Also, can stereo be operated over WiFi?
I’m using the BMA020 (from ELV) with my Arduino Mega2560 and trying to read acceleration values that doesn’t confuse me. First I connected the sensor in SPI-4 mode. Means CSB <-> PB0 (SS) SCK <-> PB1 (SCK) SDI <-> PB2 (MOSI) SDO <-> PB3 (MISO) Also GND and UIN are connected with the GND and 5V Pins of the Arduino board. Here is the self-written code I use #include <avr/io.h> #include <util/delay.h> #define sensor1 0 typedef int int10_t; int TBM(uint8_t high, uint8_t low) { int buffer = 0; if(high & (1<<7)) { uint8_t high_new = (high & 0x7F); buffer = (high_new<<2) | (low>>6); buffer = buffer - 512; } else buffer = (high<<2) | (low>>6); return buffer; } void InitSPI(void); void AccSensConfig(void); void WriteByteSPI(uint8_t addr, uint8_t Data, int sensor_select); uint8_t ReadByteSPI(int8_t addr, int sensor_select); void Read_all_acceleration(int10_t *acc_x, int10_t *acc_y, int10_t *acc_z, int sensor_select); int main(void) { int10_t S1_x_acc = 0, S1_y_acc = 0, S1_z_acc = 0; InitSPI(); AccSensConfig(); while(1) { Read_all_acceleration(&S1_x_acc, &S1_y_acc, &S1_z_acc, sensor1); } } void InitSPI(void) { DDRB |= (1<<DDB2)|(1<<DDB1)|(1<<DDB0); PORTB |= (1<<PB0); SPCR |= (1<<SPE); SPCR |= (1<<MSTR); SPCR |= (0<<SPR0) | (1<<SPR1); SPCR |= (1<<CPOL) | (1<<CPHA); } void AccSensConfig(void) { WriteByteSPI(0x0A, 0x02, sensor1); _delay_ms(100); WriteByteSPI(0x15,0x80,sensor1); //nur SPI4 einstellen } void WriteByteSPI(uint8_t addr, uint8_t Data, int sensor_select) { PORTB &= ~(1<<sensor_select); SPDR = addr; while(!(SPSR & (1<<SPIF))); SPDR = Data; while(!(SPSR & (1<<SPIF))); PORTB |= (1<<sensor_select); } uint8_t ReadByteSPI(int8_t addr, int sensor_select) { int8_t dummy = 0xAA; PORTB &= ~(1<<sensor_select); SPDR = addr; while(!(SPSR & (1<<SPIF))); SPDR = dummy; while(!(SPSR & (1<<SPIF))); PORTB |= (1<<sensor_select); addr=SPDR; return addr; } void Read_all_acceleration(int10_t *acc_x, int10_t *acc_y, int10_t *acc_z, int sensor_select) { uint8_t addr = 0x82; uint8_t dummy = 0xAA; uint8_t high = 0; uint8_t low = 0; PORTB &= ~(1<<sensor_select); SPDR = addr; while(!(SPSR & (1<<SPIF))); SPDR = dummy; while(!(SPSR & (1<<SPIF))); low = SPDR; SPDR = dummy; while(!(SPSR & (1<<SPIF))); high = SPDR; *acc_x = TBM(high, low); SPDR = dummy; while(!(SPSR & (1<<SPIF))); low = SPDR; SPDR = dummy; while(!(SPSR & (1<<SPIF))); high = SPDR; *acc_y = TBM(high, low); SPDR = dummy; while(!(SPSR & (1<<SPIF))); low = SPDR; SPDR = dummy; while(!(SPSR & (1<<SPIF))); high = SPDR; *acc_z = TBM(high, low); PORTB |= (1<<sensor_select); } And now here is what really confuses me. I got 5 of this sensors. One is working with this code perfectly fine. The Data I get is what I expect. I measure earth gravity in z-component if Iay the sensor on the table, if I start turning it I measure the earth gravity component wise in x-, y- and z- direction depending on the angle I turn the sensor. From the other 4 sensors I receive data that is different. The values jump from -314 (about -1.2 g) to +160 (about 0.5g). With the same code, the same wires and the same Arduino. I checked the register settings of all sensors, they are all the same. I checked the wire connection to the first component at the sensors, they are all around 0.3 Ohm. I used an Oscilloscope and made sure CSB, SCK and MOSI work properly. Am I missing something? What causes this similar but wrong behavior of 4 out of 5 sensors?
I am trying to use 2x UARTs with ChibiOS on the STM32F072RB Nucleo Board. I initialized UART2 but I am still getting output on UART1 pins, which is totally weird. #include "ch.h" #include "hal.h" /* * UART driver configuration structure. */ static UARTConfig uart_cfg_1 = { NULL, //txend1, NULL, //txend2, NULL, //rxend, NULL, //rxchar, NULL, //rxerr, 800000, 0, 0, //USART_CR2_LINEN, 0 }; static UARTConfig uart_cfg_2 = { NULL, //txend1, NULL, //txend2, NULL, //rxend, NULL, //rxchar, NULL, //rxerr, 800000, 0, 0, 0 }; /* * Application entry point. */ int main(void) { /* * System initializations. * - HAL initialization, this also initializes the configured device drivers * and performs the board-specific initializations. * - Kernel initialization, the main() function becomes a thread and the * RTOS is active. */ halInit(); chSysInit(); /* * Activates the serial driver 1, PA9 and PA10 are routed to USART1. */ //uartStart(&UARTD1, &uart_cfg_1); uartStart(&UARTD2, &uart_cfg_2); palSetPadMode(GPIOA, 9, PAL_MODE_ALTERNATE(1)); // USART1 TX. palSetPadMode(GPIOA, 10, PAL_MODE_ALTERNATE(1)); // USART1 RX. palSetPadMode(GPIOA, 2, PAL_MODE_ALTERNATE(1)); // USART2 TX. palSetPadMode(GPIOA, 3, PAL_MODE_ALTERNATE(1)); // USART2 RX. /* * Starts the transmission, it will be handled entirely in background. */ //uartStartSend(&UARTD1, 13, "Starting...\r\n"); uartStartSend(&UARTD2, 13, "Starting...\r\n"); /* * Normal main() thread activity, in this demo it does nothing. */ while (true) { chThdSleepMilliseconds(500); uartStartSend(&UARTD2, 7, "Soom!\r\n"); //uartStartSend(&UARTD1, 7, "Boom!\r\n"); } } The line uartStartSend(&UARTD2, 7, "Soom!\r\n"); gives output on UART1. Is there anything else I need to do? mcuconfig.h reads #define STM32_UART_USE_USART1 TRUE #define STM32_UART_USE_USART2 TRUE #define STM32_UART_USART1_IRQ_PRIORITY 3 #define STM32_UART_USART2_IRQ_PRIORITY 3 #define STM32_UART_USART1_DMA_PRIORITY 0 #define STM32_UART_USART2_DMA_PRIORITY 0
How does one implement a continuous virtual model while the control system itself is discrete (a PLC)? I've done this in practice, but what about the theory: how does one explain this topic to a stranger (let's say myself)?
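For concreteness, a minimal sketch of the usual approach: the continuous virtual model is discretized at the PLC scan time (here with forward Euler) and updated once per scan. The mass-spring-damper parameters and the 10 ms scan time below are assumed placeholder values, not taken from any particular application.

#include <cstdio>

struct VirtualModel {
    double m = 1.0, b = 5.0, k = 100.0;  // mass, damping, stiffness (assumed values)
    double x = 0.0, v = 0.0;             // model state

    // One discrete update, called every scan with the measured input (e.g. a force).
    void step(double force, double Ts) {
        double a = (force - b * v - k * x) / m;  // continuous-time dynamics
        x += Ts * v;                             // forward-Euler integration
        v += Ts * a;
    }
};

int main() {
    VirtualModel model;
    const double Ts = 0.010;  // 10 ms scan time (assumed)
    for (int scan = 0; scan < 100; ++scan) {
        model.step(1.0, Ts);  // constant 1 N test input
        std::printf("%d %.4f\n", scan, model.x);
    }
}

For stiff models or long scan times, a better discretization (Tustin, exact zero-order-hold) is usually preferred over forward Euler, but the structure stays the same: one state update per scan.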
Suppose there is an input and there are sensor-measured outputs. What objective methods are there to compare performance, besides visually checking whether the outputs match the inputs or not?
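As one concrete option, the sketch below computes two common objective metrics between a reference (or model) output and the measured output: the RMSE and an NRMSE-style fit percentage (100% = perfect match). The sample vectors are made up purely for illustration.

#include <cmath>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> measured  = {1.0, 1.9, 3.2, 3.9, 5.1};   // toy data
    std::vector<double> reference = {1.0, 2.0, 3.0, 4.0, 5.0};

    double refMean = std::accumulate(reference.begin(), reference.end(), 0.0) / reference.size();
    double se = 0.0, refVar = 0.0;
    for (size_t i = 0; i < measured.size(); ++i) {
        se     += (measured[i] - reference[i]) * (measured[i] - reference[i]);
        refVar += (reference[i] - refMean) * (reference[i] - refMean);
    }
    double rmse = std::sqrt(se / measured.size());
    double fit  = 100.0 * (1.0 - std::sqrt(se) / std::sqrt(refVar));  // NRMSE-style fit

    std::printf("RMSE = %.4f, fit = %.1f%%\n", rmse, fit);
}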
I'm a student doing electrical and electronics engineering, and I'm currently working on my final project, which is a quadcopter. One of my objectives is to make an Electronic Speed Controller (ESC) for the brushless motors being used. I made a design for the ESC using Proteus and I made the PCB as well; I have attached the schematic. I used a PIC16F628A for the ESC and wrote a small program in mikroC so that the ESC runs when powered up. Unfortunately it didn't work properly. I tried sensorless control of the brushless motor without any feedback. How much current should I provide to the motor? According to some articles I read, a brushless DC (BLDC) motor requires around 10 A at startup for around 20 ms. I have posted the code as well. I used two programs to run the motor: one with PWM and one without PWM (100% duty cycle). I am a rookie on the subject of BLDC motor control, and I would be very grateful if anybody could help me clear up my doubts and figure out the mistakes in my design so that it works properly. Below is the code that I tried; please help me figure out the right way to program the chip.

const delay = 7000;

void main()
{
    TRISB = 0x00;
    PORTB = 0x00;
    while(1)
    {
        // step through the six commutation patterns on PORTB
        PORTB = 0x24;
        delay_us(delay);
        PORTB = 0x36;
        delay_us(delay);
        PORTB = 0x12;
        delay_us(delay);
        PORTB = 0x1B;
        delay_us(delay);
        PORTB = 0x09;
        delay_us(delay);
        PORTB = 0x2D;
        delay_us(delay);
    }
}

When I uploaded the code given above and set the delay to around 3000 μs, the motor spun, but each time one of the MOSFETs heated up until I could not touch it anymore. Here is the video of this scenario. This is the other code (PWM):

const delay1 = 2000;
const delay2 = 1000;
int count = 0;
int cnt;
int arr[6] = {0x24, 0x36, 0x12, 0x1B, 0x09, 0x2D};   // commutation patterns
int i = 0;
int x = 0x32;

void init(void)
{
    TRISB = 0x00;
    PORTB = 0x00;
    //OPTION_REG = 0x87;
    //INTCON = 0xA0;
    CCP1CON = 0;
    CMCON = 0x07;
}

void main()
{
    init();
    while(1){
        for (cnt = 0; cnt < 10; cnt++)
        {
            PORTB = arr[i];   // "on" part of the software PWM
            delay_us(2);
            PORTB = 0x07;     // "off" part
            delay_us(2);
        }
        i++;
        if (i == 6)
        {
            i = 0;
        }
    };
}
I'm new to the robotics and electronics world, but I'm willing to dive into it. I'm a software developer and I want to create a project that uses GPS and Accelerometer data to show as a layer on Google Maps after transferred to PC. My doubt is about which controller to get. In my country, there are generic controllers based on the Atmega328 that are being sold with a massive difference of price from the original Arduino (talking about the UNO model). Should I start with an original model? Should I expect to break the controller, fry it, or break any components by connecting them wrong? Would the experience with a generic controller be less exciting than with the original Arduino one?
I have a servo motor with a quadrature optical encoder and I'm trying to control its position and velocity. By controlling both I mean that if I command the motor to reach 90° at 200 rpm, then it should do exactly that. How can I do that? I am using an Arduino Uno. Kindly share some code if possible. Though I have implemented a PID controller, I don't think it is correct, because I didn't implement a feedforward controller (I have no idea what that is) and I have not been able to find suitable PID gains. The gains I find for small steps (say, a few degrees of rotation) do not work well for large steps, and vice versa. I have also not used a limit for the integral sum (because I don't know how large it should be). I am using a Pittman motor.
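Not a drop-in answer for the Uno, but a minimal sketch of the structure usually used for this kind of requirement: a setpoint generator ramps the commanded position at the requested speed, a PID tracks that moving setpoint, and a velocity feedforward term (Kv times the commanded speed) does most of the work so the PID only corrects small errors. All gains, limits and the toy motor model below are assumed placeholder values.

#include <algorithm>
#include <cstdio>

int main() {
    const double dt = 0.001;                      // 1 kHz control loop (assumed)
    const double target = 90.0;                   // degrees
    const double speed  = 200.0 * 360.0 / 60.0;   // 200 rpm -> 1200 deg/s
    const double Kp = 0.8, Ki = 2.0, Kd = 0.01;   // placeholder PID gains
    const double Kv = 1.0 / 1500.0;               // placeholder velocity feedforward gain

    double setpoint = 0.0, pos = 0.0, vel = 0.0, integral = 0.0, prevErr = 0.0;

    for (int k = 0; k < 200; ++k) {
        // Setpoint generator: ramp the commanded position toward the target at the requested speed.
        double spVel = (setpoint < target) ? speed : 0.0;
        setpoint = std::min(target, setpoint + spVel * dt);

        // PID on position error plus velocity feedforward.
        double err = setpoint - pos;
        integral = std::clamp(integral + err * dt, -10.0, 10.0);  // simple anti-windup clamp
        double deriv = (err - prevErr) / dt;
        prevErr = err;
        double u = Kp * err + Ki * integral + Kd * deriv + Kv * spVel;
        u = std::clamp(u, -1.0, 1.0);                             // normalized motor command

        // Crude first-order motor model just to make the sketch runnable.
        vel += (u * 2000.0 - vel) * dt / 0.05;
        pos += vel * dt;

        if (k % 20 == 0) std::printf("t=%.3f sp=%.1f pos=%.1f\n", k * dt, setpoint, pos);
    }
}

On a real setup, pos and vel would come from the encoder counts, u would drive the PWM output, and the gains would be tuned for the actual motor.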
I have some robot software I'm working on (Java on Android) which needs to store a pre-designed map of a playing field to be able to navigate around. The field's not got any fancy 3d structure, the map can be 2d. I've been trying to find a good format to store the maps in. I've looked into SVGs and DXFs, but neither one is really designed for the purpose. Is there any file format specifically designed for small, geometric, robotics-oriented maps? The field I'd be modelling is this one:
POMDPs extend MDPs by concealing the state and adding an observation model. A POMDP controller processes either action/observation histories or a Bayesian belief state computed from the observations (the belief-MDP transformation). In a complex, real-world system like a robot, one usually preprocesses the sensory readings using filters (Kalman, HMM, whatever), the result of which is a belief state. I am looking for publications that discuss the problem of fitting a (probably more abstract) POMDP model on top of an existing filter bank. Do you have to stick to the belief-MDP and hand the filtered belief state over to the controller? Is there any way of using history-based POMDP controllers, like MCTS? How do you construct/find the abstract observations you need to formulate the POMDP model?
I'm searching for a filter to reduce noise and smooth the signal while dead reckoning with an IMU (6-DOF gyro + accelerometer). What are the differences/advantages/disadvantages of the following filters: Kalman, complementary, moving average, Mahony? I applied the Kalman and complementary filters to an IMU and both of them introduce a time lag in the response to motion, depending on the filter parameters. The Kalman filter also runs slower than the moving average and complementary filters. How can I choose the right filter and filter parameters?
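For reference, a minimal single-axis complementary filter sketch, which is the lightest of the options mentioned; the blend factor alpha, the sample period dt and the fake sensor values are assumptions, and a real implementation would read the gyro rate and accelerometer vector from the IMU each cycle.

#include <cmath>
#include <cstdio>

int main() {
    const double RAD2DEG = 180.0 / 3.14159265358979;
    double angle = 0.0;          // filtered pitch estimate, degrees
    const double dt = 0.01;      // 100 Hz sample period (assumed)
    const double alpha = 0.98;   // trust in the gyro; (1 - alpha) goes to the accelerometer

    for (int k = 0; k < 500; ++k) {
        // Fake sensor data standing in for real IMU reads.
        double gyroRate = 0.0;                  // deg/s from the gyro
        double ay = 0.0, az = 1.0;              // accelerometer components, in g
        double accelAngle = std::atan2(ay, az) * RAD2DEG;

        // Gyro integration gives short-term accuracy, the accelerometer corrects long-term drift.
        angle = alpha * (angle + gyroRate * dt) + (1.0 - alpha) * accelAngle;
    }
    std::printf("angle = %.2f deg\n", angle);
}

Larger alpha means less accelerometer noise leaking into the estimate but slower drift correction; that trade-off is the "lag versus smoothness" effect described above.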
I am trying to calibrate a monocular camera using ROS with the help of this website: How to Calibrate a Monocular Camera. When I run rostopic list, I get: /left /right /rosout /rosout_agg /usb_cam/image When I run rosservice list, I get: /cameracalibrator/get_loggers /cameracalibrator/set_logger_level /rosout/get_loggers /rosout/set_logger_level Finally, when I run: rosrun camera_calibration cameracalibrator.py --size 10x7 --square 0.025 image:=/usb_cam/image camera:=/usb_cam It says: ('Waiting for service', '/usb_cam/set_camera_info', '...') Service not found I even added the parameter at the end, --no-service-check, but that just makes the terminal stall indefinitely. Could someone please help me figure out what is going wrong and how I can fix it? Also if it is important, usb_cam is saved at catkin_ws/src/usb_cam.
Can someone please provide me with a list of sensors on the create 2? I am hoping to get one soon, but want to be sure it has ultrasonic sensors and not just bump sensors before I do.
I have a robotic arm mounted on a car, with a camera attached to it. Suppose the camera takes an image of a room and finds that there is something, say an object, that has to be picked up, and that it is 50 feet away from the robot. My question is: how will the robot reach the object in the first place, and secondly, once it has reached the object, how will it know the real-world coordinates of the object so that it can pick the object up using the inverse kinematic equations? Any help would be appreciated. Thanks
I want to use IR sensors to detect whether my dustbin is full, but I want to protect them from outside dust. I am planning to use IR sensors the way the Roomba does. How do the Roomba's sensors work despite being behind a plastic wall? Also, what is the range of the sensor? Can they detect an obstacle at about 25 cm? Why is there a wall between the IR sensors? Is there a reason they are positioned at a certain angle?
I am trying to run the cameracalibrator.launch using PTAM according to this Camera Calibration tutorial. However, when I do so, I get the following error: ERROR: cannot launch node of type [ptam/cameracalibrator]: can't locate node [cameracalibrator] in package [ptam] I source my devel/setup.bash before I run the code as well and it still does not work. Here is my launch file: <launch> <node name="cameracalibrator" pkg="ptam" type="cameracalibrator" clear_params="true" output="screen"> <remap from="image_raw" to usb_cam/image_raw" /> <remap from="pose" to="pose"/> <rosparam file="$(find ptam)/PtamFixParams.yaml"/> </node> </launch> Here is what I get for rostopic list: /rosout /rosout_agg /svo/dense_input /svo/image /svo/image/compressed /svo/image/compressed/parameter_descriptions ... /tf /usb_cam/camera_info /usb_cam/image_raw /usb_cam/image_raw/compressed ... /usb_cam/image_raw/theora /usb_cam/image_raw/parameter_descriptions /usb_cam/image_raw/parameter_updates The path where the cameracalibration.launch file is catkin_ws/src/ethzasl_ptam/ptam/launch. I am not sure why this error keeps coming up because when I run roslaunch ptam cameracalibrator.launch, it says: NODES / cameracalibrator (ptam/cameracalibrator) So I'm thinking that PTAM does include cameracalibrator. If someone could please point out my error, that would be really helpful. I've been using this post as a guide, but it's not been helping me much: Ros Dynamic Config file. As it says in the above link, I tried find . -executable and I could not find cameracalibrator. I could only find the below. How do I proceed? ./include ./include/ptam ./cfg ... ./launch ./src ./src/ptam ./src/ptam/cfg ...
I have a mobile robot which is navigating around a room, I already have the map of the room. I am using the navigation_stack of ROS. I am using rotary encoders for odometry. I am fusing the data from Rotary encoders and IMU using robot_pose_ekf. I am using amcl for localization and move_base for planning. Now, I have to write a Complete coverage Path planning algorithm and I am following this paper and I would like to ask what is the best way to generate the Boustrophedon path (simple forward and backward motions) in a cell (can be rectangular, trapezium, etc.) with no obstacles? I read a paper where they use different templates and combine them in a certain way to come up with the Boustrophedon path. Is there any other way by which we can generate the boustrophedon path? If someone can suggest how to implement it in ROS, that will be great. Please let me know if you need more information from me. Any help will be appreciated.
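For a rectangular cell with no obstacles, a minimal waypoint-generation sketch might look like the following; the cell bounds and the sweep spacing (e.g. the robot's coverage width) are assumed inputs, and in a ROS setup each waypoint could then be sent to move_base as a goal in sequence.

#include <cstdio>
#include <vector>

struct Point { double x, y; };

// Generate boustrophedon (lawnmower) waypoints over an axis-aligned rectangle.
std::vector<Point> boustrophedon(double xmin, double xmax,
                                 double ymin, double ymax, double spacing) {
    std::vector<Point> path;
    bool forward = true;
    for (double y = ymin; y <= ymax + 1e-9; y += spacing) {
        if (forward) {
            path.push_back({xmin, y});
            path.push_back({xmax, y});
        } else {
            path.push_back({xmax, y});
            path.push_back({xmin, y});
        }
        forward = !forward;  // alternate sweep direction each row
    }
    return path;
}

int main() {
    for (const Point& p : boustrophedon(0.0, 4.0, 0.0, 2.0, 0.5))
        std::printf("(%.1f, %.1f)\n", p.x, p.y);
}

Non-rectangular cells (trapezoids, etc.) are usually handled by clipping each sweep line against the cell polygon instead of using fixed xmin/xmax.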
What specifications of motors and propellers can produce a total thrust of approximately 100 kg in a quadcopter? We are planning to lift a payload of 50 kg along with the 20 kg weight of the quadcopter itself. So at 50% throttle the total thrust produced should be 150 kg, i.e. a per-motor thrust of 37.5 kg. I have looked at this answer to How to calculate quadcopter lift capabilities? but don't understand how to use this information to work out the specifications of the motor and propeller required for my application. The answer given in the previous question is limited to a small quad, and I require the specifications of a BLDC motor (such as Kv, torque, Imax, voltage, power, etc.) and of a propeller suitable for such a motor.
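For a rough feel of the numbers, the sketch below applies momentum (actuator-disk) theory to estimate the per-motor thrust and the ideal hover power; the 0.8 m propeller diameter and the 2:1 thrust margin are assumptions, and real motor/propeller combinations typically need roughly 1.5 to 2 times the ideal power, so this only gives order-of-magnitude guidance, not a motor selection.

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double g = 9.81, rho = 1.225;        // gravity, sea-level air density
    const double totalMass = 70.0;             // kg (50 kg payload + 20 kg frame)
    const int motors = 4;
    const double thrustMargin = 2.0;           // hover at ~50% throttle
    const double propDiameter = 0.80;          // m, assumed large propeller

    double maxThrustPerMotor = thrustMargin * totalMass * g / motors;        // N
    double diskArea = pi * propDiameter * propDiameter / 4.0;                // m^2
    double hoverThrust = totalMass * g / motors;                             // N per motor at hover
    double idealHoverPower = std::pow(hoverThrust, 1.5) / std::sqrt(2.0 * rho * diskArea);

    std::printf("Max thrust per motor: %.0f N (about %.1f kgf)\n",
                maxThrustPerMotor, maxThrustPerMotor / g);
    std::printf("Ideal hover power per motor: %.0f W\n", idealHoverPower);
}

The same formula shows why larger propellers matter at this scale: doubling the disk area cuts the ideal hover power by roughly a factor of 1.4 for the same thrust.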
I would like to find an electronic actuator that mimics the characteristics of a hydraulic actuator, in that the position remains fixed without power drain when the actuator is not moving. Which actuators exist that match these criteria?
I'm sorry if this question does not quite fit here; however, I would like to give it a shot. I've chosen this stack since the question is somewhat related to mobile robots. I came across a paper in mobile robot localization that cited the following reference: C. Brown, H. Durrant-Whyte, J. Leonard, B. Rao, and B. Steer. Kalman filter algorithms, applications, and utilities. Technical Report OUEL-1765/89, Oxford U. Robotics Research Group, 1989. I couldn't find this reference. Nothing shows up in Google, not even in Google Scholar. In my university, which gives me access to a massive database, nothing shows up either. Since this is a technical report, I'm interested in reading it to gain more appreciation of the Kalman filter. Has anyone come across this reference?
From a technical standpoint what are the differences between the Kinect v1 and the Kinect v2 ? I'm interested both in the hardware equipment and the format of the data.
As I'm advancing in my project I have realized I need better hardware, particularly for video input and processing. Intuitively it sounds like stereo cameras offer a more powerful and flexible solution; on the other hand, the Kinect looks like a great out-of-the-box solution for depth sensing, and it also takes away a lot of computational complexity since it outputs depth directly. So I would like to know what the upsides and downsides of the two solutions are, whether they have any well-known limitations or typical fields of application, and why. Thank you
I plan to use the iRobot Create as a platform to carry a tablet or notebook PC, and I want to have power for some time, so I need more than the 3000 mAh battery. I want everything to be powered from the same battery system and charged from the same source. So I need information on how to wire additional 14.4 V NiMH batteries in parallel with the existing one, and how to deal with the additional temperature sensors (I could ignore them, of course, but...). Can the built-in power control deal with this? Do I need to upgrade it somehow? I would appreciate suggestions, as I do not want a completely separate power system for auxiliary devices. Charging everything from the standard home base is the goal, even though it will take longer. I can deal with adapting the 14.4 V to whatever auxiliary devices I add. Thanks.
What's the least complex way to step power from a 10 V, 1.5 A battery down to 6 V at 1.5 A? Thank you!
I am working on a project that requires motion detection and positioning. I've worked substantially with a camera but the issue with this is that I need something sleek, small and not heavy at all. Cameras also tend to rely on luminosity and they don't work well in poorly lit spaces. I need someone who's worked on something like this or who knows the best sensor for this purpose.
I have a 2 DOF Robot Arm with a camera attached to it. It takes an Image and there's an object in that image, say a glass. Of course, in order to move the arm to the required position to grasp the object, I have to solve the inverse kinematic equations. In order to solve them, I need the x and y, the coordinates where the arm has to reach to grasp the object. My question is how can I find the x and y of say the midpoint of the object from the image. Thanks
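A minimal sketch of one common approach: compute the pixel centroid of the segmented object, then map pixels to workspace coordinates. The mapping below assumes a pre-calibrated top-down view with a known metres-per-pixel scale and a known image-origin offset (both made-up constants here); a perspective camera would instead need the camera intrinsics plus the object's depth or the known height of the table plane.

#include <cstdio>
#include <vector>

int main() {
    const int W = 8, H = 6;
    // Toy binary segmentation result: 1 = object pixel, 0 = background.
    std::vector<int> mask = {
        0,0,0,0,0,0,0,0,
        0,0,1,1,0,0,0,0,
        0,0,1,1,0,0,0,0,
        0,0,1,1,0,0,0,0,
        0,0,0,0,0,0,0,0,
        0,0,0,0,0,0,0,0 };

    // Pixel centroid (midpoint) of the object.
    double sumU = 0, sumV = 0; int count = 0;
    for (int v = 0; v < H; ++v)
        for (int u = 0; u < W; ++u)
            if (mask[v * W + u]) { sumU += u; sumV += v; ++count; }
    double cu = sumU / count, cv = sumV / count;

    // Pixel -> world mapping (assumed calibration constants).
    const double metresPerPixel = 0.005;          // 5 mm per pixel
    const double originX = 0.10, originY = 0.20;  // workspace position of pixel (0,0)
    double x = originX + cu * metresPerPixel;
    double y = originY + cv * metresPerPixel;

    std::printf("centroid pixel (%.1f, %.1f) -> world (%.3f, %.3f) m\n", cu, cv, x, y);
}

In practice the binary mask would come from whatever segmentation step detects the glass, and the resulting (x, y) is what gets fed into the inverse kinematics.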
In the scope of my PhD, I would like to build an automated microscopy setup that should image a sample of 2 cm by 2 cm by taking pictures of 500 micrometers by 500 micrometers. Therefore I need to design an XY-stage that moves my sample over the optical setup. I would use a Raspberry Pi to drive all the hardware. Could you direct me to material about how best to make an XY-stage? My questions concern what type of motors to use (steppers?), how many, how to create a good sliding mechanism to avoid jerky steps, etc. Simple links to the basic engineering of such setups would be more than enough for me to start with, as I am a complete layman in this field. EDIT: I have found this blog post. It does what I require, provided I get stepper motors with a small enough step angle. EDIT2: I need a maximum range of motion of 10 cm in both directions. The overall size should not exceed 30x30 cm^2. Step sizes should not exceed 10 microns. I do not care about moving speed. Based on the design in the link, buying a stepper motor with a 100:1 gearbox could give me very small angular steps (<0.05 deg), which would result in about 5 micron steps, assuming a rotor radius of about 1 cm. As far as price goes, it should not exceed commercially available options, which start at about 5k USD.
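As a sanity check on the step-size arithmetic in the edit, a small sketch with assumed values (a typical 1.8° full step, the 100:1 gearbox and the roughly 1 cm effective radius mentioned above); the linear step per motor step is just the arc length at that radius.

#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double fullStepDeg = 1.8;      // typical stepper full-step angle (assumed)
    const double gearRatio   = 100.0;    // 100:1 gearbox
    const int microstepping  = 1;        // full steps only
    const double radius_m    = 0.01;     // ~1 cm effective drive radius

    double stepDeg = fullStepDeg / (gearRatio * microstepping);
    double stepRad = stepDeg * pi / 180.0;
    double linearStep_um = radius_m * stepRad * 1e6;  // arc length per step, in microns

    std::printf("angular step: %.4f deg, linear step: %.2f um\n", stepDeg, linearStep_um);
}

With these numbers the linear step comes out around 3 microns, comfortably inside the 10 micron requirement; a leadscrew-based stage would use pitch/(steps per revolution) instead of the arc-length formula.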
Do you use simulators for developing your robot algorithms, or do you test directly on your robot? I would like to get introduced to the world of simulators, but I don't know where to start. Can you recommend one? Regards
A robotic arm should pick a cuboid up off a table, rotate it around its vertical axis and put it down at all possible positions. How many degrees of freedom are at least necessary? (All coordinates that should be reached by the robotic arm are in its workspace. It is not allowed to put the cuboid down and pick it up again once the robot has it.) The answer is 4 (3 translational and 1 rotational), but I don't understand why. I thought that it should be 3: 2 prismatic joints (1 to pick the cuboid up, and another one to move it anywhere on the table) and 1 revolute joint to rotate the cuboid around its vertical axis => 2 translational and 1 rotational.
I measured the voltage the ESC is drawing while increasing the DC motor speed. The multimeter shows that as the speed increases, the voltage value decreases. Can anybody explain why this is happening?
My robotics project runs at a 1 ms period and the processing takes about 0.9 ms. I am running PID, so my maximum loop rate is 1 kHz. About half of the processing time is taken by the SPI peripherals, the IMU and the encoders. Is there any recommendation on how I can run a faster PID sampling rate?
For my particle filter, I decided to try using the low variance resampling algorithm as suggested in Probabilistic Robotics. The algorithm implements systematic resampling while still considering relative particle weights. I implemented the algorithm in Matlab, almost word-for-word from the text:

function [state] = lowVarianceRS(prev_state, weight, state_size)

state = zeros(1,state_size);    % Initialize empty final state
r = rand;                       % Select random number between 0-1
w = weight(1);                  % Initial weight
i = 1;
j = 1;

for m = 1:state_size
    U = r + (m - 1)/state_size;     % Index of original sample + size^-1
    while U > w                     % I'm not sure what this loop is doing
        i = i + 1;
        w = w + weight(i);
    end
    state(j) = prev_state(i);       % Add selected sample to resampled array
    j = j + 1;
end

end

As would be expected given the while loop structure, I am getting an error for accessing weight(i), where i exceeds the array dimensions. To solve this, I was considering circularly shifting my weight array (putting the first index used as the first value in weight, so that I never exceed matrix dimensions). However, I wasn't sure if this would negatively impact the rest of the algorithm, seeing as I'm having trouble understanding the purpose of the U calculation and while loop. Could anyone help clarify the purpose of U and the while loop, and whether or not a circular shift is an acceptable fix?
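For comparison, here is a sketch of the same systematic scheme in C++ with the weights normalized to sum to one before resampling; U is the position of the m-th equally spaced "comb tooth" (offset by the single random draw r), and the while loop advances the running cumulative weight until it covers U. The random offset here is drawn from [0, 1/M), following the book's presentation rather than the exact Matlab snippet above.

#include <cstdio>
#include <random>
#include <vector>

std::vector<double> lowVarianceResample(const std::vector<double>& particles,
                                        std::vector<double> weights) {
    const size_t M = particles.size();
    double total = 0.0;
    for (double w : weights) total += w;
    for (double& w : weights) w /= total;           // normalize so the cumulative sum reaches 1

    std::mt19937 rng(std::random_device{}());
    std::uniform_real_distribution<double> uni(0.0, 1.0 / M);
    double r = uni(rng);                            // one random offset in [0, 1/M)

    std::vector<double> resampled;
    resampled.reserve(M);
    double c = weights[0];                          // running cumulative weight
    size_t i = 0;
    for (size_t m = 0; m < M; ++m) {
        double U = r + m * (1.0 / M);               // m-th comb position in [0, 1)
        while (U > c) { ++i; c += weights[i]; }     // advance to the particle whose weight interval contains U
        resampled.push_back(particles[i]);
    }
    return resampled;
}

int main() {
    std::vector<double> particles = {0.1, 0.5, 0.9, 1.3};
    std::vector<double> weights   = {0.1, 0.4, 0.4, 0.1};
    for (double p : lowVarianceResample(particles, weights)) std::printf("%.1f ", p);
    std::printf("\n");
}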
I need to make this construction (the door is closed by default and is opened by moving it up to the top). This is the scheme: the red rectangle in the picture is the aperture, the blue rectangle is the door (its weight is about 0.5 kg), which moves up when the door needs to be opened. The green stripe in the picture is the rail for the door. Which electric motor should I use? The estimated door-opening time is about 10 seconds. I want to send a signal to raise the door, and it should drop down when power is lost or when I send a signal to drop it.
I'm looking for an algorithm for forming up multiple robots in a 2D simulation. Can you suggest resources on this topic? I also need suggestions and comments on these points: Can I adapt optimization algorithms such as particle swarm or ant colony? Is there any approach other than "go to goal" for each robot? Are pattern-formation algorithms feasible? Any suggestions for a fast way of forming up / aligning? Notes: I'm not using a robotics simulator or physics engine for this. Robots are represented as dots. The multi-robot system is homogeneous. Every robot can sense obstacles and other robots within a circular sensing range around it. The number of obstacles and robots can vary from 2 to 100. The multi-robot system is not centralized.
For a project I am building a tele-op robot using the iRobot Roomba as my drivetrain. In order for my robot to work, I need an extra caster. iRobot provides .stl and .stp files for me to use; I used them and printed the files. (The file I printed was from this link: Create® 2 Bin Modification. This file is a new part for the drivetrain that allows another caster, and I downloaded the first link, called "Full bin bottom with caster mount".) The piece was great, but it made the caster a different height than the wheels. I was wondering if anyone has this file saved in a different format so I can edit it, preferably in SolidWorks. I was on the phone with iRobot for over 2 hours today and they told me to post here. So please help!!!! :)
I've been working lately on SLAM algorithms implementing extended Kalman filtering to brush up on some localization techniques, and I have been thinking ahead to the hardware side of things. Are there embedded chips, such as microcontrollers, that are optimized for large linear algebra operations? What sort of embedded options are best for processing these kinds of operations?
Reaction Control Systems (RCS) on these vehicles are implemented using small rocket thrusters. To me it looks like these thrusters work in some kind of "pulse" mode, and I can't understand which of the following is the case: do they use some optimal control to calculate in advance the required impulse to reach the new desired state of the system, or do they use the "pulse" mode just for precise variation of the thrust magnitude (like average voltage in PWM, pulse-width modulation) in a classic PID control loop?
I'm learning to make a 3D simulation in MATLAB based on a model designed in SolidWorks. Here is an example: SIMULINK+SOLIDWORKS. The approach used there is: 1. create a 3D model in SolidWorks; 2. create an XML file suitable for importing into MATLAB via the SimMechanics Link; 3. import the model into MATLAB/Simulink, which creates a Simulink system. After these steps, control of the system is implemented in Simulink. But I feel Simulink is somewhat restrictive for control. I want to be more flexible and apply any algorithm to the model, and using a MATLAB *.m file for control is a more convenient way to do that. So my question is this: is there any way to do the 3D simulation (MATLAB + SolidWorks) using only *.m files for control, with no Simulink at all? All the model information would be contained in the *.m file. Maybe steps 1 and 2 stay the same, but step 3 is different.
How do I find out around which axis the coordinate system has to rotate, if the rotation matrix is given? $ {^{a}R_{b} } $ = $ \left(\begin{matrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \\\end{matrix}\right)$ $ {^{a}R_{c} } $ = $ \left(\begin{matrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\\end{matrix}\right)$ For $ {^{a}R_{b} } $ I thought, that it has to be a rotation around the z-axis, because $R(z,\theta) = \left(\begin{matrix} cos(\theta) & -sin(\theta) & 0 \\ sin(\theta) & cos(\theta) & 0 \\ 0 & 0 & 1 \\\end{matrix}\right)$ the values at the positions $a_{13}, a_{23},a_{33},a_{32},a_{31}$ of $ {^{a}R_{b} } $ and $R(z,\theta)$ are identical. So I solved $cos(\theta) = 0$ =>$\theta = 90° $ => 90° rotation around z-axis. But how do I solve it, if there is more than 1 rotation, like for $ {^{a}R_{c} } $?
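When more than one elementary rotation is involved, one standard identity (stated here for reference) recovers the rotation angle from the trace and the rotation axis from the skew-symmetric part of the matrix:

$\theta = \arccos\left(\frac{R_{11}+R_{22}+R_{33}-1}{2}\right)$, $\qquad \hat{n} = \frac{1}{2\sin(\theta)} \left(\begin{matrix} R_{32}-R_{23} \\ R_{13}-R_{31} \\ R_{21}-R_{12} \\\end{matrix}\right)$ (valid for $0 < \theta < 180°$).

Applied to $ {^{a}R_{b} } $ this gives $\theta = 90°$ and $\hat{n} = (0,0,-1)^T$, i.e. a rotation about the z-axis as reasoned above; applied to $ {^{a}R_{c} } $ it gives $\theta = 120°$ and $\hat{n} = \frac{1}{\sqrt{3}}(1,1,1)^T$, so $ {^{a}R_{c} } $ is a single 120° rotation about the cube diagonal rather than a rotation about a coordinate axis. Alternatively, the matrix can be factored into a chosen sequence of elementary rotations (e.g. $R = R_z(\alpha)R_y(\beta)R_x(\gamma)$) and solved entry by entry for the three angles.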
I am starting to develop a robotics project which involves simulation (and maybe real-world programs) of soft-body dynamics (for food processing) and clothes/garment handling (for the textile industry or home service robots). It is known that soft-body dynamics and garment handling are two less-explored areas of robotics and simulation, therefore I hope to make some development of (contribution to) the projects that are involved. The following projects are involved: Bullet physics engine - for dynamics; Gazebo - simulation environment; ROS - robot OS (I hope to use Universal Robots UR5 or UR10 arms and some grippers, not decided yet); Orocos - for control algorithms. Initially I hope to use the "ROS INDIGO IGLOO PREINSTALLED VIRTUAL MACHINE" (from nootrix.com), but apparently I will have to update Bullet and Gazebo, add new ROS stacks, and so on. The question is: how should such a project be organized? E.g. if I am updating the Bullet physics engine with a new soft-body dynamics algorithm, then what executable (.so) files should I produce and where should I put them in the virtual machine? A similar question can be asked if I need to update Gazebo. There seems to be an incredibly large number of files. Is it right to change only some of them? Sorry about such questions, but the software stack seems to be more complex than the robotics itself.
Vision is an important part of robotics, and frequently it is an unavoidable component of the control loop. E.g. many clothes/garment handling algorithms rely on visual cues to decide how to proceed. The question is: do simulation environments (Gazebo or others) allow one to design a world with a robot and a garment and simulate not only the garment dynamics but also what the robot sees, i.e. how the robot perceives the garment at each simulation step? If it is not possible to simulate vision, then how does one simulate algorithms that have vision as a component of the control loop? Maybe simulation of vision could be a good research theme? Are there any trends or good articles about it? Some initial projects that could be expanded? Actually, it can be stated as a more general question: is it possible to simulate sensors in Gazebo? E.g. food handling (soft-body handling) can involve tactile sensors. In principle Gazebo can calculate the deformation and forces of a soft body and format these data as simulated sensor readings. Maybe a similar mechanism can be used for the simulation of vision as well?
I'd like to study the capabilities of industrial robot arms. For example, to answer the question how does price vary with precision, speed, reach and strength? Is there a database of industrial robot arms including information like the price, precision, speed, reach and strength of each model?
Is there any way of estimating battery life from the PWM outputs that go to the motors, at the microcontroller level? I'm planning to estimate the remaining path range with this. The consumption of the microcontroller, sensors and other electronic devices can be neglected.
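One rough approach, sketched below, is coulomb counting with the motor current approximated as proportional to the PWM duty cycle; the nominal current at 100% duty and the battery capacity are assumed values, and since real motor current also depends on load, speed and terrain, this only gives a coarse range estimate rather than an accurate gauge.

#include <cstdio>

int main() {
    const double capacity_mAh = 2200.0;       // battery capacity (assumed)
    const double currentAtFullDuty_A = 3.0;   // per-motor current at 100% duty (assumed)
    const int numMotors = 2;
    const double dt = 0.01;                   // 10 ms control period

    double used_mAh = 0.0;
    // Stand-in for the control loop: here a constant 60% duty on both motors.
    for (int k = 0; k < 100000; ++k) {
        double duty = 0.60;                                       // would come from the PWM command
        double current_A = numMotors * duty * currentAtFullDuty_A;
        used_mAh += current_A * (dt / 3600.0) * 1000.0;           // integrate A*s -> mAh
    }
    double remainingFraction = 1.0 - used_mAh / capacity_mAh;
    std::printf("used %.1f mAh, %.0f%% remaining\n", used_mAh, remainingFraction * 100.0);
}

Dividing the remaining capacity by the average current (and multiplying by the average speed) then gives a crude remaining-range figure.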
I am building a laser gun for pentathlon targets (also doing one). I would like to know how to build one part of the gun, and whether I can count on a steady laser if it is attached to a motor. The question is about the laser. I want it attached to a small (servo) motor to try to implement some cheat, just for fun. Assuming the motor has good torque, can I assume that the laser will not move (not the slightest bit) when the motor is turned off? (I don't have any to test.) This is for precision shooting, so small vibrations and a moving pointer would be really harmful. In case it does move, what can I do to minimize the problem? Is it all about ordering the motor with the highest torque? I also have a second question which is slightly off-topic, yet related, and robotics people usually have solutions for such problems: I also need to build the sights. Here's a gun: As you can tell, its sights are a fixed plastic post at the front and a large adjustable piece at the back. There are two bolts, one on each side: one makes the sight higher or lower, and the other makes it point more to the right or left. How can such a part be built with simple tools? Thanks
I'm working on a robot that should be able to navigate through a maze, avoid obstacles and identify some of the objects in it. I have a monochromatic bitmap of the maze that is supposed to be used for the robot's navigation. Up till now I have processed the bitmap image and converted it into an adjacency list. I will now use Dijkstra's algorithm to plan the path. However, the problem is that I have to extract the entrance point/node and the exit node from the bmp image itself for Dijkstra's algorithm to plan the path. The robot's starting position will be slightly different (an inch or two before the entrance point) from the entrance point of the maze, and I am supposed to move to the entrance point using any "arbitrary method" and then apply Dijkstra's algorithm to plan the path from the maze's entrance to its exit. On the way I also have to stop at the X's marked in the bmp file I have attached below; these X's are basically boxes in which I have to pot balls. I will plan the path from the entrance point to the exit point, and not from the entrance to the 1st box, then to the second, and then to the exit point, because I think the boxes will always be placed on the shortest path. Since the starting position is different from the entrance point, how will I match my robot's physical location with the coordinates in the program and move it accordingly? Even if the entrance position had been the same as the starting position, there may still have been an error. How should I deal with it? Should I navigate only on the basis of the coordinates provided by Dijkstra, or use ultrasonics as well to prevent collisions? And if yes, can you give me an idea of how I should use both (ultrasonics and coordinates)?
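For reference, a minimal Dijkstra sketch over an adjacency list of the kind described; the toy graph, unit edge costs, and the entrance/exit node indices are all assumed here, since in the real setup they would come from the processed bitmap.

#include <cstdio>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

using Edge = std::pair<int, double>;  // (neighbour node, edge cost)

std::vector<int> dijkstra(const std::vector<std::vector<Edge>>& adj, int start, int goal) {
    const int n = static_cast<int>(adj.size());
    std::vector<double> dist(n, 1e18);
    std::vector<int> prev(n, -1);
    using QItem = std::pair<double, int>;  // (distance so far, node)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> pq;
    dist[start] = 0.0;
    pq.push({0.0, start});

    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;            // stale queue entry
        if (u == goal) break;
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
    }
    std::vector<int> path;                    // reconstruct goal -> start, then reverse order
    for (int v = goal; v != -1; v = prev[v]) path.insert(path.begin(), v);
    return path;
}

int main() {
    // Toy maze graph:  0 -- 1 -- 2
    //                       |    |
    //                       3 -- 4
    std::vector<std::vector<Edge>> adj = {
        {{1, 1}}, {{0, 1}, {2, 1}, {3, 1}}, {{1, 1}, {4, 1}},
        {{1, 1}, {4, 1}}, {{2, 1}, {3, 1}} };
    for (int node : dijkstra(adj, 0, 4)) std::printf("%d ", node);
    std::printf("\n");
}

The resulting node sequence would then be mapped back to cell coordinates and executed waypoint by waypoint, which is where the localization question above comes in.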
We can easily compute rigid robot kinematics and dynamics; there are many resources, simulators and modelling tools for them. But I couldn't find any of these for elastic robots. Can you suggest resources and modelling tools?
I am still in high school and am a part of the robotics club that competes in the FTC (First Tech Challenge). I am just about finishing my first Calculus class (Calc 1), and would be ecstatic to be able to apply this someway in a real world example such as robotics. [Besides PID. It seems like only approximations anyways] So far, I've only been working with "fabricated" math problems. Would deriving an equation from real life situations be too complicated? Thank you!
Until now I have been programming the robot in Java using KUKA's IDE, "KUKA Sunrise.Workbench". What I want to do is control the robot arm from my C++/.NET application (I would use a camera or Kinect to get commands). I'm reading the documents provided by KUKA, but as I'm a bit in a hurry, I want to understand how a C++ client application (running on my laptop) can send/receive information to/from the robot's controller, the "KUKA Sunrise Cabinet" (running the server application), via FRI. I still have issues grasping the whole mechanism. A simple client/server application source code with an explanation (or a schematic) would be more than helpful.
I am specifically interested in DH parameters versus other representations in terms of kinematic calibration. The best (clearest) source of information I could find on kinematic calibration is the book "Robotics: Modelling, Planning and Control" by Bruno Siciliano, Lorenzo Sciavicco, Luigi Villani and Giuseppe Oriolo, chapter 2.11. It requires a description of the arm in DH parameters, multiplying out the kinematics equation, partial differentiation w.r.t. each DH parameter, then a least-squares fit (with the left pseudo-inverse), then iterating. Is there some fundamental reason why DH parameters are used instead of a different representation (like xyz + Euler angles)? I understand that there are fewer parameters (4 versus 6 or more), but for a calibration procedure like this I will be taking much more data than there are unknowns anyway. All the robotics textbooks I have read just present DH parameters and say "this is what you should use", but don't really go into why. Presumably this argument can be found in the original paper by Denavit, but I can't track it down.
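For reference, in the classic (distal) convention each link transform is built from the four DH parameters $(\theta_i, d_i, a_i, \alpha_i)$ as $ {^{i-1}A_{i}} = R_z(\theta_i)\,T_z(d_i)\,T_x(a_i)\,R_x(\alpha_i)$, i.e.

$ {^{i-1}A_{i}} = \left(\begin{matrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \\\end{matrix}\right)$

which is where the 4-versus-6 parameter count in the question comes from: the convention constrains where the link frames may be placed (z along the joint axis, x along the common normal), so two of the six general pose parameters are never needed.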
I have a Blade 180QX quadrotor/quadcopter, and I had to move a red wire that was shoved under the circuit board when I fixed a broken power wire. Now the red wire, which sticks out straight from the circuit board, is not in the right configuration, or at least not as it was. If I understood what it was for, I might know where to place it. Is this a horizon sensor (temperature)? Ever since I had to move this wire, the quadrotor goes unstable when flying. The only appreciable change is the wire's position. The red wire was not attached anywhere else on the board; it was shoved under the circuit board inside the battery holder.