My question is: what is the difference between a DC motor with an encoder and a DC motor without one? As long as I can control the speed of a DC motor using PWM, for example on an Arduino, what is the fundamental difference?
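For illustration, here is a minimal sketch (Python-style, for a generic board; read_encoder_count() and set_pwm_duty() are hypothetical helpers passed in by the caller, not a real API) of what the encoder buys you: closed-loop speed control instead of open-loop PWM.

    import time

    COUNTS_PER_REV = 360.0      # assumption: encoder resolution
    KP = 0.5                    # proportional gain, tune for your motor

    def speed_controller(target_rps, duration_s, read_encoder_count, set_pwm_duty):
        """Closed-loop speed control: encoder feedback corrects the PWM duty.

        Without an encoder you can only command a fixed duty cycle and hope the
        speed is right; load changes and supply-voltage droop go uncorrected.
        """
        duty = 0.0
        last_count = read_encoder_count()
        last_time = time.time()
        end_time = last_time + duration_s
        while time.time() < end_time:
            time.sleep(0.05)                                    # ~20 Hz control loop
            count = read_encoder_count()
            now = time.time()
            measured_rps = (count - last_count) / COUNTS_PER_REV / (now - last_time)
            error = target_rps - measured_rps
            duty = max(0.0, min(1.0, duty + KP * error))        # simple P correction
            set_pwm_duty(duty)
            last_count, last_time = count, now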
I completed the Aerial Robotics course on Coursera and I want to implement what I learned on a real quadrotor. The thing is that when I see the equations given like this: For the sake of the argument, let's assume I have implemented the PD controllers and at every moment I find u1 (the sum of the forces applied to the quadrotor) and u2 (the sum of the moments applied to the quadrotor). I then ask myself: how can I find what force and moment each one of the motors should specifically produce? Here I am stuck, as I can't find an answer. Could anyone help?
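Purely for illustration, a sketch of the usual allocation step: stack u1 and u2 into a vector and solve a 4x4 mixing matrix for the individual rotor thrusts. The arm length L, the drag-to-thrust ratio gamma, the '+' configuration and the rotor numbering below are assumptions and may not match the course's exact convention.

    import numpy as np

    L = 0.17       # assumed arm length [m]
    gamma = 0.016  # assumed drag-to-thrust coefficient ratio [m]

    # Rows: total thrust u1, roll moment, pitch moment, yaw moment.
    # Columns: rotor thrusts F1..F4 (1 front, 2 left, 3 back, 4 right, '+' frame,
    # rotors 1 and 3 spinning opposite to 2 and 4).
    A = np.array([
        [1.0,    1.0,    1.0,    1.0   ],
        [0.0,    L,      0.0,   -L     ],
        [-L,     0.0,    L,      0.0   ],
        [gamma, -gamma,  gamma, -gamma ],
    ])

    def motor_thrusts(u1, u2):
        """Solve A @ F = [u1, u2] for the four rotor thrusts."""
        return np.linalg.solve(A, np.hstack(([u1], u2)))

    # Example: near-hover thrust plus a small roll moment.
    print(motor_thrusts(6.0, np.array([0.05, 0.0, 0.0])))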
Does anyone know how one can calculate the exact damping of a KUKA arm during impedance control mode? According to their manual, one can only control the Lehr's damping ratio. However, I want to know the exact value, say in Ns/m.
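For reference, if each controlled Cartesian direction is modeled as a single mass-spring-damper (an assumption; the actual KUKA implementation may differ), the Lehr's damping ratio $\zeta$ relates to a physical damping coefficient through the commanded stiffness $k$ and the effective mass $m$:

$$ d = 2\,\zeta\sqrt{k\,m} \quad \left[\tfrac{\text{Ns}}{\text{m}}\right] $$

For example, with made-up numbers k = 2000 N/m, m = 5 kg and ζ = 0.7, this gives d = 2 · 0.7 · √10000 = 140 Ns/m; the hard part in practice is knowing the effective mass the controller uses.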
Is it possible to enable/disable the 4 IONI drives mounted in an IoniCube individually? It appears that with the IoniCube, enable/disable applies to all IONI drives simultaneously. However, the IoniCube 1X does appear to have enable/disable for individual IONI drives. Can you confirm? I wish to control the enable/disable state of each IONI drive separately, and if possible to do this on the IoniCube. Most grateful for your help! Best wishes, Mark
I'm working on Graph SLAM to estimate robot poses (x, y, z, roll, pitch, yaw). Now I want to integrate GPS measurements (x, y, z; no angles, of course). I implemented GPS as a prior on the pose, but I have a problem: position (x, y, z) is perfectly corrected by graph optimization, but orientation (roll, pitch, yaw) is very unpredictable (unstable) after optimization, i.e. it looks like position is fitted at the sacrifice of orientation. I'm very confused about the right way to integrate GPS into graph SLAM. Should GPS be handled as a prior, a landmark, or one of the pose vertices? Thanks for your help in advance. PS: I use g2o as the graph-optimization library. In g2o, I implemented the GPS measurement with EdgeSE3_Prior. The GPS quality is RTK, so it is precise enough.
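One way to express the idea of a translation-only prior (a sketch of the concept, not necessarily how g2o's EdgeSE3Prior must be configured) is to keep the full SE(3) prior edge but give the rotational block of its information matrix (near-)zero weight:

$$
\Omega_{\text{GPS}} =
\begin{bmatrix}
\sigma_{xyz}^{-2}\, I_{3\times3} & 0 \\
0 & \epsilon\, I_{3\times3}
\end{bmatrix},
\qquad \epsilon \approx 0,
$$

so the optimizer satisfies the orientation from odometry and loop-closure edges instead of from the GPS prior.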
%PREDICT Apply odometry model for differential drive robot.
%   [R,FXR,PATH] = PREDICT(R,ENC,PARAMS) calculates the final pose and
%   final pose covariance matrix given a start pose, a start pose
%   covariance matrix and the angular wheel displacements in ENC.
%   PARAMS contains the robot model and the error growth coefficients.
%
%   Input arguments:
%    R   : differential drive robot object with start pose R.X, R.C
%    ENC : structure with fields
%          ENC.PARAMS.KL: error growth coefficient of left wheel
%          ENC.PARAMS.KR: error growth coefficient of right wheel
%                         with unit in [1/m].
%          ENC.STEPS(i).DATA1: angular displacements of left wheel
%          ENC.STEPS(i).DATA2: angular displacements of right wheel
%                              in [rad] and monotonically increasing
%    PARAMS.B : wheelbase in [m]. Distance between the two wheel
%               contact points
%    PARAMS.RL: radius of left wheel in [m]
%    PARAMS.RR: radius of right wheel in [m]
%
%   Output arguments:
%    R    : differential drive robot object with final pose R.X, R.C
%    FXR  : 3x3 process model Jacobian matrix linearized with
%           respect to XROUT
%    PATH : array of structure with fields PATH(i).X (3x1) and
%           PATH(i).C (3x3) which holds the poses and the pose
%           covariance matrices over the path
%
%   The function implements an error model for differential drive
%   robots which models non-systematic odometry errors in the wheel
%   space and propagates them through the robot kinematics onto the
%   x,y,theta-pose level.
%
%   Reference:
%     K.S. Chong, L. Kleeman, "Accurate Odometry and Error Modelling
%     for a Mobile Robot," IEEE International Conference on Robotics
%     and Automation, Albuquerque, USA, 1997.
%
%   See also SLAM.

% v.1.1, ~2000, Kai Arras, ASL-EPFL, Felix Wullschleger, IfR-ETHZ
% v.1.2, 29.11.2003, Kai Arras, CAS-KTH: toolbox version

function [r,Fxr,path] = predict(r,enc)

When I try to run this complete code, it asks me to input values for R and ENC. I am not sure whether they are just scalar values or structures/matrices. Can anyone help me work out, from the description in the code, what kind of input is required?
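For intuition about what those inputs mean, here is a rough Python paraphrase (my own sketch, not the toolbox code) of the propagation the help text describes: R carries a 3x1 pose and 3x3 covariance, ENC carries per-step wheel-angle increments plus the error-growth coefficients, and the function integrates differential-drive kinematics over those steps.

    import numpy as np

    def propagate_pose(x, steps, rl, rr, b):
        """Integrate differential-drive kinematics over wheel-angle increments.

        x     : start pose [x, y, theta]
        steps : list of (d_angle_left, d_angle_right) in rad, per step
        rl,rr : wheel radii [m]; b : wheelbase [m]
        """
        x = np.asarray(x, dtype=float).copy()
        for dal, dar in steps:
            sl, sr = rl * dal, rr * dar          # wheel arc lengths
            ds = 0.5 * (sl + sr)                 # distance travelled by the midpoint
            dth = (sr - sl) / b                  # heading change
            x[0] += ds * np.cos(x[2] + 0.5 * dth)
            x[1] += ds * np.sin(x[2] + 0.5 * dth)
            x[2] += dth
        return x

    print(propagate_pose([0, 0, 0], [(0.1, 0.12)] * 50, rl=0.05, rr=0.05, b=0.3))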
PREFACE: THIS QUESTION REGARDS FTC ROBOTICS. I am aware that a similar question exists on SE and I have looked at it, but the solutions provided did not solve my problem, and the nature of my question has additional constraints and criteria that make this a distinct problem. PROBLEM: I have been put in charge of creating Java classes to add to an existing app used to control a robot (FTC robotics, for those who are wondering). The features I have been asked to add to the app include: shape detection (provided by FTC, all I need to do is tweak some files); automatically recording and saving a video in the background of the app using the back camera. QUESTION: I was wondering, if this is even possible, how I would go about both recording video and using shape detection? How do I even record a video from the background of an app? If I cannot achieve both, I was told to prioritize recording, as we have substitutes for shape detection. SPECS: Motorola Moto G 2nd gen with Android Marshmallow, or as a fallback: ZTE Speed with Android Marshmallow.
BACKGROUND: I am creating a robot to score the most points in 30 seconds while running autonomously. Naturally, two thoughts come to mind: linear programming and machine learning. Linear programming would provide a stable, simple method of scoring points. However, it is limited in what it can do, and optimizing scores would require reworking the entire bot. PROBLEMS: The robot and code itself can change, but that is not time-efficient, since it would require a single programmer to completely rework the code and robot. The robot has to work with what it's got. QUESTION: Can I create Android Java classes that allow my robot to work the field as an AI and tweak its strategy or course of action based on stats from previous rounds (self-supervised learning)? If it is possible, how would I do it? SPECS: max robot size of an 18'' cube; Android Marshmallow; ZTE Speed or Motorola Moto G gen 2 phones; multiple inputs from a controlling phone and various sensors; output to multiple motors and servos.
r = 1; w1 = 4; w2 = 2; l = 1;
R = [0 -1 0; 1 0 0; 0 0 1];
X = [(r*w1)/2 + (r*w2)/2; 0; (r*w1)/(2*l) - (r*w2)/(2*l)];
A = R*X;
disp(A)

I am getting the solution for the matrix as [0; 3; 1], which is exactly what I expect. I would like to input a series of w1 and w2 values. Let's say I have data files 1.xlsx and 2.xlsx with ten values each. I want to load 1.xlsx into w1 and 2.xlsx into w2 and get ten answers for X. How can I do that?
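In case it helps to see the batch version of the computation, here is the same calculation sketched in Python with pandas (assuming each .xlsx file holds a single column of ten numbers with no header); the MATLAB version would follow the same pattern with xlsread/readmatrix and a for-loop.

    import numpy as np
    import pandas as pd

    r, l = 1.0, 1.0
    R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])

    # Assumption: one column of ten values per file, no header row.
    w1 = pd.read_excel("1.xlsx", header=None).iloc[:, 0].to_numpy()
    w2 = pd.read_excel("2.xlsx", header=None).iloc[:, 0].to_numpy()

    for w1_i, w2_i in zip(w1, w2):
        X = np.array([r * w1_i / 2 + r * w2_i / 2,
                      0.0,
                      r * w1_i / (2 * l) - r * w2_i / (2 * l)])
        print(R @ X)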
I need to find the inverse and forward kinematics of a Mitsubishi RV-M2 as homework. I found the forward kinematics part, but I got stuck on the inverse kinematics. The teacher said we can assume the wrist joints are not moving (to make the equations easier, I guess). This is why I set theta4 (in T4) and theta5 (in T5) to 0. Here is my MATLAB code:

syms t1 t2 t3 d1 a1 a2 a3 d5 px py pz r1 r2 r3 r4 r5 r6 r7 r8 r9  % symbolic variables
T1 = [cos(t1) -sin(t1) 0 0; sin(t1) cos(t1) 0 0; 0 0 1 d1; 0 0 0 1];
T2 = [cos(t2) -sin(t2) 0 a1; 0 0 -1 0; sin(t2) cos(t2) 0 0; 0 0 0 1];
T3 = [cos(t3) -sin(t3) 0 a2; sin(t3) cos(t3) 0 0; 0 0 1 0; 0 0 0 1];
T4 = [0 -1 0 a3; 1 0 0 0; 0 0 1 0; 0 0 0 1];
T5 = [1 0 0 0; 0 0 -1 -d5; 0 1 0 0; 0 0 0 1];
Tg = [r1 r2 r3 px; r4 r5 r6 py; r7 r8 r9 pz; 0 0 0 1];
left = inv(T1)*Tg;
left = left(1:4,4);
left = simplify(left)
right = T2*T3*T4*T5;
right = right(1:4,4);
right = simplify(right)

This gives us the position equations. I find t1 using this and the results match the forward kinematics equations, but I couldn't find t2 and t3. How can I do that? Is there a formula or something?
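If, after solving t1, the remaining structure reduces to a planar two-link arm with effective link lengths $a_2$ and $a_3$ (an assumption that holds for many elbow-type arms but should be checked against your frames), the usual closed-form route for t2 and t3 is the law of cosines:

$$
\cos\theta_3 = \frac{r^2 + s^2 - a_2^2 - a_3^2}{2\,a_2 a_3},\qquad
\theta_3 = \operatorname{atan2}\!\left(\pm\sqrt{1-\cos^2\theta_3},\ \cos\theta_3\right),
$$
$$
\theta_2 = \operatorname{atan2}(s,\, r) - \operatorname{atan2}\!\left(a_3\sin\theta_3,\ a_2 + a_3\cos\theta_3\right),
$$

where $r$ and $s$ are the in-plane radial and vertical distances from the joint-2 axis to the wrist point, and the $\pm$ gives the elbow-up/elbow-down branches.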
I'm currently programming a C socket for communicating via (Ethernet) TCP/IP with a UR3 robot whose control software is version 3.3. I'm able to get raw data from ports 30001, 30002, and 30003. I can deserialize the data thanks to the source code of a ROS driver, but I wish to have more information about the deserialization, and Universal Robots does not seem to provide it. Does anybody know where to get information about the TCP packets that the robot sends?
I am currently working on state estimation/navigation for a system with multiple robots. As of now, what I have is each robot localizing itself with a Kalman filter, given vision-based measurements. As next steps, I am aiming to do two things: extend this filtering framework to span all robots so that they can cooperate and improve each other's localization; and, along with that, construct a path planning framework such that they can navigate in a way that their localization accuracy is always maximized, thereby eliminating the problem of losing position, etc. To this end, I've been reading about multi-robot state estimation and planning strategies, and have come across belief space planning, or planning under uncertainty. While the math intuitively makes sense, I am having issues with how to implement these techniques in my real-world scenario, especially for multiple robots. I have experience using algorithms such as the EKF, UKF, etc., and sampling-based planning strategies like PRM/RRT, but I am having trouble with the probabilistic link between these two. So far, I've been looking into research papers, but as someone who's mainly a programmer, I'm trying to find something more approachable that will help me link the (somewhat abstract) math to my specific problem: for instance, helping me define terms such as 'joint belief of the entire group' using the data I have in hand. What are my best options, and are there better resources I can consult?
I have a controller and a plant in series. The controller is a 3-input, 3-output MIMO system and the plant is also a 3-input, 3-output system. The Bode plot of the open-loop gain, i.e., $$D(z)=C(z)G(z)$$ appears to be different when using $$series(C(z),G(z))$$ versus $$ D(z) = C_1(z)G_1(z) + C_2(z)G_2(z) + C_3(z)G_3(z) $$ Theoretically, I believe both are the same. However, the latter method gives a different Bode plot with an undamped peak, unlike the one using the series command. The approach using $ D(z) = C_1(z)G_1(z) + C_2(z)G_2(z) + C_3(z)G_3(z) $ is the most suitable in my view, as it gives more clarity. Can anyone share an idea of why this discrepancy exists?
We are currently working on a project where we want to use EtherCAT as a communication protocol between a central system (master) and several nodes (slaves). We want these slaves to have the following: GPIO for sensors; local processing (microprocessor ~200 MHz+); programmability (e.g. through USB); and an EtherCAT connection. We've looked a lot at off-the-shelf solutions and came across several EtherCAT modules which can handle the communication, such as the EasyCAT PRO, Anybus M40 and BECKHOFF F1111. We could connect these modules to an off-the-shelf microprocessor board (e.g. a BeagleBone). However, we are also looking into an integrated, more powerful solution, because we do not believe this can handle everything we want to do. The TI AM3359 ICE suits our purposes and we have bought and tested it. However, we were wondering if there are smaller, off-the-shelf solutions, since this one has a lot of things we do not need (e.g. a screen and a CAN connection) and it requires making our own PCB. So this is my question: do you know of any EtherCAT slave sensor node that can meet our needs and nothing more? We have been looking a lot but cannot find anything of this kind.
Is (G1(z)*C1(z)) + (G2(z)*C2(z)) + (G3(z)*C3(z)) the right way of computing the open-loop gain for the attached block diagram? The system order differs from the order obtained using series(G(z),C(z)). Could anyone help?
I'm currently developing an EKF to estimate the position and orientation of a quadcopter. My state vector is comprised of 3D position, 3D velocity, 3 Euler angles and the angular velocity vector. Right now I'm looking into the measurement equation for the accelerometer. If I understood correctly, an accelerometer measures "proper acceleration" instead of coordinate acceleration, that is, it measures the acceleration of the body w.r.t. a free-falling coordinate system. If this is the case, and supposing the only forces acting on the body are the upward thrust given by the propellers, $\vec{T}$, and earth's gravitational force, $m\vec{g}$, then I understand that the only acceleration that would be measured by the accelerometer is the one caused by $\vec{T}$ (since the free-falling frame has no way of measuring the acceleration caused by $m\vec{g}$, because it is also being accelerated by it). If this is also the case, then I note that the vector $\vec{T}$, when expressed in the body coordinate frame (i.e. a coordinate frame fixed at the center of mass of the body, and always aligned with the body's orientation), does not depend on any of the states whatsoever. For example, if the propellers are assembled such that $\vec{T}$ is always perpendicular to the plane where the propellers are, then $\vec{T}$ in the body frame is specified as $(0,0,\alpha)^T$, where $\alpha$ is the magnitude of the thrust given. This leads me to conclude that (since the measured acceleration doesn't depend on the states) I can't use accelerometer measurements to obtain more information about any of my states (??). This conclusion seems paradoxical to me, and that's why I ask this here. Could someone please point out the mistake in my reasoning, or elucidate why this is not a paradox?
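For what it's worth, the usual accelerometer (specific force) measurement model written in the body frame, under the stated assumption that thrust and gravity are the only forces, is

$$
a_{\text{meas}} \;=\; R_{wb}^{T}\left(\ddot{p}_w - \vec{g}_w\right) + b_a + \nu
\;=\; \frac{1}{m}\vec{T}_b + b_a + \nu,
$$

where $R_{wb}$ is the body-to-world rotation, $b_a$ an accelerometer bias and $\nu$ noise; this matches the reasoning above that, absent other forces such as aerodynamic drag, the measurement reduces to the body-frame thrust over mass.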
I am trying to implement 'belief space' planning for a robot that has a camera as its main sensor. Similar to SLAM, the robot has a map of 3D points, and it localizes by performing 2D-3D matching with the environment at every step. For the purpose of this question, I am assuming the map does not change. As part of belief space planning, I want to plan paths for the robot that take it from start to goal, but in a way that its localization accuracy is always maximized. Hence, I would have to sample possible states of the robot, without actually moving there, and the observations the robot would make if it were at those states, which together (correct me if I am wrong) form the 'belief' of the robot, subsequently encoding its localization uncertainty at those points. And then my planner would try to connect the nodes which give me the least uncertainty (covariance). As my localization uncertainty for this camera-based robot depends entirely on things like how many feature points are visible from a given location, the heading angle of the robot, etc., I need an estimate of how 'bad' my localization at a certain sample would be, to determine if I should discard it. To get there, how do I define the measurement model for this: would it be the camera's measurement model, or would it be something relating to the position of the robot? How do I 'guess' my measurements beforehand, and how do I compute the covariance of the robot through those guessed measurements? EDIT: The main reference for me is the idea of Rapidly-exploring Random Belief Trees, which is an extension of the Belief Roadmap method. Another relevant paper uses RRBTs for constrained planning. In this paper, states are sampled similarly to conventional RRTs and represented as vertices of a graph, but when the vertices are to be connected, the algorithm propagates the belief from the current vertex to the new one (the PROPAGATE function in section V of 1), and here is where I am stuck: I don't fully understand how I can propagate the belief along an edge without actually traversing it and obtaining new measurements, and thereby new covariances from the localization. The RRBT paper says "the covariance prediction and cost expectation equations are implemented in the PROPAGATE function": but if only the prediction is used, how does it know, say, whether there are enough features at the future position that could enhance or degrade the localization accuracy?
Could anyone tell me what sensors are used in a digital pen, specifically the Equil smartpen and smart marker, which can track handwriting? Is it MEMS based? If yes, is it a MEMS accelerometer only, or a combination of MEMS sensors like a gyroscope, magnetometer and accelerometer? What algorithms are used here?
I have a project where I have to move a Mitsubishi Melfa RV-2F-Q robot to a position/orientation from an external source, so there are no pre-defined points available. The problem I keep running into is that even if I give it a reachable position (within operating range), or even a position that is almost the same as its current one, it fails to move there with the error: L2802 Illegal position data (dstn) What causes this and how do I avoid it?
I'm trying to make a model for a robotic arm in Simulink and then execute it on an Arduino controller, but I also want to use the Robotics Toolbox to calculate forward and inverse kinematics, so I need to add a MATLAB Function block to my Simulink model. I tried it, but it keeps giving errors; when I try it outside Simulink, however, it gives results! Is that because the Robotics Toolbox works only with MATLAB but not with Simulink? The code in the MATLAB Function block:

function [theta1, theta2, theta3, theta4] = fcn(x, z, y)
L(1) = Link([0, 0.03, 0, pi/2, 0]);
L(2) = Link([0, 0, 0.12, 0, 0]);
L(3) = Link([0, 0, 0.1, -pi/2, 0]);
L(4) = Link([0, 0, 0, 0, 0]);
robot = SerialLink(L, 'name', 'robot');
robot.tool = transl(0, 0, .08);
robot.base = transl(0, 0, .07);
T = transl(x, y, z);
q = robot.ikine(T, [0, 0, 0, 0], [1 1 1 0 0 1]);
q(1) = fix(q(1)*(180/pi));
q(2) = fix(q(2)*(180/pi));
q(3) = fix(q(3)*(180/pi));
q(4) = fix(q(4)*(180/pi));
theta1 = q(1)
theta2 = q(2)
theta3 = q(3)
theta4 = q(4)

When I try it in the command window, away from Simulink, it gives results for x=1, y=1, z=1 and higher values:

Warning: solution diverging at step 245, try reducing alpha
> In SerialLink/ikine (line 260)
Warning: solution diverging at step 459, try reducing alpha
> In SerialLink/ikine (line 260)
Warning: solution diverging at step 533, try reducing alpha
> In SerialLink/ikine (line 260)
Warning: ikine: iteration limit 1000 exceeded (row 1), final err 1.790820
> In SerialLink/ikine (line 179)
theta1 = -21996
theta2 = -1050694
theta3 = 2711696
theta4 = 2848957

But for x=0.2, y=0.2, z=0.2 it gives this error:

. . . .
> In SerialLink/ikine (line 260)
Warning: solution diverging at step 567, try reducing alpha
> In SerialLink/ikine (line 260)
Warning: solution diverging at step 570, try reducing alpha
> In SerialLink/ikine (line 260)
Warning: solution diverging at step 574, try reducing alpha
> In SerialLink/ikine (line 260)
Warning: solution diverging at step 578, try reducing alpha
> In SerialLink/ikine (line 260)
Error using tr2angvec (line 97)
matrix not orthonormal rotation matrix
Error in SerialLink/ikine (line 191)
[th,n] = tr2angvec(Rq'*t2r(T));
As far as I know, most robotic arms are special-purpose and usually work under the supervision of an expert, such as a surgical robot. So, is it relevant for a robotic arm to be intelligent and autonomous? If so, how might its control system differ from that of non-intelligent ones?
I want a comparison between human hands and robot hands, with respect to grasping and squeezing objects. How can I perform such a comparison, using sensors, as one would with a benchmarking database? Is there a standard, or ready-to-use, system?
I know this has probably been asked a thousand times, but I'm trying to integrate a GPS + IMU (which has a gyro, accelerometer, and magnetometer) with an extended Kalman filter to get better localization as my next step. I'm using a global frame of localization, mainly latitude and longitude. I kind of 'get' the Kalman equations, but I'm struggling with what my actual state and my sensor prediction should be. On one hand I have the latitude and longitude of my GPS, and on the other the roll, pitch and yaw of my IMU (which is already fused by some algorithm on board the chip, I think) in Euler degrees. I think I can throw the pitch and roll away. And I know I have to write a function for my state $$ x_t = f (x_{t-1}, \mu_{t-1}) $$ and a function that predicts what my sensor is seeing at step $t$: $$ \mu_{t} = h(x_t)$$ The thing is, I don't know what these functions should depend on. Should my state be about the GPS? In that case, how can I predict the next yaw reading, since I don't think I can get the rotation from a difference in GPS locations? On the other hand, if my state is the yaw, I need some kind of speed, which the GPS is giving me; in that case would the Kalman filter work, since I'm using the speed from the GPS to predict the next GPS location? Long story short, I don't know what my state and sensor prediction should be in this case. Thanks in advance. Edit: I have an Ackermann-steering mobile robot with no encoders, which has a GPS and an IMU (gyro, accelerometer and magnetometer) mounted. The IMU fuses these values into Euler angles, and the GPS gives me latitude and longitude.
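Purely as an illustration of one possible choice (not the only valid one): a small state with local east/north position, heading and speed, a bicycle-style prediction driven by assumed acceleration/steering inputs, and a measurement function that maps the state to GPS position (converted from lat/lon to local metres) plus IMU yaw.

    import numpy as np

    # State x = [e, n, yaw, v]: local east/north position [m], heading [rad], speed [m/s].

    def f(x, u, dt, wheelbase=1.0):
        """Prediction step: simple bicycle/Ackermann model.
        u = [a, delta] is an assumed input of longitudinal acceleration and steering angle."""
        e, n, yaw, v = x
        a, delta = u
        return np.array([
            e + v * np.cos(yaw) * dt,
            n + v * np.sin(yaw) * dt,
            yaw + (v / wheelbase) * np.tan(delta) * dt,
            v + a * dt,
        ])

    def h(x):
        """Measurement model: GPS position (already converted to local e/n metres)
        and IMU yaw, i.e. z = [e_gps, n_gps, yaw_imu]."""
        e, n, yaw, _ = x
        return np.array([e, n, yaw])

The point of the sketch: the state is the robot pose (plus speed), while the GPS and the IMU yaw are both treated as measurements of that state, with lat/lon converted to local metres before comparing against h(x).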
I'm trying to write inverse kinematics MATLAB code for a 6 DOF robotic arm that has the following link parameters:
Twist angle (alpha): [-90, 0, 90, -90, 90, 0]
Link length (a): [0, 0.5, 0, 0, 0, 0]
Offset distance (d): [0, 0.25, 0, 1, 0, 0.5]
and Px, Py, Pz are [1, 1, 0].
I'm using the following equations for the theta 1, 2 and 3 values (closed-form solution). As seen in the equations, theta 1 and theta 2 each have two roots (two possible solutions); thus the robot has eight groups of inverse kinematics solutions. How do I modify my code to select the ideal solution for theta?

%Theta 1
theta1 = (atan2(real(py),real(px))) - atan2(real(d2), real(sign1*sqrt(px^2+py^2-d2^2)));
c1 = cos(theta1);
s1 = sin(theta1);
%Theta 2
A = (c1*px) + (s1*py);
B = (A^2 + pz^2 + a2^2 - d4^2)/(2*a2);
theta2 = (atan2(real(A),real(pz))) - atan2(real(B), real(sign2*sqrt(A^2+pz^2-B^2)));
c2 = cos(theta2);
s2 = sin(theta2);
%Theta 3
A1 = (c2*px) + (s2*py);
theta3 = (atan2(real(A1-(a2*c2)), real(pz+(a2*s2)))) - theta2;
c3 = cos(theta3);
s3 = sin(theta3);
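One common way to pick among the branches (a sketch only; solve_branch and forward_kinematics below are hypothetical helpers you would replace with your own closed-form solver and FK) is to enumerate the sign choices, drop branches that violate joint limits or do not reproduce the target position, and keep the one closest to the current configuration:

    import itertools
    import numpy as np

    def pick_solution(solve_branch, forward_kinematics, p_target, q_current, q_limits):
        """solve_branch(sign1, sign2, elbow) -> joint vector for one of the 8 branches."""
        best, best_cost = None, np.inf
        for sign1, sign2, elbow in itertools.product((-1, 1), repeat=3):
            q = solve_branch(sign1, sign2, elbow)
            if np.any(q < q_limits[:, 0]) or np.any(q > q_limits[:, 1]):
                continue                                    # joint limit violated
            if np.linalg.norm(forward_kinematics(q) - p_target) > 1e-3:
                continue                                    # branch does not reach the target
            cost = np.linalg.norm(q - q_current)            # prefer the least joint motion
            if cost < best_cost:
                best, best_cost = q, cost
        return best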
With a group of students we are building an exoskeleton for paraplegics. I am part of the Software & Control department and we are designing the motor controller configuration. We have several requirements:
Size is one of our main requirements. We want the exoskeleton to be as slim as possible, and this is true for all of the components as well, so we want the motor controller to be as small as possible.
The motor controller has to work with a brushless DC motor.
A setpoint will be sent from a microcontroller to the motor controller.
Two or three absolute encoders need to be connected; this depends on the design of the joint. We are either going to use position or torque control. We need one encoder for the motor angle and one for the joint angle. We might implement Series Elastic Actuation in our joint, and then we want to be able to measure the deflection in the spring and thus need two encoders for that.
A continuous power output in the range of 700 - 1200 Watt.
While exploring several off-the-shelf possibilities, we came across the Elmo Gold Twitter and the IOMI Pro. One of the problems with these boards is the number of absolute encoders that can be connected: both the Elmo and the IOMI board can have either one or no absolute encoders connected. We came up with a solution so that we are able to connect multiple encoders. In our exoskeleton we are going to use an EtherCAT master-slave configuration. The high-level control (e.g. the state machine) runs on the EtherCAT master and sends joint-angle setpoints. Our idea is to use an EtherCAT slave as a sort of second controller, which gets the joint setpoint and the joint encoder values, calculates the motor setpoint and sends it out to an off-the-shelf controller like the Elmo Gold Twitter or the IOMI Pro. My question is the following: Is this even a good solution, and what are other solutions to this problem? Are there better alternatives for a motor controller? Might it be a better idea to build and program our own motor controller? (Please bear in mind that we have limited experience in that area.) I thank you all in advance for your reply! Cheers! Nathan
I'm working on a DIY flight controller for a quadcopter. I have a question for which I can't find a good answer, so perhaps you could help me. I'm using a cascaded PID controller for pitch and roll regulation: first there is a stabilize PID and then a rate PID. For the first (stabilize) you input the desired angle from the transmitter and the actual angle from the IMU; then its output is fed into the rate controller, and from there it goes to the motors. In code I'm polling a "data is ready to read from IMU" condition, which becomes true every 1 ms. In that branch I'm calculating one regulator and writing to the motors. The loop time when this condition is not true is much lower than that. So one regulator should be inside this slow loop and one outside. Which one should be fast and which one slow? In my understanding, the stabilize PID should be the fastest. Is that correct? Also, should both regulators be full PID controllers? Thanks for your help!
I'm interested in exoskeletons and wearable rehabilitation robotics. I wonder how we can estimate/predict the intention of a human body or limb motion. I want to prevent the exoskeleton from interfering with human movements. Intention reading is the process of predicting, at the very start of any movement, how the movement will take place. There is an exoskeleton example (https://www.youtube.com/watch?v=BdoblvmTixA) which detects muscle activation with EMG and actuates an artificial muscle, but this is only an open-and-close action, and it begins after the movement has started. Also, an EMG system has a lot of disadvantages, like sliding probes and crosstalk from other/crossing muscles. I want to estimate every motion, like turning, twisting and the amount of contraction. I'm open to suggestions or to hearing about issues (the troubles you have experienced). The MATLAB webinar "Signal Processing and Machine Learning Techniques for Sensor Data Analytics" shows how to classify different actions, but that example predicts the kind of motion after the motion is completed. I need to know the motion information at the very start. I want to know how I can estimate different motions at the beginning of a limb action. Which system (EMG, EEG, IMU, etc.) and which processing technique would be better, or which combination should I use?
I was asked this in a phone interview for a robotics job. Googling has not really helped. I assume it is some sort of state prediction model that can be used in a Kalman filter. Can anyone give me a formal description? A link to a reference would also be nice. EDIT: To clarify, the interview was for a self-driving car company, and before the question we had been discussing Kalman filters, particle filters, and path planning algorithms (A*).
I'm planning to build a balance bot and I would like to know which design I should go with. There's a vertical design such as the ArduRoller, and on the other hand there's the typical stacked type of design. I have attached images below for both of them. Although both of these are based on the principle of the inverted pendulum, how do they differ from each other in terms of stability and response (assuming both designs are the same size)? Is the mass distribution in the vertical design better? Also, is there a difference in programming them?
I recently read a paper titled Finding Locally Optimal, Collision-Free Trajectories with Sequential Convex Optimization by John Schulman, Jonathan Ho, Alex Lee, Ibrahim Awwal, Henry Bradlow and Pieter Abbeel. The authors mention that the end-effector final pose constraint can be readily incorporated in the planning scheme, which is based on solving an unconstrained optimization with the equality and inequality constraints added as penalty functions. Let $F_{targ} \in$ SE(3) be the desired pose and $F_{cur}(\theta)$ be the current pose; then the pose error is given as $F^{-1}_{targ}F_{cur}(\theta)$. However, I am wondering: if we plan motion in the joint space, how can this error be incorporated in the objective function as a penalty term? Since $F_{cur}(\theta)$ is a highly nonlinear, nonconvex forward kinematics map, are we linearizing the forward kinematics map to make it convex and adding it to the penalty formulation?
I want to do forward dynamics, but before that I got stuck on inverse kinematics for a 4 DOF arm. My code is given below:

preach = [0.2, 0.2, 0.3];
% create links using D-H parameters
L(1) = Link([ 0 0 0 pi/2 0], 'standard');
L(2) = Link([ 0 .15005 .4318 0 0], 'standard');
L(3) = Link([0 .0203 0 -pi/2 0], 'standard');
L(4) = Link([0 .4318 0 pi/2 0], 'standard');
%define link mass
L(1).m = 4.43;
L(2).m = 10.2;
L(3).m = 4.8;
L(4).m = 1.18;
%define center of gravity
L(1).r = [ 0 0 -0.08];
L(2).r = [ -0.216 0 0.026];
L(3).r = [ 0 0 0.216];
L(4).r = [ 0 0.02 0];
%define link inertia as a 6-element vector
%interpreted in the order of [Ixx Iyy Izz Ixy Iyz Ixz]
L(1).I = [ 0.195 0.195 0.026 0 0 0];
L(2).I = [ 0.588 1.886 1.470 0 0 0];
L(3).I = [ 0.324 0.324 0.017 0 0 0];
L(4).I = [ 3.83e-3 2.5e-3 3.83e-3 0 0 0];
% set limits for joints
L(1).qlim=[deg2rad(-160) deg2rad(160)];
L(2).qlim=[deg2rad(-125) deg2rad(125)];
L(3).qlim=[deg2rad(-270) deg2rad(90)];
%build the robot model
rob = SerialLink(L, 'name','rob');
qready = [0 -pi/6 pi/6 pi/3 ];
m = [1 1 1 1 0 0]; % mask matrix
T0 = fkine(rob, qready);
t = [0:.056:2];
% do inverse kinematics
qreach = rob.ikine(T0, preach, m);
[q,qd,qdd] = jtraj(qready, qreach, t);
%compute inverse dynamics using recursive Newton-Euler algorithm
tauf = rne(rob, q, qd, qdd);
% forward dynamics
[t1,Q,Qd] = rob.fdyn(2, tauf(5,:), q(5,:), qd(5,:));

But at the line qreach = rob.ikine(T0, preach, m); it shows the error:

Index exceeds matrix dimensions.
Error in SerialLink/jacobn (line 64)
U = L(j).A(q(j)) * U;
Error in SerialLink/jacob0 (line 56)
Jn = jacobn(robot, q); % Jacobian from joint to wrist space
Error in SerialLink/ikine (line 153)
J0 = jacob0(robot, q);

Can anybody explain why this is happening and how to resolve it? Thanks.
I want to make a path planning algorithm for a quadrotor using RRT in my thesis. I have searched through a lot of articles and came across the concept of "dynamic RRT", and one of the articles is titled "kinodynamic RRT*". I emailed the author of the article but got no response. The main point that I couldn't understand is this: for dynamic RRT we need to sample a random state, like two position and two velocity values for a planar vehicle, or an angle and its rate in the case of a 2D quadrotor. How should the samples be drawn so that speeds and positions do not conflict, and when should I take into account the saturation limits of the actuators or the vehicle's acceleration limits? I can't understand how to handle the case where two consecutive position samples are A(0,0) and B(10,10): this requires a positive velocity at point B, but sampling could produce a negative velocity. Am I wrong? The other issue is how the control signal should be determined so that it can be applied for a duration of delta t to move as close as possible to the sampled point. I am not sure how to determine the input or move the vehicle. Do I need an optimization so that it can reach the sampled point in the shortest possible time? Please let me know if there is a part I am missing. Thanks in advance, and I wish you a happy new year.
I am using a gyroscope only to get real-time angles as I move the IMU, using a microcontroller. I am able to get angles with pretty decent accuracy (2 to 3 degrees of error). I am using quaternions for obtaining the angles. The angles are with respect to the initial position of the IMU. If I rotate the IMU around one axis at a time, I get good accuracy. But when I rotate around all axes at once, the problem described below starts.

Problem: When I give a pitch and then a yaw, all seems fine. Now, at this position of pitch and yaw, I give a roll, and the other two angles also change. The gyroscope gives the raw angular rate in DPS, which is converted to radian/sec for processing. The quaternions are calculated from the raw angular rate. The code to convert quaternions to Euler angles is as follows:

//Local variables for clarity
fqw = gsangleparam.fquaternion[0];
fqx = gsangleparam.fquaternion[1];
fqy = gsangleparam.fquaternion[2];
fqz = gsangleparam.fquaternion[3];

//Calculate Angles
gsangles.yaw   = atan2(2 * (fqx*fqy + fqw*fqz), (fqw*fqw + fqx*fqx - fqy*fqy - fqz*fqz)) * 180/PI;
gsangles.pitch = asin(-2 * (fqx*fqz - fqw*fqy)) * 180/PI;
gsangles.roll  = atan2(2 * (fqy*fqz + fqw*fqx), (fqw*fqw - fqx*fqx - fqy*fqy + fqz*fqz)) * 180/PI;

The problem with roll is present only when the initial position (and also its 180-degree counterpart) is as follows:
X - facing left
Y - facing towards us
Z - facing up
The problem is: when I give a pitch (around X here), a yaw (around Z) and then a roll (around Y here), or any sequence of roll, pitch and yaw, the yaw changes. In fact, whenever I rotate about the Y axis (roll here), the yaw (around Z) changes. There is no problem when the initial position (and also its 180-degree counterpart) is as follows:
X - facing away from us
Y - facing towards us
Z - facing up
In that case there is no problem, i.e. when I give a pitch (around Y here), a yaw (around Z) and then a roll (around X here), or any sequence of roll, pitch and yaw, nothing goes wrong. Why is this happening? Can anyone please explain? Thanks in advance.
I'm a beginner at quadcopter construction, using a PID controller and a PIC 16F877 microcontroller. Now I'm struggling with how to build the control process from a mathematical model of my quadcopter, and how to draw a flow chart for it.
Currently, I'm building a quadcopter using Arduino. To make the copter able to stabilize itself I use an MPU6050 accelerometer + gyroscope. I understand that I can get the angle of rotation by integrating the values of the gyro. I understand too that I can calculate the angle from the amount of g acting on the different axes of the accelerometer. But from the accelerometer I get values of around 1000 to 14000. What are these values? How can I get the angle from these values? How can I turn these values into motor rotation?
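As a rough sketch of the conversion (assuming the MPU6050 is configured for its default ±2 g range, where 1 g reads as 16384 LSB; other full-scale ranges change the scale factor):

    import math

    ACCEL_LSB_PER_G = 16384.0   # ±2 g full scale; 8192/4096/2048 for ±4/±8/±16 g

    def accel_angles(ax_raw, ay_raw, az_raw):
        """Convert raw accelerometer counts to g, then to roll/pitch in degrees."""
        ax, ay, az = (v / ACCEL_LSB_PER_G for v in (ax_raw, ay_raw, az_raw))
        roll = math.degrees(math.atan2(ay, az))
        pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        return roll, pitch

    print(accel_angles(1000, 500, 16000))   # small-tilt example

This tilt estimate is only valid while the copter is not accelerating, which is why it is normally blended with the integrated gyro rate (complementary or Kalman filter); the resulting angle error then feeds a PID whose output is added to or subtracted from the individual motor throttle commands rather than being sent to the motors directly.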
I am doing a project on calibrating a stereo ZED camera, finding its accuracy and comparing it with the manufacturer's claimed accuracy of 1% at 1 m depth. For this purpose, the formula to calculate the depth accuracy is $dz = (z^2 * de) / (f * b)$, but how do we calculate $z$, $de$ and $f$? Are they taken from the MATLAB stereo calibration app, which gives a 'stereoParameters' object? Here $dz$ is the depth error in meters, $z$ is the depth in meters, $de$ is the disparity error in pixels, $f$ is the focal length of the camera in pixels and $b$ is the camera baseline in meters.
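As a worked example with assumed numbers (the actual focal length comes out of your calibration intrinsics, the baseline is a physical property of the camera, roughly 0.12 m for the ZED, and the disparity error is typically a fraction of a pixel determined by the matching algorithm):

    z = 1.0      # evaluation depth [m]
    f = 700.0    # assumed focal length [px] from calibration
    b = 0.12     # assumed ZED baseline [m]
    de = 0.1     # assumed disparity error [px]

    dz = (z ** 2 * de) / (f * b)
    print(f"depth error at {z} m: {dz * 1000:.1f} mm ({100 * dz / z:.2f} %)")

In other words, $z$ is simply the depth at which you evaluate the error, $de$ is set by how precisely your matcher localizes disparity (often estimated empirically), and $f$ and $b$ come from the calibration results.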
I have been working on writing code to control the iRobot Create movements (forward, backward, right, left and stop) from the serial monitor, and I finally got the code working. I was trying to understand how to make it stop moving when it faces an obstacle, but I couldn't work it out. Also, I don't know how to make it move for a specific distance. Could anyone help me with this? Here is my code:

#include "Arduino.h"
#include "Morse.h"
#include <SoftwareSerial.h>

#define rxPin 10
#define txPin 11

SoftwareSerial softSerial = SoftwareSerial(rxPin, txPin);
char inByte = 0; // incoming serial byte

irobot::irobot(int pin)
{
  pinMode(pin, OUTPUT);
  _pin = pin;
}

void irobot::Begin()
{
  delay(2000); // Needed to let the robot initialize
  // define pin modes for software tx, rx pins:
  pinMode(rxPin, INPUT);
  pinMode(txPin, OUTPUT);
  // start the SoftwareSerial port at 19200 bps (robot's default)
  softSerial.begin(19200);
  // start hardware serial port
  Serial.begin(19200);
  softSerial.write(128); // This command starts the OI.
  softSerial.write(131); // set mode to safe (see p.7 of OI manual)
  delay(2000);
}

void irobot::runIt()
{
  softSerial.write(142); // requests the OI to send a packet of
                         // sensor data bytes
  softSerial.write(9);   // request cliff sensor value specifically
  delay(250);            // poll sensor 4 times a second
  if (Serial.available() > 0) {
    inByte = Serial.read();
    if (inByte == '8') {
      goForward();
    }
    if (inByte == '2') {
      goBackward();
    }
    if (inByte == '4')
      goLeft();
    if (inByte == '6')
      goRight();
    if (inByte == '5')
      stop();
  }
  Serial.println(inByte);
}

void irobot::goForward()
{
  // Drive op code
  softSerial.write(137);
  // Velocity (-500 to 500 mm/s)
  softSerial.write((byte)0);
  softSerial.write((byte)200);
  // Radius (-2000 to 2000 mm)
  softSerial.write((byte)128);
  softSerial.write((byte)0);
}

void irobot::goBackward()
{
  // Drive op code
  softSerial.write(137);
  // Velocity (-500 to 500 mm/s)
  softSerial.write(255);
  softSerial.write(56);
  // Radius (-2000 to 2000 mm)
  softSerial.write((byte)128);
  softSerial.write((byte)0);
}

void irobot::goLeft()
{
  // Drive op code
  softSerial.write(137);
  // Velocity (-500 to 500 mm/s)
  softSerial.write((byte)0);
  softSerial.write((byte)200);
  // Radius (-2000 to 2000 mm)
  softSerial.write((byte)0);
  softSerial.write((byte)1);
}

void irobot::goRight()
{
  // Drive op code
  softSerial.write(137);
  // Velocity (-500 to 500 mm/s)
  softSerial.write((byte)0);
  softSerial.write((byte)200);
  // Radius (-2000 to 2000 mm)
  softSerial.write((byte)255);
  softSerial.write((byte)255);
}

void irobot::stop()
{
  softSerial.write(137);
  softSerial.write((byte)0);
  softSerial.write((byte)0);
  // Radius (-2000 to 2000 mm)
  softSerial.write((byte)0);
  softSerial.write((byte)0);
}
In my "Advanced Robotics" course, with "Fundamentals of Robotic Mechanical Systems" as the reference book, I saw the following equation as the velocity relation for parallel manipulators such as the one depicted in the picture: $$\mathbf{J}\mathbf{\dot{\theta}}=\mathbf{K}\mathbf{t}$$ where $\mathbf{\dot{\theta}}$ is the joint-rate vector and $\mathbf{t}$ is the twist, and $\theta_{J1}$ for $1\leq J\leq 3$ is the actuated joint. It was indicated there that the matrix $\mathbf{J}$ is usually a diagonal matrix, with reference to the boundaries of the workspace. But the proof for this velocity relation was only given case by case; in other words, there was no general proof of this equation. So here are my questions: Is there a way to derive a general velocity formula for parallel manipulators (a formula that would show the relation between joint rates and twists)? Are there cases in which $\mathbf{J}$ is not a diagonal matrix?
I'm doing inverse kinematics for a 4 DOF robot using the Robotics Toolbox in MATLAB. The code is given below:

preach = [0.326 0.223 0.342]; % reach point of end-effector
% create links using D-H parameters
% Link('d', 0.15005, 'a', 0.0203, 'alpha', -pi/2)
L(1) = Link([0 0 0.15 pi/2 0], 'standard');
L(2) = Link([0 0 0.15 0 0], 'standard');
L(3) = Link([0 0 0.15 0 0], 'standard');
L(4) = Link([0 0 0.15 0 0], 'standard');
% set limits for joints
L(1).qlim=[deg2rad(-160) deg2rad(160)];
L(2).qlim=[deg2rad(-45) deg2rad(45)];
L(3).qlim=[deg2rad(-60) deg2rad(60)];
L(4).qlim=[deg2rad(-50) deg2rad(50)];
%build the robot model
rob = SerialLink(L, 'name','rob');
qready = [0 0 0 0]; % initial position of robot
plot(rob, qready, 'noname');
T1 = transl(preach); % convert reach point to a 4x4 homogeneous matrix
[qreach, err] = rob.ikcon(T1, qready); % find inverse kinematics with error

MATLAB shows results like this (using the Robotics Toolbox):

>> [qreach,err] = rob.ikcon(T1, qready)
qreach =
    2.7925    0.7854    1.0472    0.8727
err =
    9.6055

I'm not choosing preach = [0.326 0.223 0.342] randomly. In fact, I first do forward kinematics to get these points. The code is below:

% to find forward kinematics
qreadyrr = [0.6 0.45 0.63 0.22]; % setting the four angles randomly within range to get preach
T0 = fkine(rob, qreadyrr);

Then I got T0 as:

>> T0
T0 =
    0.2208   -0.7953    0.5646    0.3267
    0.1510   -0.5441   -0.8253    0.2235
    0.9636    0.2675    0.0000    0.3421
         0         0         0    1.0000

Also, when I put this T0 in place of T1 in the inverse kinematics code given above, the values I get are very accurate with negligible error:

>> [qreach,err] = rob.ikcon(T0, qready)
qreach =
    0.6002    0.4502    0.6296    0.2204
err =
   4.6153e-07

The point is, in my case I only have the px, py and pz values for the transformation matrix, but with only these, the inverse kinematics is not solved correctly. I want to do inverse kinematics from just the px, py and pz values. How can I do it correctly? Thanks.
I saw this link today. This robot seems real, but redditors in a Reddit thread argued that it might be CGI. It doesn't seem unfeasible to make such a robot, in my opinion. There isn't anything out of the norm in the video except the size. Plus, there exist Kuratas and MegaBots; though they don't make bipedal robots, the scale is similar. From a roboticist's point of view, is this robot real? What are the technological limitations preventing such a robot from being developed, if it is fake?
Let the forward kinematics map be denoted by $\mathcal{F}$ such that $\mathcal{F}: \theta \in \mathbb{R}^{n} \rightarrow g \in SE3$ Let the minimal representation of $g$ be given by $x \in \mathbb{R}^{6}$ using axis-angle or other forms of attitude parametrization. If we differentiate the forward kinematics map, we get $\dot{x} = J_{a}\dot{\theta}$, where $J_{a}$ is the analytical Jacobian. This equation is commonly used in numerical inverse kinematics. However, can we do the reverse? $x(t_{f})-x(t_0) = \int^{t_{f}}_{t_{0}}J_{a}\dot{\theta}dt$
I'm using the Simulink support package for Arduino to read serial data from port 2 on an Arduino Due. My plan is to read signed integers (-415, for example) representing motor speed and feed them to the PID controllers as in the image. From the far end I'm sending delimited data in the shape shown. The MATLAB Function block in Simulink is supposed to read the received ASCII characters and add them to a variable until it reaches the end character '>'. I'm using the following simple code just to put the value on both the Right and Left outputs to check whether I'm receiving the correct data; however, I'm not.

function [Right, Left] = fcn(data, status)
SON = '<';
EON = '>';
persistent receivedNumber;
receivedNumber = 0;
persistent negative;
negative = false;
if (status == 1)
    switch (data)
        case EON
            if (negative)
                receivedNumber = -1*receivedNumber;
            else
                receivedNumber = 1*receivedNumber;
            end
        case SON
            receivedNumber = 0;
            negative = false;
        case {'0','1','2','3','4','5','6','7','8','9'}
            receivedNumber = 10*receivedNumber;
            receivedNumber = receivedNumber + double((data - 48));
        case '-'
            negative = true;
    end
end
Right = receivedNumber;
Left = receivedNumber;
end

Can anybody tell me if there are other approaches to reading multiple signed digits in Simulink? Take into consideration that I have to use the support package for Arduino, since my PID controllers are already configured in Simulink and interfaced with port 2 on the Arduino (which will be connected to a BeagleBone Black).
Rewriting this whole question as I've learned a lot more since I first tried to ask. I'm building a tiny bot and looking to use two motorized wheels for movement, but only one motor. I'll have a 3rd wheel (caster) for balance. My goal is to have the wheels move opposite to each other when the motor turns and each wheel reverse direction when the motor reverses direction. The tricky part is I want the ratio to be uneven between the wheels. By this, I mean when one wheel rotates clockwise, it should rotate faster than the other wheel rotating counter-clockwise. This needs to hold true regardless of which wheel is rotating which way. The end result should be that I could run the motor in one direction continuously to have my bot drive in a small circle. If I switch motor directions, it would drive in the same size circle the opposite direction. If I alternate motor directions frequently enough, the bot should move in a fairly straight line (or if I alternate the motor too infrequently, an S like pattern). The closest I can envision so far (thanks to those that have responded here) is to use bevel/miter gears to have my wheels rotate opposite to each other and then to use two differently sized ratcheting mechanisms per wheel working in opposite directions. This would allow each ratcheting mechanism to trigger only in one direction and the RPM per direction would be related to the size of that direction's ratcheting mechanism. Is this the best way to achieve my goal? Is there a name for this concept or is it so uncommon I'll have to build/fabricate it all myself? As far as a ratcheting mechanism, I believe I'd be looking to use a freewheel clutch or I'd need an oscillating lever carrying a driving pawl over top a ratchet gear. My biggest struggle in finding affordable parts is the ratcheting mechanism.
If I can constrain two robots to a hypothetical box, what sensors or common formal methods exist that would enable the two robots to meet? I am specifically interested in the communication of relative position between robots.
As part of my PhD field work it would be useful to have latitude/longitude measurements for the locations of ant nests that I am working on. These ant nests can be as close as 50cm together so the accuracy of the system would (I think) need to be higher than is available from a phone or basic GPS system. Does anyone know what system would be best for getting this sort of accuracy? My budget is probably only around £200. If it isn't possible in that price range it would be good to know what system I would need to use so that I can see if I can just borrow such a system. Thanks a lot!
I need to use the parallel axis theorem to determine the moment of inertia of each robotic arm link referred to the joints of the 3 DOF manipulator. The moments of inertia about each link's centre of mass are given (see the attached values). The link joints are located at the following coordinates relative to each link's centre of mass, and the link masses are specified as shown. I know that to solve for Ixx, Iyy and Izz I use the following equations:
Ixx' = Ixx + m(ry^2 + rz^2)
Iyy' = Iyy + m(rx^2 + rz^2)
Izz' = Izz + m(rx^2 + ry^2)
But I'm not sure how to solve this particular problem. I was thinking that since y1 and z1 both equal 0, to solve the question for that link I only need to compute I1yy and I1zz. Is that right?
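A small helper for the shift itself (a sketch with illustrative numbers only, since the actual link values are in the figures referenced above):

    import numpy as np

    def parallel_axis(I_com_diag, m, r):
        """Shift diagonal inertia terms from the centre of mass to an axis origin offset by r.

        I_com_diag = [Ixx, Iyy, Izz] about the COM, m = link mass,
        r = [rx, ry, rz] vector from the new origin (the joint) to the COM.
        """
        Ixx, Iyy, Izz = I_com_diag
        rx, ry, rz = r
        return np.array([
            Ixx + m * (ry ** 2 + rz ** 2),
            Iyy + m * (rx ** 2 + rz ** 2),
            Izz + m * (rx ** 2 + ry ** 2),
        ])

    # Illustrative numbers only: a 2 kg link whose COM sits 0.1 m along x from the joint.
    print(parallel_axis([0.01, 0.02, 0.02], 2.0, [0.1, 0.0, 0.0]))

Note that when ry = rz = 0, only the Iyy and Izz terms pick up the m·rx² contribution while Ixx is unchanged, which matches the intuition in the question.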
I'm currently working on an EKF for a car-like robot and I'm not sure how to transform the IMU accelerometer data to compensate for roll, pitch and yaw and get the acceleration with respect to the ground. I'm only computing the x, y position of the robot and the yaw of the vehicle.
Given the differential equation of a current-controlled electric actuator, how would I convert the differential equation into its Laplace transform? $$ J \frac{d^2 \phi(t)}{dt^2} + D \frac{d \phi(t)}{dt} + H \phi(t) = K i(t) = 0 $$ I know the reason you're meant to convert this to the Laplace domain is that it's easier to work with, but I'm finding it difficult to understand why you do certain things in certain parts. Any concrete method for how to convert this would be appreciated.
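For what it's worth, applying the Laplace transform term by term with zero initial conditions (and treating $Ki(t)$ as the input, i.e. ignoring the trailing "= 0", which looks like a typo) gives

$$
J s^2 \Phi(s) + D s \Phi(s) + H \Phi(s) = K I(s)
\;\Longrightarrow\;
\frac{\Phi(s)}{I(s)} = \frac{K}{J s^2 + D s + H},
$$

using the rules $\mathcal{L}\{\dot\phi\} = s\Phi(s) - \phi(0)$ and $\mathcal{L}\{\ddot\phi\} = s^2\Phi(s) - s\phi(0) - \dot\phi(0)$ with the initial-condition terms dropped.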
I was wondering if anyone knew of any easy-to-use plug and play systems, that allow you to build and programme a number of actuators and motors quickly and easily? I'm thinking something like Lego Mindstorms, but for adults. I.e. with more powerful motors. I need to be able to generate a torque of around 10 Nm, at a speed of around 10 rpm, and I need to control the rotation angles. Could be either stepper motors, or normal DC motors with hall sensors to measure number of rotations. I'm at the beginning of my journey into robotics, so don't mind paying a bit more for the parts in favour of speeding up my learning curve! If it's not clear already, I'm looking for an "off-the-shelf" product, with a range of products I can browse, and accumulate an increasing number of components over the years. Many thanks in advance for any suggestions!
If Amazon plans to deliver packages using drones, how does it ensure the drones reach their destination safely and come back? Won't thieves be able to easily steal packages and drones by shooting them down? What about threat from birds like falcons?
I have an M5 threaded rod that penetrates a box. Outside it is driven by a motor and it rotates a part inside the box. The box is sealed and I would like to seal the area around the rod as well. I have seen double lip oil seals but I don't think they will work. The oil is very runny, like water and the reason I am not using a smooth rod (which would be simpler to use in this application) is mainly in order to save on tooling costs as I can get the rods already threaded for very cheap. Any help would be appreciated!
Is learning microcontroller programming useful for someone who wants to specialize in robotics? I don't mean platforms like Arduino or Teensy; I mean the MCU itself, like ARM Cortex, etc.
I'm studying for an exam and I came across these questions where you have to determine the number of possible inverse kinematic solutions (questions from Craig, Introduction to Robotics, 3rd Edition). How exactly do you find the number of solutions? I know how to find the DH parameters and then the inverse kinematics, but is there some way to calculate the number of solutions quickly, as this question suggests? I can visualize it (4 for the second one), but is there a proper way?
I'm new here. My name is Mark, I'm 24 years old and I'm a Linux lover. By 2020 I intend to build a wildlife-guardian drone based on the Raspberry Pi, which will be connected to the internet but will operate on its own using artificial intelligence. When I talked about this to a technician at my school in Italy, I was told that the Raspberry Pi is not suitable for an important project, just for little experiments. The way he said it seemed a bit too dismissive to me, but he made me doubtful about the Raspberry Pi's quality, so I hope you can help me understand whether the Raspberry Pi can suit my ambitious plan to build a guardian drone to protect the wildlife of a very small Papua New Guinea island, or whether it will break the first time it encounters trouble. Is the Raspberry Pi 3 only for little experiments, or will it also work for important, stable projects like building robots?
Do we need to install extra software on the EV3, or does Robotary (https://robotaryapp.com/) install programs just like the default LEGO software? Thanks, Josh
I'm working on a TWDD line-following robot using a QTR-8A reflectance sensor array. The sensors are connected to a BeagleBone Black, and the BBB sends the speed serially to an Arduino Due. My approach is to use a PID controller on the sensor reading, so the error equals zero when the robot is centred on the line and is positive/negative depending on the robot's position. Using trial and error, I finally reached a Kp value that tracks straight lines perfectly. However, I'm still unable to turn and stay on the line, even on similarly smooth turns. I guess this is related to the Kd value. I'm not using the integral part Ki, since the error keeps increasing. I tried to add conditions for when the robot is drifting away from the line, but it's not working properly (even without the conditions it somehow turns more smoothly, but then loses the line). I'm using the following draft code:

from bbio import *
import time

integral = 0
last_prop = 0
Kp = 20
Ki = 0
Kd = 150
amax = list(0 for i in range(0, 8))
amin = list(1024 for i in range(0, 8))
timeout = time.time() + 10

# Read ADC data from MCP3008
# ch: 0-7, ADC channel
# cs: 0-1, SPI chip select
# See MCP3008 datasheet p.21
def adc_read(cs, ch):
    spidata = SPI1.transfer(cs, [1, (8+ch) << 4, 0])
    data = ((spidata[1] & 3) << 8) + spidata[2]
    return data

def setup():
    # SPI1 setup
    Serial2.begin(9600)
    Serial5.begin(9600)
    pinMode(GPIO1_7, OUTPUT)
    digitalWrite(GPIO1_7, LOW)
    SPI1.begin()
    SPI1.setMaxFrequency(0, 50000)  # => ~47kHz, higher gives occasional false readings
    # SPI1.setMaxFrequency(1,50000)
    calibrate()

# reading the IR sensor data
def read_sensors():
    sensors = []
    for i in range(8):
        sensors.append(adc_read(0, i))
    return sensors

# calculating the error from PID controller
def calc_pid(x, sp):
    global integral, last_prop, Kp, Ki, Kd
    set_point = sp
    pos = sensor_average(x)/sensor_sum(x)
    prop = pos - set_point
    integral = integral + prop
    deriv = prop - last_prop
    last_prop = prop
    error = (prop*Kp + integral*Ki + deriv*Kd)/100
    return error

def get_position(s):
    return sensor_average(s)/sensor_sum(s)

def sensor_average(x):
    avg = 0
    for i in range(8):
        avg += x[i]*i*100
    return avg

def sensor_sum(x):
    sum = 0
    for i in range(8):
        sum += x[i]
    return sum

def get_sensor(x):
    j = read_sensors()[x]
    return j

def calc_setpoint(x, y):
    avg = 0
    sum = 0
    for i in range(8):
        avg += (x[i]-y[i])*i*100
        sum += x[i]-y[i]
    return avg/sum

# calibrate the sensors reading to fit any given line
def calibrate():
    global amin
    global amax
    while(time.time() < timeout):
        for i in range(0, 8):
            amin[i] = min(amin[i], read_sensors()[i])
            amax[i] = max(amax[i], read_sensors()[i])
        digitalWrite(GPIO1_7, HIGH)
        digitalWrite(GPIO1_7, LOW)

# calculating the correspondent speed
def calc_speed(error):
    avg_speed = 150
    min = 100
    pos = get_position(read_sensors())
    speed = []
    if(error < -35):
        right = avg_speed - (2*error)
        left = avg_speed + (2*error)
    elif(error > 35):
        right = avg_speed - (2*error)
        left = avg_speed + (2*error)
    else:
        right = avg_speed - (error)
        left = avg_speed + (error)
    speed.append(right)
    speed.append(left)
    return speed

def loop():
    s = read_sensors()
    setpoint = calc_setpoint(amax, amin)
    position = get_position(s)
    err = calc_pid(s, setpoint)
    print err
    #print "divided by 100:"
    speeds = calc_speed(err)
    print speeds
    right_motor = speeds[0]
    left_motor = speeds[1]
    Serial2.write(right_motor)
    Serial5.write(left_motor)
    delay(10)

run(setup, loop)

PS: the speed sent over serial is limited to 255, and I'm multiplying it by a factor on the Arduino side.
For a sorting machine where the objects to be sorted have various sizes, colours, shapes, and patterns, is it more optimal (in terms of minimizing the overall process time and maximizing precision and accuracy) to use a sorting algorithm, or to use different dimensions in the physical design to do the sorting?
I’m in the early stages of designing a self-balancing robot as a way to refresh my knowledge on control theory, which has been gradually slipping away since graduating about a year and a half ago. I'm wondering if anyone has any input on how best to approach this problem. My plan is to use a 6 DOF IMU as an angle sensor, and to control the pitch by accelerating and decelerating the cart. I'm looking for robust response to disturbance, and to add in RC differential drive capabilities later on. This block diagram is a pretty close match for what I was planning to do, (source: Sebastian Nilsson's Blog): Would this be a good approach? Any recommended alternatives? Thanks.
Earlier, while writing by hand, I was wondering whether anyone could build a pen holder that could write for me? Maybe you could make some good business from them. Thanks for your time.
I have a transfer function G(s) = 1/(s(s+10)) with sampling and holding; it is easy to relocate the poles to 0.5 and -0.5 using a state feedback controller. But if a proportional controller is added to the transfer function, as in G(s) = K/(s(s+10)), how can the state feedback controller be designed?
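A sketch of one way to handle it (assuming the extra K is simply a constant gain in series with the plant, and picking a 0.1 s sample time for illustration): put G(s) in state-space form, discretize with a zero-order hold, place the discrete poles, and note that a series gain only scales the input matrix, so the feedback gain scales by 1/K.

    import numpy as np
    from scipy.signal import cont2discrete, place_poles

    A = np.array([[0.0, 1.0], [0.0, -10.0]])   # G(s) = 1/(s(s+10)) in phase-variable form
    B = np.array([[0.0], [1.0]])
    Ts = 0.1                                   # assumed sample time
    K = 2.0                                    # assumed proportional gain in series

    Ad, Bd, _, _, _ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), Ts)
    F = place_poles(Ad, Bd, [0.5, -0.5]).gain_matrix   # poles of (Ad - Bd*F) at 0.5, -0.5
    F_with_K = F / K                                   # plant input is K*u, so rescale the gain
    print(F, F_with_K)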
I just bought the Crazyflie 2.0 drone. This is my first drone, and it is my first time programming a drone. My first goal is simple: make the drone hover in place stably for 10 seconds. I found a simple example script that turns up the thrust and then lands the drone. I modified this to 1) extend the time to 10 seconds, 2) reverse thrust if the drone starts tipping, and 3) constantly display the roll, pitch, and yaw in the console. When I run this, the drone often flies randomly around the room and runs into things; it does not just lift up and hover stably. Why is this? How can I improve things so that it's much more stable? Do I need more sensors, or can I pull this off with just programming?

"""
Simple example that connects to the first Crazyflie found, hovers, and disconnects.
"""
import time
import sys
from threading import Thread
import logging

import cflib  # noqa
from cfclient.utils.logconfigreader import LogConfig  # noqa
from cflib.crazyflie import Crazyflie  # noqa

logging.basicConfig(level=logging.ERROR)


class HoverTest:
    """Example that connects to the first Crazyflie found, hovers, and disconnects."""

    def __init__(self, link_uri):
        """ Initialize and run the example with the specified link_uri """
        self._cf = Crazyflie()
        self._cf.connected.add_callback(self._connected)
        self._cf.disconnected.add_callback(self._disconnected)
        self._cf.connection_failed.add_callback(self._connection_failed)
        self._cf.connection_lost.add_callback(self._connection_lost)
        self._cf.open_link(link_uri)
        self._status = {}
        self._status['gyro.x'] = 0
        self._status['gyro.y'] = 0
        self._status['gyro.z'] = 0
        print("Connecting to %s" % link_uri)

    def _connected(self, link_uri):
        """ This callback is called from the Crazyflie API when a Crazyflie
        has been connected and the TOCs have been downloaded."""
        print("Connected to %s" % link_uri)

        # The definition of the logconfig can be made before connecting
        self._lg_gryo = LogConfig(name="gyro", period_in_ms=10)
        self._lg_gryo.add_variable("gyro.x", "float")
        self._lg_gryo.add_variable("gyro.y", "float")
        self._lg_gryo.add_variable("gyro.z", "float")

        # Adding the configuration cannot be done until a Crazyflie is
        # connected, since we need to check that the variables we
        # would like to log are in the TOC.
        try:
            self._cf.log.add_config(self._lg_gryo)
            # This callback will receive the data
            self._lg_gryo.data_received_cb.add_callback(self._gryo_log_data)
            # This callback will be called on errors
            self._lg_gryo.error_cb.add_callback(self._gryo_log_error)
            # Start the logging
            self._lg_gryo.start()
        except KeyError as e:
            print("Could not start log configuration,"
                  "{} not found in TOC".format(str(e)))
        except AttributeError:
            print("Could not add gyro log config, bad configuration.")

        # Start a separate thread to do the motor test.
        # Do not hijack the calling thread!
        Thread(target=self._hover).start()

    def _connection_failed(self, link_uri, msg):
        """Callback when initial connection fails (i.e. no Crazyflie
        at the specified address)"""
        print("Connection to %s failed: %s" % (link_uri, msg))

    def _connection_lost(self, link_uri, msg):
        """Callback when disconnected after a connection has been made (i.e.
        Crazyflie moves out of range)"""
        print("Connection to %s lost: %s" % (link_uri, msg))

    def _disconnected(self, link_uri):
        """Callback when the Crazyflie is disconnected (called in all cases)"""
        print("Disconnected from %s" % link_uri)

    def _gryo_log_error(self, logconf, msg):
        """Callback from the log API when an error occurs"""
        print("Error when logging %s: %s" % (logconf.name, msg))

    def _gryo_log_data(self, timestamp, data, logconf):
        """Callback from the log API when data arrives"""
        # print("[%d][%s]: %s" % (timestamp, logconf.name, data))
        self._status['gyro.x'] = data['gyro.x']
        self._status['gyro.y'] = data['gyro.y']
        self._status['gyro.z'] = data['gyro.z']

    def _hover(self):
        start_time = time.time()
        run_time = 7
        thrust_multiplier = 1
        thrust_step = 500
        thrust = 20000
        max_thrust = 39000
        roll = -1.00
        pitch = -2.00
        yaw = 0

        # Unlock startup thrust protection.
        self._cf.commander.send_setpoint(0, 0, 0, 0)

        # Turn on altitude hold.
        self._cf.param.set_value("flightmode.althold", "True")

        while thrust >= 20000:
            # Update the position.
            self._cf.commander.send_setpoint(roll, pitch, yaw, thrust)
            time.sleep(0.01)
            if thrust >= max_thrust and time.time() - start_time >= run_time:
                # Reverse thrust
                thrust_multiplier = -1
            if thrust <= max_thrust or thrust_multiplier == -1:
                thrust += thrust_step * thrust_multiplier
            # Reverse thrust if the drone tips over.
            if abs(self._status['gyro.x']) >= 75 or abs(self._status['gyro.y']) >= 75:
                print('Aborting')
                thrust_multiplier = -2
            print(self._status['gyro.x'], self._status['gyro.y'], self._status['gyro.z'])

        self._cf.commander.send_setpoint(0, 0, 0, 0)
        # Make sure that the last packet leaves before the link is closed
        # since the message queue is not flushed before closing
        time.sleep(0.1)
        self._cf.close_link()


if __name__ == '__main__':
    # Initialize the low-level drivers (don't list the debug drivers)
    cflib.crtp.init_drivers(enable_debug_driver=False)
    # Scan for Crazyflies and use the first one found
    print("Scanning interfaces for Crazyflies...")
    available = cflib.crtp.scan_interfaces()
    print("Crazyflies found:")
    for i in available:
        print(i[0])
    if len(available) > 0:
        le = HoverTest(available[0][0])
    else:
        print("No Crazyflies found, cannot run example")

As you can see, I tried simply adjusting the roll and pitch (to -1.00 and -2.00), but that did not help much. When I use the GUI and a joystick to control the drone and I adjust the roll and pitch trim values to -1.00 and -2.00, this definitely helps stabilize the drone. Any ideas are welcome. Thank you!
Would you happen to know some good books, tutorials or articles on how to detect objects and their poses using 2D laser scanners? My goal is to equip a mobile robot with a laser scanner for object detection in an industrial-like environment. I would like to detect legs, pallets and trolleys, and measure their poses as well. My first intuition is to extract lines from the 2D readings, but after that I'm somewhat lost as to the next steps.
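On the line-extraction intuition, here is a minimal split-and-merge sketch in Python/NumPy; the distance threshold and the assumption that the points are already ordered along the scan are illustrative choices, not values from any particular paper:

import numpy as np

def _dist_to_chord(p, a, b):
    """Perpendicular distance from 2D point p to the line through a and b."""
    num = abs((b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]))
    return num / np.hypot(b[0]-a[0], b[1]-a[1])

def split_and_merge(points, threshold=0.05):
    """Recursively split an ordered sequence of 2D scan points into line segments."""
    a, b = points[0], points[-1]
    if len(points) <= 2:
        return [(a, b)]
    dists = [_dist_to_chord(p, a, b) for p in points[1:-1]]
    k = int(np.argmax(dists)) + 1
    if dists[k-1] < threshold:
        return [(a, b)]                         # points fit one segment well enough
    # Otherwise split at the farthest point and recurse on both halves.
    return (split_and_merge(points[:k+1], threshold) +
            split_and_merge(points[k:], threshold))

# Tiny example: an L-shaped set of points splits into two segments.
pts = [np.array(p, dtype=float) for p in [(0, 0), (0.5, 0), (1, 0), (1, 0.5), (1, 1)]]
print(split_and_merge(pts, threshold=0.05))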
I am working on a MEMS-based project which requires me to calculate the orientation (Euler angles) of an object using only gyroscopes. The gyroscope bias is estimated at the beginning by keeping the sensor stationary for 2 seconds. Right now the gyroscopes give me an accuracy of about 2 degrees over 3-4 minutes of continuous movement. If the movement continues beyond 3-4 minutes, the gyroscope bias has drifted and the error increases rapidly. My question: if the bias drift changes randomly (as I have read) and the angles start drifting in one direction, why can't we track it for the first ten seconds and then keep subtracting the current angles from the initially calculated angles every ten seconds during movement? I tried this, but it does not work as expected. Can gyroscope bias be tracked in any way? Thanks in advance.
I'm using an MPU6050 (accelerometer + gyro) connected to an ATmega328P microcontroller, although that is probably not important in my case. In my project I need the angle around the X axis, and it's calculated like this: angle = -(atan2(acc.XAxis, sqrt(acc.YAxis*acc.YAxis + acc.ZAxis*acc.ZAxis))*180.0)/M_PI; where acc is the vector of accelerations along all axes. The problem is that it gives me a credible value only when the angle between the Z axis and the ground is a right angle (i.e. the sensor is not rotated around the Y axis). When I start to rotate it around the Y axis, the X-axis rotation value changes too. I know this is due to the Y-axis acceleration in my formula, but I have no idea how to get rid of it. How can I solve this problem?
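For comparison, here is a minimal sketch (plain Python, with an assumed axis convention where Z points up when the board is level) of the commonly used decoupled tilt formulas, where the rotation about X uses only the Y and Z accelerations, and the rotation about Y uses X against the Y/Z magnitude; whether these match your mounting and sign conventions is an assumption you would have to check:

import math

def tilt_from_accel(ax, ay, az):
    """Roll/pitch estimate in degrees from a static accelerometer reading."""
    roll  = math.degrees(math.atan2(ay, az))                          # rotation about X
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay*ay + az*az)))   # rotation about Y
    return roll, pitch

# Example reading in g (placeholder numbers)
print(tilt_from_accel(0.0, 0.5, 0.86))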
First of all, this might be a stupid or bad question, but I am quite new to this topic. I want to build an RC transmitter (with some joystick buttons, an Arduino and an HC-12 transceiver). I've searched a lot about this topic, but I still have a question that remains unanswered. Why is it necessary to use multiple channels to control, for example, the pitch, yaw and throttle of a quadcopter? Transmitters in shops have 4 or 6 channels, but I don't understand why these different channels are necessary. These transmitters send the information of each control over a different channel; why is this needed? Is it not possible to send all the commands over one channel (all at the same frequency)? For example, send p30 for a pitch of 30 degrees and y30 for a yaw of 30 degrees, and have the receiver interpret this? I guess the reason is to send all the commands at the same time? Thanks in advance.
Is gmapping on OpenSLAM.org still maintained, or has maintenance moved entirely to ROS ( https://github.com/ros-perception/slam_gmapping/tree/hydro-devel/gmapping )? When trying to compile gmapping without ROS, I noticed that it still has Qt3 as a dependency, which made me think no one uses or maintains OpenSLAM's gmapping anymore. Is this accurate? How do OpenSLAM's gmapping and ROS's gmapping compare in terms of performance and accuracy? Thanks!
I am making a robotic device with the following servo motors. How do I go about calculating the required battery capacity? The device is supposed to run for 10 minutes. I am a bit confused as to whether to use the no-load current, the load current or the stall current when adding up the amps. Thanks, Ro. Servos: HS-422, HS-485HB, HS-645MG, HS-755HB, HS-805BB, HS-85BB, HS-785HB
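As a rough illustration of the arithmetic only (not a sizing recommendation), here is a sketch that sums an assumed per-servo current draw and converts 10 minutes of runtime into a capacity; the current figures below are placeholders, not datasheet values for these Hitec servos:

# Rough battery-capacity estimate for a 10-minute run.
# Currents are placeholder assumptions -- substitute the datasheet load
# currents for your expected duty cycle (not the stall currents).
servo_current_a = {
    "HS-422":   0.8,
    "HS-485HB": 0.8,
    "HS-645MG": 1.0,
    "HS-755HB": 1.0,
    "HS-805BB": 1.5,
    "HS-85BB":  0.5,
    "HS-785HB": 1.0,
}

total_a = sum(servo_current_a.values())
runtime_h = 10 / 60.0
capacity_ah = total_a * runtime_h
print(f"total current ~{total_a:.1f} A -> ~{capacity_ah*1000:.0f} mAh "
      f"before any derating or safety margin")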
In my project there are two types of servos (with 6 V and 12 V supply requirements), and I need to power them from the same battery. How should I go about it? Use a 6 V battery with a step-up voltage regulator? How does that affect the battery power calculations? If a voltage regulator should be used, please recommend one. Thanks, Ro
I am building a device that will use two stepper motors and a servo. One of the main requirements is that the on-board battery cannot exceed 9.6 volts. The stepper motors have not been picked yet, so I have the option of choosing ones with a lower voltage rating. Which method would be best: use two 12 V stepper motors with a 9.6 V battery and simply run them at the lower voltage; or use two 12 V steppers with a DC-DC boost converter to raise the voltage to 12 V (I don't know how well this would work with regard to constant current and so on); or use two 6 V steppers and a ~6 V battery (I'm not fond of this idea, as 6 V steppers seem to have quite different specs from typical NEMA 17 12 V ones, but I would still use them if needed)?
I've read that recent Roomba 6xx models from iRobot have a serial connector, but I removed the cover on my Roomba 681 and haven't found one. From what I've seen, the 681 doesn't have the same cover as the other 6xx robots. Does this model have an accessible serial port?
I am working on a project and I need to know the exact model number of the CPU in the Lego Mindstorms RCX 1.0. It would be nice if you could also list the CPU's specs, because I am looking for its MIPS performance.
Is there a possibility of making robots which utilise raw materials to create more copies of themselves? To clarify the point: are there any projects or research efforts on robots that can reproduce?
I'm designing a joint which will have to move at about 60 RPM, and I have to come up with a resolution requirement for the encoder within this joint. I notice, however, that this is easier said than done. A 1 m beam will be connected to the joint, and I figured I would need to know the position of the tip of the beam with a resolution of approximately 1 mm. With some simple calculations I found that a 12-bit encoder would be sufficient for this. However, I was wondering whether this would also be sufficient to control the joint in a smooth manner. I found some information about how the resolution influences joint behaviour, but nothing about how to turn this into a resolution requirement. For example, I found: "When you tune the constants right, you should be able to run your arm at a constant speed. However, this is dependent on you having a fairly high-resolution encoder." I have no idea what a "fairly high-resolution encoder" is. I was wondering if any of you have experience with this, or know any methods to determine the required resolution.
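For what it's worth, a quick back-of-the-envelope check of the position-resolution side, assuming the 1 mm requirement applies at the tip of the 1 m beam and the encoder counts over a full revolution (if the joint only travels over a fraction of a revolution, fewer counts are needed, which may be where the 12-bit figure comes from). It says nothing about the smoothness of the velocity estimate, which is usually what drives the requirement higher:

import math

beam_length_m = 1.0
tip_resolution_m = 0.001

angle_res_rad = tip_resolution_m / beam_length_m   # ~1 mrad of joint rotation per count
counts_per_rev = 2 * math.pi / angle_res_rad       # ~6283 counts
bits = math.ceil(math.log2(counts_per_rev))
print(f"~{counts_per_rev:.0f} counts/rev, i.e. a {bits}-bit encoder for position alone")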
I have a quadcopter that uses image processing to detect shapes on the ground and determine its x, y position. It works OK, but my problem is that the illumination isn't always ideal. I would like to know if there's a way to fuse the image-processing data with another source of information to find the x, y position of the quadcopter. My teacher told me about Bluetooth Low Energy and beacons, but I don't think that will work.
I am working on a self-balancing bot with an Arduino Mega. I'm using 12 V, 200 RPM motors with built-in 840 PPR quadrature encoders. The torque ratings are: rated torque 2.4 kg-cm, stall torque 6 kg-cm. As of now I've implemented a simple PID controller (based on Brett Beauregard's PID library) to minimize only the tilt-angle error; I still haven't implemented PID on the encoders for the position error. I've tried a lot to tune the PID values, and the robot is quite stable when standing on its own. However, here's the issue: when the robot tilts by an angle greater than 7-8 degrees (or when it is pushed slightly) from its stable position, the motors run at maximum speed (PWM 255), yet the robot doesn't recover and just keeps running until it finally falls in the direction it was moving. Is this just a PID tuning problem? As I said, it's stable in the -5 to +5 degree range (0 degrees being upright) without oscillating much, but it cannot recover from errors greater than about 7 degrees. Or could this be an issue with the robot being too heavy and the motor torque being insufficient? The total weight of the robot is around 1 kg (including the motors). I'm using a 2200 mAh LiPo battery and have placed it at the highest level so as to decrease the angular acceleration. Would implementing PID feedback from the encoders as well solve this issue? Also, please suggest some links on how to implement a dual PID using an IMU and encoders to correct both tilt angle and position. I'm planning to use cascaded PID controllers, in which the encoder PID measures the position error and its output becomes the setpoint for the angle PID loop, which in turn controls the PWM of the motors. So if there's an offset in position, the angle setpoint changes to, say, -2 degrees (from the upright 0 degrees); the angle PID then makes the robot move to correct the angle offset, and the position error also gets corrected. Is this a good method? Any other suggestions or different approaches?
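On the cascaded structure described above, here is a minimal sketch of how the two loops are often arranged (plain Python with made-up gains and limits; the sensor-read and motor-write functions are placeholders, not your Arduino code):

import time

# Placeholder sensor/actuator stubs -- replace with IMU, encoder and motor-driver calls.
def read_tilt_deg():      return 0.0
def read_position_mm():   return 0.0
def set_motor_pwm(u):     pass

class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd, self.lim = kp, ki, kd, out_limit
        self.i = 0.0
        self.prev = 0.0
    def update(self, error, dt):
        self.i += error * dt
        d = (error - self.prev) / dt
        self.prev = error
        u = self.kp*error + self.ki*self.i + self.kd*d
        return max(-self.lim, min(self.lim, u))     # clamp the output

# Gains are illustrative assumptions only.
position_pid = PID(kp=0.02, ki=0.0,  kd=0.01, out_limit=5.0)    # outputs a tilt setpoint in degrees
angle_pid    = PID(kp=25.0, ki=200.0, kd=0.8,  out_limit=255.0)  # outputs PWM

dt = 0.01
while True:
    # Outer (slow) loop: position error -> small lean angle back toward home.
    tilt_setpoint = position_pid.update(0.0 - read_position_mm(), dt)
    # Inner (fast) loop: drive the measured tilt to that setpoint.
    pwm = angle_pid.update(tilt_setpoint - read_tilt_deg(), dt)
    set_motor_pwm(pwm)
    time.sleep(dt)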
I have been searching for days for dedicated software to easily draw kinematic diagrams of robots, like this one: I came across this software, but it is a bit hard to grasp. Is there any GUI-based software instead?
I'm working on a pick-and-place robotic arm with 4 DOF. I'm using MATLAB for the inverse kinematics, but I want to know how to decide the link lengths. Say I have four points in space that my robotic arm should reach: the uppermost point, the lowermost point, the extreme right point and the extreme left point. I am looking for a theory or approach to calculate the link lengths from these points. Thanks. Edit: added a picture of the robotic arm.
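As a starting point, here is a sketch of the kind of reachability check often used with a planar 2-link approximation; the link lengths and target points below are placeholders, and a real 4-DOF arm would also need the wrist/gripper offset, base height and joint limits taken into account:

import math

def reachable(l1, l2, x, y, z, base_height=0.0):
    """Planar 2-link check: a point is reachable if its distance from the
    shoulder lies between |l1 - l2| and l1 + l2."""
    r = math.hypot(math.hypot(x, y), z - base_height)
    return abs(l1 - l2) <= r <= (l1 + l2)

# Placeholder extreme points of the required workspace (metres).
targets = [(0.30, 0.0, 0.40), (0.30, 0.0, -0.10), (0.0, 0.35, 0.15), (0.0, -0.35, 0.15)]
l1, l2 = 0.25, 0.20   # candidate link lengths to test
print(all(reachable(l1, l2, *t) for t in targets))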
The purpose of the SLAM system is very specific: detecting cones in an image and triangulating their positions to create a map. The inputs would be camera data, odometry and LIDAR data. I have been going through the SLAM algorithms on OpenSLAM.org and other SLAM implementations. I would like to know whether there is a set of SLAM algorithms specific to the problem I have, and which are the most efficient and least time-consuming SLAM algorithms available. Any leads would be helpful.
I define my robotic arm with the following code:

% Link('d', 0.15005, 'a', 0.0203, 'alpha', -pi/2)
L(1) = Link([0 0.15 0 pi/2 0], 'standard');
L(2) = Link([0 0 0.15 0 0], 'standard');
L(3) = Link([0 0 0.15 0 0], 'standard');
L(4) = Link([0 0 0.15 0 0], 'standard');

% set limits for joints
L(1).qlim=[deg2rad(-45) deg2rad(45)];
L(2).qlim=[deg2rad(-45) deg2rad(45)];
L(3).qlim=[deg2rad(-60) deg2rad(60)];
L(4).qlim=[deg2rad(-50) deg2rad(50)];

% build the robot model
rob = SerialLink(L, 'name', 'rob');
qready = [0 0 0 0]; % initial position of robot

And I solve the inverse kinematics and plot the arm with this code:

Td = transl([0.05 0 -0.20]);
q = rob.ikine(Td, qready, [1 1 1 0 0 0]);
plot(rob, q, 'noname');

The result is 0, -139.0348635, 82.65184319, -1.286384217, i.e. four angles named theta1, theta2, theta3 and theta4 respectively. Now the point is: I set the joint limit for theta2 as -45 to 45 degrees, but the output is -139 degrees. The same happens with theta3. Why is that? Another thing: when I plot these angles, the links of the arm cross each other, as shown in the figure. I want to know what is wrong with the code, or what I am missing.
I have to solve an exercise for the Digital Control Systems course (using MATLAB) which states: "A ball is suspended inside a vertical tube by airflow 'u' and connected via a spring of stiffness K to the bottom of the tube. The ball is subjected to gravity and a viscous friction with coefficient 'B'. The force 'F' exerted on the ball by the airflow is proportional to the airflow 'u' via the constant G; airflow can only be positive (entering the tube)." I also have all the data needed to solve the problem numerically, but that is not important for the question. What I need to do is: "Write the system equations in state-space form with the airflow as input and the ball's vertical position 'z' as output. Then select a sampling time and design a digital control system that regulates the ball position by acting on the airflow, to the following specifications: zero steady-state error (in response to a step in the desired altitude); maximum overshoot 30%; 5% settling time less than 8 seconds." After this, we have to compute the transfer function of the plant and put it in unitary feedback with the compensator. I usually write the system dynamics equations first, then choose a suitable set of state variables and write the matrices A, B, C and D accordingly (I use the 'ss' function). The problem is that I don't know how to handle gravity in this case, because it enters the system dynamics as a constant term (-m*g). For example, choosing the state variables [z' z] I obtained the following matrices: A = [-B/m -K/m; 1 0]; B = [G/m; 0]; C = [0 1]; D = 0; I tried to design the compensator (a simple PID) without considering gravity and to add it later in the Simulink model used to test the system (after designing the compensator we have to build a Simulink model in which the discrete-time compensator is tested against the continuous-time plant transfer function), but of course the system output no longer meets the requirements. For the gravity transfer function I considered the mass as input and the position as output. Am I wrong in not considering gravity when designing the compensator? Or, if implemented correctly, should gravity not affect the system output?
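For what it's worth, a sketch of where the constant term sits, using the symbols and the [z' z] state from the question (z measured positive upward is an assumption):

$$\frac{d}{dt}\begin{bmatrix} \dot z \\ z \end{bmatrix} = \begin{bmatrix} -B/m & -K/m \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \dot z \\ z \end{bmatrix} + \begin{bmatrix} G/m \\ 0 \end{bmatrix}u + \begin{bmatrix} -g \\ 0 \end{bmatrix}$$

so gravity appears as an additive constant (equivalent to a constant disturbance at the plant input), not inside A, B, C or D; a constant disturbance of this kind is rejected at steady state by a controller with integral action, or equivalently the model can be rewritten around the equilibrium where the airflow force balances gravity and the spring deflection.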
If I am trying to model the dynamics of a double-pendulum (on a horizontal plane without the effects of gravity), in which the second angle is constrained to range between values of [-10 deg, 10 deg], how would I derive the equations of motion? I'm having trouble identifying whether I would use some method involving solving the Lagrangian with holonomic or non-holonomic constraints.
I'm trying to make a lightweight method of outdoor road following for a small ground robot. In nearly all road detection work that I've seen, they all assume that the robot is already on the road, which allows for techniques like finding the vanishing point or sampling pixels near the bottom of the camera frame. However in my application, the robot can be a few meters away from the road and needs to first find the road. As the robot computation runs on an Android phone, I'm hoping to avoid heavy computer vision techniques, but also be robust to variable outdoor lighting conditions. Obviously there is a trade-off, but I'm willing to sacrifice some accuracy for speed and ease of computation. Any ideas on how to achieve this?
I came across the standard representation of stereo cameras, where the cameras are side by side, the epipolar lines coincide with the image scan lines, and both cameras have the same focal length. Source: http://vision.deis.unibo.it/~smatt Now, do all stereo camera pairs need to have the same focal length? And if not, how does that change the stereo depth calculation? Thank you.
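For reference, the depth relation given by the standard rectified, equal-focal-length setup, with baseline $b$, focal length $f$ (in pixels) and disparity $d$, is

$$Z = \frac{f\,b}{d},$$

which is the formula whose assumptions the question is asking about.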
I had the idea of using a PID controller as the algorithm for the line-following mechanism of my robot. The problem (which is natural behavior for a PID controller) is that on line gaps (where there is no line, e.g. for 10 cm) the robot doesn't go straight forward but turns right. I thought about it a lot, and the best idea I could come up with for modifying the algorithm was to add two more sensors and treat "white area on all sensors" as a special case in which the robot should go straight forward. Now my question is: is there any better idea that I can use?
I'm using the Robotics Toolbox for MATLAB and I computed the inverse dynamics using the code below.

[qreach, err] = rob.ikcon(T1, qready);
[q, qd, qdd] = jtraj(qready, qreach, t);
% compute inverse dynamics using recursive Newton-Euler algorithm
tauf = rne(rob, q, qd, qdd);

When I plotted the angular velocity and torque, they show some negative values, as shown in the figure. I want to know why this is so, and what the physical significance of negative velocity and torque is in robotics. Thanks.
I work with an XL20 robotic tube sorter (let's call it X) which is connected to a computer (let's say Y, IP 10.216.1.222, running Windows 7) through serial (COM port 8 at 9600 baud, odd parity). If I open PuTTY on computer Y and send my commands to the XL20, it works just fine. Now I have another computer (Z, IP 10.216.1.223, running Windows 7) on the same network, and I want to connect and send instructions to the XL20 from this computer via PuTTY or some other means. Basically I'm trying to communicate remotely with the XL20, which is connected to a computer through a serial port. Can anyone point me to any useful guide, documentation, clue or suggestion on how to do this? Thanks.
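One pattern that might fit (a rough sketch, not tested against the XL20): run a small TCP-to-serial bridge on computer Y, so that computer Z can open a plain TCP connection (for example with PuTTY in "Raw" mode to 10.216.1.222:5000) and have everything forwarded to COM8. The TCP port number, buffer sizes and timeouts below are arbitrary assumptions:

# tcp_serial_bridge.py -- run on computer Y (10.216.1.222), which owns COM8.
# Forwards bytes between one TCP client (e.g. PuTTY on computer Z) and the XL20.
import socket
import serial  # pyserial

ser = serial.Serial('COM8', 9600, parity=serial.PARITY_ODD, timeout=0.05)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('0.0.0.0', 5000))      # 5000 is an arbitrary choice of TCP port
srv.listen(1)
print("waiting for a client on TCP port 5000 ...")

conn, addr = srv.accept()
conn.settimeout(0.05)
print("client connected from", addr)

while True:
    # network -> serial
    try:
        data = conn.recv(1024)
        if not data:
            break                 # client closed the connection
        ser.write(data)
    except socket.timeout:
        pass
    # serial -> network
    pending = ser.read(1024)
    if pending:
        conn.sendall(pending)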
Is there a way to convert the EM field that surrounds high-voltage power lines into directly usable electricity for air or ground drones, for continuous flight or travel? Added: could the EM field be used as a guide as well? https://electronics.stackexchange.com/questions/280558/can-high-voltage-power-lines-provide-a-super-highway-for-drones
I am working on a SCARA robot project and I have one big confusion. I am using the simple trigonometric way (inverse-tangent, triangle-based formulas) to calculate the inverse kinematics, but a lot of people have suggested that I use the DH convention to derive the inverse kinematics instead. Which is the better and faster approach for a SCARA robot?
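For context, here is a sketch of the closed-form (law-of-cosines) solution that the "simple trigonometric way" usually refers to for the two planar joints of a SCARA arm; the link lengths and the elbow-up choice below are placeholder assumptions:

import math

def scara_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form IK for the two planar revolute joints of a SCARA arm."""
    c2 = (x*x + y*y - l1*l1 - l2*l2) / (2*l1*l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    s2 = math.sqrt(1 - c2*c2) * (1 if elbow_up else -1)
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2*s2, l1 + l2*c2)
    return theta1, theta2

# Placeholder target and link lengths (metres)
print(scara_ik(0.20, 0.10, 0.15, 0.12))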
Can a PID with a variable setpoint work? I have already done tests, but they are not conclusive, and I am not certain of this algorithm.

Background: this project involves improving the tracking speed of a telescope. An Arduino board (UNO32) that I program gives the instructions in "pulse and direction" mode to a driver. This driver controls a two-phase stepper motor (200 steps) in micro-stepping mode (128 micro-steps per step). The stepper motor turns a worm screw against a wheel that has 360 teeth, and this wheel finally rotates the axis of the telescope to make one revolution per day. To reduce the speed error caused by imprecision of the wheel and screw, a high-precision encoder (1,800,000 PPR) is placed on the telescope's axis of rotation. My goal is to run the stepper motor in closed loop with the encoder feedback. I need your advice on the PID algorithm. Here is my idea, but I am not sure it is valid:

1) SP (setpoint): the desired encoder position. The encoder is read in 4x mode, so at time t this gives SP(t) = t * (4 * 1,800,000 PPR) / (86164 s/day). I can convert into micro-steps: (360 motor turns)(200 steps)(128 micro-steps) = 4 * 1,800,000 encoder pulses.
2) PV (process variable): the measured encoder position.
3) E: the error, E = SP - PV.
4) Command: I act on the frequency of the pulses to vary the motor speed according to this error.
We're students trying to make a clawbot for a Science Seminar class. However, for some reason, whenever we try to move the arm or the claw in a certain way, it will lock up and only move in that direction. Code attached below. Please help.

#pragma config(Motor, port1, frWheel, tmotornormal, openLoop, reversed) //Setting up the motors
#pragma config(Motor, port5, brWheel, tmotornormal, openLoop, reversed)
#pragma config(Motor, port3, flWheel, tmotornormal, openLoop)
#pragma config(Motor, port4, blWheel, tmotornormal, openLoop)
#pragma config(Motor, port10, Arm, tmotornormal, openLoop)
#pragma config(Motor, port6, Claw, tmotornormal, openLoop)

task main()
{
    int a = 0; //Arm integer
    int c = 0; //Claw integer

    while(true)
    {
        motor[frWheel] = vexRT(Ch2); //Wheels
        motor[brWheel] = vexRT(Ch2);
        motor[flWheel] = vexRT(Ch3);
        motor[blWheel] = vexRT(Ch3);

        if(a >= -30 && a <= 30)
        {
            if(vexRT[Btn8D] == 1) //If arm down button pressed...
            {
                motor[Arm] = --a; //then arm will go down.
            }
            else if(vexRT[Btn8U] == 1)
            {
                motor[Arm] = ++a;
            }
            else(vexRT[Btn8U] == 0 && vexRT[Btn8D] == 0);
            {
                motor[Arm] = a;
            }
        }
        else
        {
        }

        if(c <= 30 && c >= -30)
        {
            if(vexRT[Btn7U] == 1) //If claw up button pressed...
            {
                motor[Claw] = ++c; //Claw will open.
            }
            else if(vexRT[Btn7D] == 1)
            {
                motor[Claw] = --c;
            }
            else(vexRT[Btn7D] == 0 && vexRT[Btn7U] == 0);
            {
                motor[Claw] = c;
            }
        }
        else
        {
        }
    }
}
I want to dispense water/cut vegetables from a glass/bowl (240 ml) by turning it upside down. The screencast shows the idea. I can directly mount the clamp on a servo motor, but I think that will put a lot of downward force on the shaft. What would be a good mechanical arrangement to do this? Thanks! Ref: Servo MG996R with metal gears. Cup:
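For a rough feel for the load about the servo shaft, here is a sketch of the torque arithmetic; the mass and lever-arm numbers are assumptions, not measurements of your cup and clamp:

# Rough holding-torque estimate for flipping a filled cup with a servo.
mass_kg  = 0.24 + 0.10   # ~240 ml of water plus an assumed cup+clamp mass
lever_m  = 0.04          # assumed horizontal offset of the centre of mass from the shaft
g        = 9.81

torque_nm   = mass_kg * g * lever_m
torque_kgcm = torque_nm / 0.0981          # 1 kg-cm is about 0.0981 N-m
print(f"~{torque_nm:.2f} N-m (~{torque_kgcm:.1f} kg-cm) about the servo shaft")
# Compare against the MG996R stall torque on its datasheet, with margin.
# Note this only covers torque: the radial/bending load on the shaft is a
# separate concern that a supported hinge or bearing on the far side would relieve.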
It seems like humanoid robots are the hottest field of research in robotics. Government agencies are giving huge sums of money to private firms and labs to develop them (for example, Boston Dynamics has developed some really amazing humanoid robots, and some of them look scary!). My question is: the human body is highly inefficient. We can't run for very long, we have to learn for several months before we start walking, we have only two hands, and we are slow. Then why spend so much money and effort emulating such an inefficient thing? Maybe it is time we took a step away from being "inspired" by nature and built a man-made, highly efficient body. An example: balancing a robot on two legs is very difficult, and a robot on two legs can only run so fast. Why not use some other method of locomotion that is easier and more effective? Why not come up with some optimal shape and then model our robots on it?
I have to determine a speed and torque suitable for my combat robot. I've done some calculations and I need to know whether they're right or not (because they don't seem to be). Suppose I have a 10 kg robot and I want to push another 10 kg robot to the arena walls. First assume the opposing robot is not resisting my push. Suppose I want to move it with an acceleration of 10 cm/s² (also let me know if this assumed acceleration is suitable for a robowar or not). Then the force required will be F = (total mass to be moved) * (acceleration) = (10+10)*(10e-2) = 2 N. Assume the opposing robot is also pushing against me with the same force; in that case I will need another 2 N to oppose its push. Now taking frictional retardation into account, suppose I need another 2 N, so the total would be 6 N. Assuming a velocity of 10 cm/s, the power required will be 6 * velocity = 6*10e-2 W = 600 mW. For a 12 V motor, even at 20% efficiency, that would draw a current of only 250 mA. Also, output power = speed × torque. Now, I've heard that robowars usually require motors with a high full-load current, otherwise they'll be damaged. So it looks like I am wrong somewhere in my calculations. If so, let me know where. Am I wrong where I assumed that the opposing robot will exert just 2 N, since generally they'd be fitted with higher-torque motors? But is that the only thing that goes wrong in the above calculations? If the opposing bot did exert just 2 N on my bot, are the calculations right? PS: By "side-shaft geared motors", is it meant that the motor is internally geared with a horizontal output shaft, or does "side-shaft" mean something more?
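Just to restate the arithmetic above as a sketch (the numbers are the ones assumed in the question, plus a placeholder efficiency; note this reflects steady pushing only, and transient or stall conditions draw far more current, which may be what the "high full-load current" advice refers to):

# Rough pushing-power estimate for a 10 kg robot pushing a 10 kg opponent.
mass_total = 10 + 10        # kg, both robots move together
accel      = 0.10           # m/s^2 (10 cm/s^2, as assumed in the question)
v          = 0.10           # m/s
opposing_push_n = 2.0       # allowance for the opponent pushing back (assumed)
friction_n      = 2.0       # allowance for rolling/frictional losses (assumed)

f_accel = mass_total * accel                        # 2 N
f_total = f_accel + opposing_push_n + friction_n    # 6 N
p_mech  = f_total * v                               # 0.6 W at the wheels
eff     = 0.20                                      # placeholder motor+drivetrain efficiency
p_elec  = p_mech / eff
print(f"F = {f_total:.1f} N, mechanical power = {p_mech:.2f} W, "
      f"electrical ~ {p_elec:.2f} W -> ~{p_elec/12*1000:.0f} mA at 12 V")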
I am currently working on a prototype device to automate the task of ironing clothes. The mechanical design involves 4 manipulators and a separate manipulator arm with a hot pad (iron) to manipulate a given garment and perform the ironing. I understand it's a very complicated task, and I have come up with the following steps to go about solving it. I would appreciate any input on the plausibility of this idea (mechanical and software challenges). For my first EB (engineering build) I'm focusing solely on solving this problem for a T-shirt, and will probably extend the solution to other garment types with modifications. 1. From a pile of clothes, choose a certain item for ironing using background subtraction and colour-based image segmentation (I have already programmed this part and it's working!). 2. Classify or recognise the chosen item as one of a set of known classes using a CNN (convolutional neural network). I think this part is quite achievable given enough training data for my network. 3. This is where the problem begins: after classifying the given garment, find the coordinates of points of interest so the garment can be manipulated with a certain predetermined mechanical movement. What I mean by points of interest: for example, in the case of a T-shirt (see image below), the algorithm should localise 1) the midpoint of the T-shirt hem (see image for details), 2) the corner points of the T-shirt hem (see image for details), 3) the start of the yoke and 4) the end of the yoke. I believe (through mental visualisation) that these are the key points for placing the T-shirt flat on the surface (a known STATE). 4. When the T-shirt is flat on the surface, stretch it just enough to make it wrinkle-free to facilitate ironing, through some kind of force-sensing mechanism (I haven't really thought about this part, but I presume it's not a huge challenge). 5. When the garment is in a known STATE, proceed to ironing using path-planning algorithms. I do understand that this is a very complex problem; I just want some input to see if I am on the right path. I am really not sure if step 4 is mechanically achievable, since non-rigid body manipulation is a hard problem for robots. I am quite confident about the software aspect of the project: I believe a large enough dataset and a well-tested algorithm will let me identify the aforementioned points of interest on a test garment. But I would like to hear about the mechanical challenges and the plausibility of this project from a design engineer's standpoint. I hope my query is clear. Any input is appreciated. Thank you so much and cheers :)
I want to design a linear controller for a quadcopter, which is a 6-DOF nonlinear system. I have the nonlinear equations, but in order to design a linear controller I need a linear state-space model of the vehicle. I skimmed a bunch of articles and theses without any result; most of them make approximations for separate parts of the model in order to linearize. I need the state space in the form $$\dot{x}=Ax(t)+Bu(t)$$ but I couldn't find the $A$ and $B$ matrices. I made the small-angle and hover-condition approximations so that the equations become simpler, yet they are still nonlinear. The non-simplified equations are as follows:
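In case a reference point helps: the structure that the small-angle, near-hover linearization usually produces is sketched below. The signs depend on the Euler-angle and axis conventions, and treating $\Delta u_1$ as the thrust perturbation about hover with $u_2,u_3,u_4$ as the three body moments is an assumption about how the inputs are defined:

$$\ddot{x} \approx g\,\theta,\qquad \ddot{y} \approx -g\,\phi,\qquad \ddot{z} \approx \frac{\Delta u_1}{m},\qquad \ddot{\phi} \approx \frac{u_2}{I_x},\qquad \ddot{\theta} \approx \frac{u_3}{I_y},\qquad \ddot{\psi} \approx \frac{u_4}{I_z}.$$

Stacking the state as $[x,y,z,\phi,\theta,\psi,\dot{x},\dot{y},\dot{z},\dot{\phi},\dot{\theta},\dot{\psi}]^T$ then gives a sparse $A$ (an identity block coupling positions to velocities plus the two $\pm g$ entries coupling $\ddot{x},\ddot{y}$ to $\theta,\phi$) and a $B$ with $1/m$ and the inverse inertias on the rows of $\ddot{z},\ddot{\phi},\ddot{\theta},\ddot{\psi}$.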
I am new to robotics. I am playing with a 4-axis robotic arm called uArm and I was wondering how to draw a circle with it. By this I mean the math of describing a circle in task space for the robot joints to achieve, and the implementation of the algorithm in code. What would be the best approach? Any sources to research? I tried searching around the web but did not find anything useful. Thank you in advance.
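As a sketch of the usual approach: sample points along the circle in task space and solve the inverse kinematics for each waypoint. The centre, radius, plane and the IK/move calls below are placeholders, not the uArm API:

import math

def circle_waypoints(cx, cy, cz, r, n=72):
    """Points on a circle of radius r in the XY plane at height cz."""
    return [(cx + r*math.cos(2*math.pi*k/n),
             cy + r*math.sin(2*math.pi*k/n),
             cz) for k in range(n)]

# Placeholder usage: solve IK for each waypoint, then command the joints.
for x, y, z in circle_waypoints(0.15, 0.0, 0.05, r=0.03):
    # joint_angles = my_inverse_kinematics(x, y, z)   # your IK routine here
    # move_joints(joint_angles)                       # your motion command here
    pass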
I am building a (fork of the) Lawnbot-400 robot found on Instructables.com and in the book Arduino Robotics. The author, while brilliant, tends to gloss over and omit details, one of them being how he actually connected the two recommended 12 V DC batteries to anything (I realize that for a more experienced builder this is probably obvious). Could you please provide guidance on the types of cables to use for this? It seems I have seen some which use little clamps on one end and/or which have a plastic connector in the middle of the cable that can be disconnected while the other two ends remain connected to the battery and motor controller. Do I need those features? The batteries I am using are Mighty Max 12 V 6 Ah high-rate gel series.
I want to develop an application for the NAO robot using ROS (I have already started to use ROS with NAO), but I don't know how to choose a scenario for my robot based on the available ROS packages (SLAM, object recognition, ...). My question is: are there any ROS packages for NAO applications that I can start with? (The TurtleBot, for example, has the navigation stack with SLAM in ROS.) Can anyone help me, please?
In the environment I have two robots and a couple of fixed obstacles. To detect the obstacles I am using ultrasonic sensors. The robots need to detect each other, tell from which side the other robot is approaching (front, left, right or back), and do this while in motion. For this purpose I cannot use PIR sensors, because the robots are constantly moving. Also, I need to differentiate between moving robots and stationary obstacles, so ultrasonic sensors alone are not helpful either. So I came up with the idea of marking the robots with some property unique in the environment, so that when we detect an object with that property, we know it is a robot and not another obstacle. One idea might be to put lasers on one robot and four laser sensors on the other, one per side, so we can say precisely from which side the other robot came. Another option might be to use IR transmitters on one robot and four IR receivers on the other. What do you suggest? Is there any other type of sensor that might help?
I want to calculate 'r'. I know , and the position (X, Z), but I don't know how to apply it to this robot.