I am using an Arduino with FreeRTOS and a computer running a Xenomai-patched Linux kernel. I am using the Python library pyserial to communicate with the Arduino. Right now I am driving simple servo motors. I want to verify whether the communication between the Arduino and my main computer is real time or not. How can I check this properly? I want hard real-time communication between the Arduino and the computer for a balancing robot.
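One practical starting point (it cannot prove hard real-time behaviour, but it will show you the latency distribution and, more importantly, the worst-case jitter) is to timestamp a round trip from the PC side. The sketch below is a minimal example and assumes the Arduino firmware simply echoes back every byte it receives; the port name and baud rate are assumptions to adapt.

```python
import time
import serial

# Measure round-trip latency over the serial link. Assumes echo firmware on the
# Arduino; port name and baud rate are placeholders.
ser = serial.Serial('/dev/ttyACM0', 115200, timeout=1)
time.sleep(2)                              # let the Arduino reset after the port opens

samples = []
for _ in range(1000):
    t0 = time.perf_counter()
    ser.write(b'x')                        # one-byte ping
    ser.read(1)                            # wait for the echoed byte
    samples.append(time.perf_counter() - t0)
ser.close()

samples.sort()
print("min    %.3f ms" % (samples[0] * 1e3))
print("median %.3f ms" % (samples[len(samples) // 2] * 1e3))
print("max    %.3f ms" % (samples[-1] * 1e3))   # the worst case is what hard real time is about
```

For hard real-time claims you care about the maximum, not the average; USB-CDC serial (as on most Arduinos) adds buffering and millisecond-level jitter that no amount of Xenomai patching on the PC side removes.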
I create a simple Simulink model using a MATLAB Function block to calculate the angles of a 4 dof robotic arm using the Robotics Toolbox for MATLAB, where the input is the x, y and z values. I used inverse kinematics to calculate the angles. The model and error are shown in the images below: My code is given below (which is put in the MATLAB Function block): function [theta1,theta2,theta3,theta4]=invkin(px,py,pz) % code to find inverse kinematics solution for 4 dof robotic arm preach = [px py pz]; % reach point of end-effector theta1 = 0; theta2 = 0; theta3 = 0; theta4 = 0; % create links using D-H parameters L(1) = Link([ 0 0 0 pi/2 0], 'standard'); L(2) = Link([ 0 .15005 .4318 0 0], 'standard'); L(3) = Link([0 .0203 0 -pi/2 0], 'standard'); L(4) = Link([0 .4318 0 pi/2 0], 'standard'); %define link mass L(1).m = 4.43; L(2).m = 10.2; L(3).m = 4.8; L(4).m = 1.18; %define center of gravity L(1).r = [ 0 0 -0.08]; L(2).r = [ -0.216 0 0.026]; L(3).r = [ 0 0 0.216]; L(4).r = [ 0 0.02 0]; %define link inertial as a 6-element vector %interpreted in the order of [Ixx Iyy Izz Ixy Iyz Ixz] L(1).I = [ 0.195 0.195 0.026 0 0 0]; L(2).I = [ 0.588 1.886 1.470 0 0 0]; L(3).I = [ 0.324 0.324 0.017 0 0 0]; L(4).I = [ 3.83e-3 2.5e-3 3.83e-3 0 0 0]; % set limits for joints L(1).qlim=[deg2rad(-160) deg2rad(160)]; L(2).qlim=[deg2rad(-125) deg2rad(125)]; L(3).qlim=[deg2rad(-270) deg2rad(90)]; L(4).qlim=[deg2rad(-170) deg2rad(110)]; %build the robot model rob = SerialLink(L, 'name','Puma56'); qready = [0 -pi/4 pi/4 0]; % initial position of robot T1= transl(preach); % convert reach point to a 4x4 homogeneous matrix [qreach,err,exitflag] = rob.ikcon(T1, qready); % find inverse kinematics with error %rob.fkine(qreach); theta1 = qreach(1,1); theta2 = qreach(1,2); theta3 = qreach(1,3); theta4 = qreach(1,4); end How do I solve these errors? Thanks.
How can I make pyserial read and write at the same time, in the same program? Right now I only know how to either write or read, but not both simultaneously. So how can I do it? I tried this simple code: int incomingByte = 0; void setup() { Serial.begin(9600); // opens serial port, sets data rate to 9600 bps } void loop() { incomingByte = Serial.read(); // read the incoming byte: Serial.print(" I received:"); Serial.println(incomingByte); } and the Python program is: import serial ser = serial.Serial('/dev/ttyACM0',9600) ser.write("333") ser.close() ser.open() data = ser.readline() print data ser.close()
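A common pattern (a minimal sketch, assuming the Arduino sketch above is running and using an assumed port name) is to keep one serial object open for the whole session and read in a background thread while the main thread writes:

```python
import threading
import time
import serial

ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
time.sleep(2)                      # opening the port resets most Arduinos

def reader():
    while True:
        line = ser.readline()      # blocks until a newline or the timeout
        if line:
            print("received:", line.decode(errors='replace').strip())

threading.Thread(target=reader, daemon=True).start()

while True:
    ser.write(b"333\n")            # keep writing while the reader thread keeps reading
    time.sleep(0.5)
```

Note that closing and immediately reopening the port, as in the snippet above, resets the Arduino and discards whatever it had already sent.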
I am planning to build an omnidirectional holonomic robot, and while checking what I should use for the hardware, I saw many people using a Raspberry Pi to compute most stuff, which in turn calls an Arduino to control the motors. But since the Arduino will have to drive some H-bridges, and the board computer is probably a lot more powerful, it seems to me that those motor drivers could be controlled by the Raspberry-ish board directly. What am I missing? Why do so many people use both at the same time?
Suppose I have a robot arm with 3 revolute joints (3 DOF) as shown below, and I'm given: the ranges of each joint q1, q2, q3; the lengths of each link L1, L2, L3; and the load on the end effector [Fx Fy M]. How can I calculate the max torque at each joint of the robot? The image below describes the robot configuration. If I missed any details, mention it in a comment and I will add it. Thanks in advance.
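One standard way to frame this, as a sketch that ignores the links' own weight (add gravity terms if the links are heavy): treat the load as a planar wrench $F = [F_x\ F_y\ M]^T$, map it to joint torques through the transpose of the arm's Jacobian, and then take the worst case over the allowed joint ranges:

$$\tau = J^T(q)\,F, \qquad \tau_i^{\max} = \max_{q_1, q_2, q_3 \in \text{ranges}} \bigl| \bigl(J^T(q)\,F\bigr)_i \bigr|$$

For a planar 3R arm, $J(q)$ is the familiar $3\times 3$ Jacobian built from $L_1, L_2, L_3$ and the joint angles, so the maximization can be done numerically by sweeping the joint ranges on a grid.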
I came to know that MATLAB released the Robotics System Toolbox in the 2016 version, but I'm using MATLAB 2014b. At that time I installed the Peter Corke Robotics Toolbox for MATLAB and started working with it. I developed a few GUIs and wrote other code too using the same Peter Corke Robotics Toolbox. But now I want to install the new official version of the robotics toolbox. My doubt is: if I install the new version, what happens to the older one? Will I be able to run my old code (which used the old toolbox) in MATLAB after installing the new official version, or may it displace the older one? Will MATLAB show errors in my code or GUIs after installing the new version? I want to work with both toolboxes. Thanks.
I am working on an estimation application for multiple robots, each of which uses measurements from various sources to calculate position and orientation data. For now, I am looking at about three sources: Orientation from an IMU Both position and orientation from a camera Position and orientation from relative measurements w.r.t another robot. Naturally, the most widely used technique that combines a model with measurements is something like an EKF. But in my case, the orientation data (IMU) comes from an autopilot, which already has an EKF onboard, and hence provides covariance estimates as well. The vision based estimation (both individual and relative) are computed through a few iterations of bundle adjustment, and the bundle adjustment solver also provides covariance of its resultant estimate. Finally, I am not really interested in utilizing a complex, non linear model for the robot, but mostly in just fusing the measurements to provide one final pose estimate. I have read about the concept of 'covariance intersection' in the context of Kalman filtering, which has been implemented in cooperative pose estimation using multiple data sources. I was wondering if I could get some advice as to whether Kalman filters etc. would still be applicable in my case, and if so, how to adapt them?
I am doing a project to build an autonomous lawn mower and I need to decide on several types of sensors to complete the features. One of my features is that the vehicle needs to stop running when someone lifts it into the air, so I decided to use a 9 DOF IMU for this feature. As I know, a 9 DOF IMU already has a 3-axis magnetometer that can read the yaw angle. So I am just confused: do I still need to add another gyro sensor to make sure that my lawn mower does a 180 deg turn? Thanks for any words on this.
I came across the paper (link given below) which discusses a bounded-deviation joint path for straight line motion. Planning and Execution of Straight Line Manipulator Trajectories (RH Taylor) https://pdfs.semanticscholar.org/e01a/58608f4e68f31c7b9e7cdbddceae645727bb.pdf In this method, the assumption is that the maximum deviation happens at or near the midpoint between the start and end point. 1) Is this assumption true in all cases? 2) Even if the assumption may not be true, will the resulting trajectory be a straight line if this method is used for trajectory planning? I hope someone can shed some light on this. Thank you.
I am wanting to build a robotic hand, a big one at that. Similar to this one but at a much larger scale: http://www.instructables.com/id/DIY-Robotic-Hand-Controlled-by-a-Glove-and-Arduino/ Does it really matter how large the servo is, just as long as it has a power source? Or are there other things I need to know?
I'm working on an assignment where I need to derive the IK for the 5 DOF KUKA youBot robotic arm manipulator (official website). I'm using inverse kinematics decoupling and following a geometrical approach using Simulink and MATLAB. The answer of the IK problem is 5 angles, but when I apply those angles to the forward kinematics I receive different coordinates. Is that normal, or am I supposed to get the exact coordinates? I'm using the following MATLAB code: function [angles, gripperOut, solution] = IK(pos,toolangle,gripperIn) l=[0.147 0.155 0.135 0.081 0.137]; x = pos(1); y= pos(2); z = pos(3); surfangle = toolangle(2); theta1= atan2(y,x); % atan2(y/x) for theta1 s=z-l(2); % S = Zc - L2 r=sqrt(x.^2+y.^2)-l(1); D = (r^2 + s^2 -l(3)^2 -l(4)^2)/2*l(3)*l(4); %D = (pow2(r)+pow2(s) -pow2(l(3)) -pow2(l(4)))/2*l(3)*l(4); D2 = sqrt(1-D.^2); if D2<0 theta3 = 0; theta2 = 0; solution = 0; else theta3=atan2(D2,D); theta2=atan2(r,s)-atan2(l(4)*sin(theta3),l(3)+l(4)*cos(theta3)); solution = 1; end % R35 = subs(R35,[theta(1) theta(2) theta(3)],[theta1 theta2 theta3]); % theta4=atan2(R35(1,3),R35(3,3)); % theta5=atan2(R35(2,1),R35(2,2)); theta4 = surfangle-theta2-theta3; theta4 = atan2(sin(theta4),cos(theta4)); theta5 = toolangle(1); angles=[theta1 theta2 theta3 theta4 theta5]; if(solution == 0) angles = [0 0 0 0 0]; end gripperOut = gripperIn; end As shown, I'm kind of fixing the last two angles for the tool so I can avoid using the subs command, which is not supported for code generation. Any help would be much appreciated.
For a university project I have to use computer vision to detect small drones within 40 feet. I know there exists a Pixycam for this purpose, but I was not happy with it when I used it for CV. I have a normal 16-megapixel digital camera (pictures & video) which I don't use anymore. Before I dissect the camera, I was wondering if it is practically possible to use this digital camera for computer vision - detecting small flying drones. Any thoughts on this - using a digital camera for CV? Thanks
I am trying to make a robotic arm that mimics the movement of the user's arm. The way I need it to work is to detect nerve signals and send them to an Arduino. The Arduino would then have a servo motor mimic the movement of the user's arm, with the Arduino telling it how quickly to rotate and to what point to rotate, based on the user's input. Any ideas on how this can be done?
Position control versus torque control: which method is commonly used in industrial manipulators?
The output of most hobby servos is a spline. To mount a custom gear on the spline so that the servo turns the gear, how is this best done? I see that some people just screw the gear into the threaded hole in the spline, but wouldn't this be inadequate? It'd just be the bottom of the screw touching the gear.
Suppose we have an $n$-DOF robot manipulator and let $q \in \mathbf{R}^n$ denote a robot joint configuration. Then a singular configuration $q'$ is a configuration at which the Jacobian $J(q')$ does not have maximum rank. Let $S$ be the set of all singular configurations of a given robot. Is there any (general) result regarding the characterization of the set $S$? Any work discussing or answering questions such as "Is $S$ a manifold?", "Does $S$ contain only isolated points?", etc.? So far I have found quite a few works talking about classification of singular configurations, but I think they still do not really answer my questions. Can anyone point me to some related work?
I am trying to implement an RL algorithm for an adaptive PID in a robot system. My doubt concerns the creation of the possible states in the problem. I mean, I understand the problem quite well when the possible states are door numbers, but I don't know what to do with a PID. Do I have to create a finite number of possible PID values over which the algorithm learns?
I have developed a 7DOF arm for a humanoid robot (see pic below for more details). I have implemented the IK using a closed-form solution and of course I come up with eight solutions - each one actually positions the end effector at the right position and orientation (I implemented the method described in the paper "Kinematics and Inverse Kinematics for the Humanoid Robot HUBO2+"). The question now is how to choose the right one, knowing that the end effector will follow a trajectory. The idea is to compute iteratively the $[N, S, A, P]$ matrix that will be provided to the IK module. One solution I am thinking of is to choose the joint solution in the decision table that minimizes the given metric: $$\sum_i (\theta_i^{current} - \theta_i^{next})^2$$ where $\theta_i^{current}$ represents the current value of the $i^{th}$ joint and $\theta_i^{next}$ is the computed value in the decision table. Do you think this is the right approach, or are there other methods out there to find the best joint solution from the decision table?
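For reference, a minimal sketch of that selection rule (illustrative names, not from any particular library): evaluate the metric for all eight candidates and keep the closest one, wrapping angle differences so a jump across $\pm\pi$ is not penalized incorrectly.

```python
import numpy as np

def pick_solution(candidates, q_current, q_min=None, q_max=None):
    """candidates: (8, 7) array of IK solutions; q_current: current joint vector."""
    candidates = np.atleast_2d(candidates)
    # Optionally discard solutions that violate joint limits before scoring.
    if q_min is not None and q_max is not None:
        ok = np.all((candidates >= q_min) & (candidates <= q_max), axis=1)
        candidates = candidates[ok]
    # Wrap each difference to [-pi, pi] so 179 deg -> -179 deg is a small step.
    diff = np.arctan2(np.sin(candidates - q_current), np.cos(candidates - q_current))
    cost = np.sum(diff**2, axis=1)          # the sum-of-squares metric from the question
    return candidates[np.argmin(cost)]
```

This greedy nearest-configuration rule works well along a continuous trajectory; the main caveat is near singularities or joint limits, where the chosen branch may have to switch and it can be worth adding joint-limit margins or a per-joint weighting to the metric.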
Before I buy any Arduino products, I want to make sure that I can use this website: https://create.arduino.cc to create the code for the Arduino. The thing is, I do not know if I can use this to export the code to the Arduino. Can someone please tell me if I can use it to program an actual Arduino instead of simulating one? By the way, I do not have an actual computer, just a Chromebook ThinkPad (Lenovo EDU series), so I cannot use Windows or Apple software; it must be usable in the Chrome web browser. Thanks.
Hey, looking for recommendations. Right now I am using a Sabertooth 2x25 motor controller for my drive train, receiving signals from a Raspberry Pi. From what I am seeing online, there are a lot of mixed reviews regarding this sort of setup. Everything seems to be running fine on my end, but I am curious what the optimal way is to interface with and program a large DC motor. The reason I decided to use a Pi was that I needed ROS to be set up on my robot to perform autonomous tasks. I understand that the Pi cannot generate precise PWM output signals while it is running computations at the same time. Is there a board out there that could possibly handle both? Edit: Around 6A continuous load for the motors, 12 V.
I am trying to implement an indirect/error state kalman filter following Quaternion kinematics for the error-state Kalman filter. However, instead of modelling the orientation and error in orientation I have chosen to utilize Madwick to estimate the orientation. The problem is that when I create the transition matrix from the first paper it expects the orientation error which it multiplies with the skew matrix of the measured acceleration and the accelerometer bias (page 40, equation 204). Since I have removed that from my states I can't use it, but then the measured acceleration is never considered (which I assume makes the filter worse). Is there any change I can make to the transition matrix so that it accounts for the acceleration?
I want to control a DVD drive's stepper motor with an Arduino Uno, but at the moment I'm not sure what kind of motor driver will work for that. I am currently looking at this one. I have no idea about the specs of a motor driver, so any help would be greatly appreciated!
I'm trying to mount a rod on a micro servo's horn. The horn has several 1mm wide through holes. Does anyone know what sort of (tapping?) screws should be used for tapping 1mm plastic holes? Are M1.2 too small since the threads will be just 0.1mm deep? Are there canonical approaches to choosing the appropriate self-tapping screw size given hole diameter?
I have seen this example ( http://in.mathworks.com/matlabcentral/fileexchange/14932-3d-puma-robot-demo/content/puma3d.m) in file exchange and want to do similar thing with 4 dof rootic arm. I follow below steps. 1. Create a very simple 4 dof roots links using Solid Works and convert it into .stl file (ASCII) by using cad2matdemo.m file and store all data manually. 2. Alter the code according to my requirements. But I'm unable to create 3d model in Matlab GUI. My code is given below. function rob3d loaddata InitHome function InitHome % Use forward kinematics to place the robot in a specified % configuration. % Figure setup data, create a new figure for the GUI set(0,'Units','pixels') dim = get(0,'ScreenSize'); % fig_1 = figure('doublebuffer','on','Position',[0,35,dim(3)-200,dim(4)-110],... % 'MenuBar','none','Name',' 3D Puma Robot Graphical Demo',... % 'NumberTitle','off','CloseRequestFcn',@del_app); fig_1 = figure('doublebuffer','on','Position',[0,35,dim(3)-200,dim(4)-110],... 'MenuBar','figure','Name',' 3D Puma Robot Graphical Demo',... 'NumberTitle','off'); hold on; %light('Position',[-1 0 0]); light % add a default light daspect([1 1 1]) % Setting the aspect ratio view(135,25) xlabel('X'),ylabel('Y'),zlabel('Z'); title('Robot'); axis([-1000 1000 -1000 1000 -1000 1000]); plot3([-1500,1500],[-1500,-1500],[-1120,-1120],'k') plot3([-1500,-1500],[-1500,1500],[-1120,-1120],'k') plot3([-1500,-1500],[-1500,-1500],[-1120,1500],'k') plot3([-1500,-1500],[1500,1500],[-1120,1500],'k') plot3([-1500,1500],[-1500,-1500],[1500,1500],'k') plot3([-1500,-1500],[-1500,1500],[1500,1500],'k') s1 = getappdata(0,'Link1_data'); s2 = getappdata(0,'Link2_data'); s3 = getappdata(0,'Link3_data'); s4 = getappdata(0,'Link4_data'); s5 = getappdata(0,'Link5_data'); a2 = 300; a3 = 300; a4 = 300; d1 = 300; d2 = 50; d3 = 50; d4 = 50; %The 'home' position, for init. t1 = 0; t2 = 0; t3 = 0; t4 = 0; % Forward Kinematics % tmat(alpha, a, d, theta) T_01 = tmat(90, 0, d1, t1); T_12 = tmat(0, a2, d2, t2); T_23 = tmat(0, a3, d3, t3); T_34 = tmat(0, a4, d4, t4); % Each link fram to base frame transformation T_02 = T_01*T_12; T_03 = T_02*T_23; T_04 = T_03*T_34; % Actual vertex data of robot links Link1 = s1.V1; Link2 = (T_01*s2.V2')'; Link3 = (T_02*s3.V3')'; Link4 = (T_03*s4.V4')'; Link5 = (T_04*s5.V5')'; % points are no fun to watch, make it look 3d. L1 = patch('faces', s1.F1, 'vertices' ,Link1(:,1:3)); L2 = patch('faces', s2.F2, 'vertices' ,Link2(:,1:3)); L3 = patch('faces', s3.F3, 'vertices' ,Link3(:,1:3)); L4 = patch('faces', s4.F4, 'vertices' ,Link4(:,1:3)); L5 = patch('faces', s5.F5, 'vertices' ,Link5(:,1:3)); Tr = plot3(0,0,0,'b.'); % holder for trail paths setappdata(0,'patch_h',[L1,L2,L3,L4,L5,Tr]); % set(L1, 'facec', [0.717,0.116,0.123]); set(L1, 'EdgeColor','none'); set(L2, 'facec', [0.216,1,.583]); set(L2, 'EdgeColor','none'); set(L3, 'facec', [0.306,0.733,1]); set(L3, 'EdgeColor','none'); set(L4, 'facec', [1,0.542,0.493]); set(L4, 'EdgeColor','none'); set(L5, 'facec', [0.216,1,.583]); set(L5, 'EdgeColor','none'); % setappdata(0,'ThetaOld',[0,0,0,0]); % end function T = tmat(alpha, a, d, theta) % tmat(alpha, a, d, theta) (T-Matrix used in Robotics) % The homogeneous transformation called the "T-MATRIX" % as used in the Kinematic Equations for robotic type % systems (or equivalent). % % This is equation 3.6 in Craig's "Introduction to Robotics." % alpha, a, d, theta are the Denavit-Hartenberg parameters. % % (NOTE: ALL ANGLES MUST BE IN DEGREES.) % alpha = alpha*pi/180; %Note: alpha is in radians. 
theta = theta*pi/180; %Note: theta is in radians. c = cos(theta); s = sin(theta); ca = cos(alpha); sa = sin(alpha); T = [c -s*ca s*sa a*c; s c*ca -c*sa a*s; 0 sa ca d; 0 0 0 1]; end function del_app(varargin) delete(fig_1); end function loaddata % Loads all the link data from file linksdata.mat. % This data comes from a Pro/E 3D CAD model and was made with cad2matdemo.m % from the file exchange. All link data manually stored in linksdata.mat [linkdata1]=load('linksdata.mat','s1','s2','s3','s4','s5'); %Place the robot link 'data' in a storage area setappdata(0,'Link1_data',linkdata1.s1); setappdata(0,'Link2_data',linkdata1.s2); setappdata(0,'Link3_data',linkdata1.s3); setappdata(0,'Link4_data',linkdata1.s4); setappdata(0,'Link5_data',linkdata1.s5); end end The figure below shows the desired model and the model that actually comes out. All other things that may be useful (like the linksdata file, SW model, etc.) I shared on Dropbox; anybody can access them from there. Dropbox link: https://www.dropbox.com/sh/llwa0chsjuc1iju/AACrOTqCRBmDShGgJKpEVAlOa?dl=0 I want to know how to connect two components of a 3D model in a MATLAB GUI. Any study about this will be very helpful. Thanks.
In this paper by J. W. Burdick, the main result basically says that for a redundant manipulator, infinity solutions corresponding to one end-effector pose can be grouped into a finite set of smooth manifolds. But later in the paper, the author said only revolute jointed manipulators would be considered in the paper. Does this result (grouping of solutions into a finite set of manifolds) hold for redundant robots with prismatic joint(s) as well? Is there any significant difference in analysis and result when prismatic joints are included? So far I couldn't find anyone explicitly address the case of robots with prismatic joints yet. (I am not sure if this site or math.stackexchange.com would be the more appropriate place to post this question, though.)
Currently I am working on a humanoid robot using the inverted pendulum model and LQR as the walking stabilizer. The input u is torque, the state x is angle and angular velocity, and the output y is angle. I got the gain K value that meets my control specification (rise time, steady-state error, etc.) and the feedback control law as shown. So now I have the u (torque) value, but I don't know how to use u (torque) to move my actuator (to control the 2 ankle servos), because my servos only move using angle as the command input, not torque. Is there any step that must be done to convert torque to angle, or something? Thank you for the help; I need this for my final project.
I'm using OpenCV 3 in Python 2.7 on a Raspberry Pi 3. My project's aim is to build an autonomous lane departing robot that can detect the two lanes on its sides and continuously correct itself to remain within them. I want to achieve something like this project: https://www.youtube.com/watch?v=R_5XhnmDNz4 So far I've done the line detection part from the live video feed using both HoughLines and HoughLinesP. Here is a screenshot from my video feed and the outputs I'm getting so far: Till now my logic for detecting if the robot is going left or right is based on the (rho,theta) output of the HoughLines function. What I want to achieve is a more robust way of tracking how the robot is departing from the lanes. Some sort of central line marker that can be used to detect if the robot has moved away from the center. I'm still new to OpenCV and python and the part where I'm stuck at is converting the logic of detecting the lane departure of the robot. My understanding is that averaging the lines on the lanes into two lines (left and right lanes) and then working with their slopes should give some result. However, I've not been able to transform this into code. I'd appreciate any suggestions on ways to detect lane departure of the robot. Thanks!! :)
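One way to turn the averaging idea into code (a minimal sketch, assuming the segments come from cv2.HoughLinesP and that the lanes appear as left- and right-sloping lines in image coordinates; names are illustrative):

```python
import numpy as np

def lane_offset(lines, frame_width, frame_height):
    """Return pixels between lane centre and image centre (positive: robot drifted left)."""
    left, right = [], []
    for x1, y1, x2, y2 in lines[:, 0]:            # HoughLinesP gives shape (N, 1, 4)
        if x2 == x1:
            continue
        slope = (y2 - y1) / float(x2 - x1)
        if abs(slope) < 0.3:                      # skip near-horizontal clutter
            continue
        (left if slope < 0 else right).append((slope, y1 - slope * x1))
    if not left or not right:
        return None                               # one lane not visible this frame
    def x_at_bottom(side):
        slope, intercept = np.mean(side, axis=0)  # average the side into one line
        return (frame_height - intercept) / slope # where it crosses the bottom of the image
    lane_center = 0.5 * (x_at_bottom(left) + x_at_bottom(right))
    return lane_center - frame_width / 2.0
```

Feeding that offset (and optionally its rate of change) into a steering PID is usually more stable than reasoning about raw (rho, theta) pairs frame by frame.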
I have a pretty simple 2d manipulator which uses an arduino to control a payload weighing about 2kg. I want to implement a simple 2d path planner which takes as input: Current position, velocity, acceleration as 2d vectors Target position as 2d vector Bounding box as min/max x/y Maximum acceleration Maximum Jerk ... and outputs a path (function from time to (x,y)) which leads me to the target point as quickly as possible without violating the constraints. I want to specify the initial velocity and acceleration (not just position) because a movement instruction might interrupt a movement already underway. I want to specify the bounding box so that my payload doesn't hit any walls. I want to specify a maximum acceleration so that the inertia of my payload doesn't overwhelm my control authority. I want to specify a maximum jerk so that the springiness of my manipulator doesn't absorb some acceleration and whip it back at the end of travel. (I'm not sure whether I really care about maximum jerk except at the end of travel.) Now I don't think the math here is very hard. I don't want to reinvent the wheel, but neither do I want to spend a week learning how to use a complicated and overpowered general kinematics library. Is there a very simple library that I can plug into my arduino IDE that could accomplish this for me?
I am doing inverse dynamics in MATLAB SimMechanics, in which position and orientation are the inputs to the end effector. But it shows the error below. Error originates in Mechanical block rob33/Subsystem2/Subsystem1/Custom Joint. The coordinate systems attached to this joint must lie on the prismatic axis (for 1 axis) or in the plane of the prismatic axes (for 2 axes). If joint has no prismatic axes, the attached coordinate systems must be collocated, within tolerances. The model is given below. In Subsystem1, I used a custom joint to give the orientation input. The Subsystem1 model and the custom joint block parameters are given in the figure below. The parameters of link1-1 and the parallel constraints are given in the figure below. How do I correct this error?
I am new to electronics. This question has been buzzing in my head since I first saw the H-bridge configuration. Please answer the question briefly.
I have written MATLAB code to solve the IK for a 6 DOF robotic arm. I use Newton's method to numerically solve the IK. I also use Tikhonov regularization to handle badly conditioned Jacobians. It works fast and reliably when I just want to move the last link to a certain position, i.e. when I use the difference between the X, Y, Z coordinates as the condition to break the loop of Newton's method. But when I also want to get to the right orientation (using the difference between Euler angles as the break condition) it takes a very long time - 2, 5, 10 minutes or even more, regardless of whether I want to get to the right coordinates as well or not. So my questions are: How can I accelerate the calculations, or why is it so slow? Can I use quaternions instead of Euler angles? Quaternions will increase the dimension of the Jacobian so it will no longer be a square matrix, and it will not be possible to use the Tikhonov regularization that works so well. How often do people use numerical methods to solve such things? I saw many examples using analytical solutions but not numerical ones. How can I be sure that the program will find a solution using Newton's method, and that it will find it in a finite number of iterations? UPD: here is my MATLAB code, rewritten using the damped least squares method and quaternions. But I still have the same problem. In this code we move along a trajectory, but we can remove it and try to jump directly to the destination point. %Derivative step for Jacobian composing step = 0.01; %Generalized coordinates for start position q_prev = [34; 89; 1; 1; 89; 0]; %Generalized coordinates for end position. To be sure we can reach it q_fin = [170; 150; 120; 156; 9; 158]; %get_coordinates() function returns 4x4 matrix of homogeneous transformations. It contains forward kinematics equations %Coordinates we are at A_forward = get_coordinates( q_prev ); %Coordinates we need to reach Dest = get_coordinates( q_fin ); %Getting rotation matrices for start and finish positions rotmat_curr = A_forward(1:3, 1:3); rotmat_dest = Dest(1:3, 1:3); %Matrix_to_quat() - is my analog of rotm2quat() function %Getting quaternions for start and finish positions quat_curr = matrix_to_quat(rotmat_curr); quat_dest = matrix_to_quat(rotmat_dest); %Next steps are not important, but i still comment them %Here i make a trajectory, and move along it with small steps. 
It was %needed for Newton's method but also useful if it is needed to move along a %real trajectory %X coordinate Y coordinate Z coordinate Quaternion coordinates_current = [ A_forward(1,4); A_forward(2,4); A_forward(3,4); quat_curr ]; coordinates_destination = [ Dest(1,4); Dest(2,4); Dest(3,4); quat_dest ]; %Coordinate step step_coord = 5; %Create table - trajectory distance = sqrt( (coordinates_destination(1) - coordinates_current(1)).^2 + (coordinates_destination(2) - coordinates_current(2)).^2 +(coordinates_destination(3) - coordinates_current(3)).^2 ); %Find out the number of trajectory points num_of_steps = floor(distance / step_coord); %Initialize trajectory table table_traj = zeros(7,(5*num_of_steps)); %Calculate steps size for each coordinate step_x = (coordinates_destination(1) - coordinates_current(1)) / num_of_steps; step_y = (coordinates_destination(2) - coordinates_current(2)) / num_of_steps; step_z = (coordinates_destination(3) - coordinates_current(3)) / num_of_steps; step_qw = (coordinates_destination(4) - coordinates_current(4)) / num_of_steps; step_qx = (coordinates_destination(5) - coordinates_current(5)) / num_of_steps; step_qy = (coordinates_destination(6) - coordinates_current(6)) / num_of_steps; step_qz = (coordinates_destination(7) - coordinates_current(7)) / num_of_steps; new_coord = coordinates_current; %Fill trajectory table for ind = 1:num_of_steps new_coord = new_coord + [step_x; step_y; step_z; step_qw; step_qx; step_qy; step_qz]; table_traj(:,ind) = new_coord; end; %Set lambda size. I found out that algorithm works better when lambda is %small lambda = 0.1; %In next steps i inialize Jacobian, build new destination matrix, calculate %orientation error at the first step. As orientation error i use max %element of quaternions difference. for ind = 1:num_of_steps J = zeros(7, 6); %quat_to_matrix() - analog of quat2rotm() function rot_matr = quat_to_matrix(table_traj(4:7, ind)); Dest = [ rot_matr, table_traj(1:3, ind); 0, 0, 0, 1 ]; %mat_to_coord_quat() function takes matrix of homogeneous %transformations and returns 7x1 vector of coorditaes %X Y Z and quaternion differ = mat_to_coord_quat(Dest) - mat_to_coord_quat(A_forward); error = max(abs(differ(4:6))); %Here is the algorithm. It works until we have coordinates and %orientation error less that was set while (abs(differ(1)) > 0.05) || (abs(differ(2)) > 0.05) || (abs(differ(3)) > 0.05) || error > 0.01 %first - calculating Jacobian for ind2 = 1:6 %for every coordinate %Calculating of partial derivatives: q_prev_m1 = q_prev; q_prev_m1(ind2) = q_prev_m1(ind2) - step; q_prev_p1 = q_prev; q_prev_p1(ind2) = q_prev_p1(ind2) + step; Fn1 = mat_to_coord_quat(get_coordinates(q_prev_m1)); % in q_prev vector ind1 element is one step smaller than in original q_prev Fn2 = mat_to_coord_quat(get_coordinates(q_prev_p1)); % in q_prev vector ind1 element is one step bigger than in original q_prev deltaF = Fn2 - Fn1; %delta functions vector deltaF = deltaF/(2*step); %devide by step to get partial derivatives for every function %composing Jacobian from column of partian derivatives J(:,ind2) = deltaF; end; %Next according to damped least squares method %calculate velosities along all coordinates A_forward = get_coordinates( q_prev ); differ = mat_to_coord_quat(Dest) - mat_to_coord_quat(A_forward); %calculating generalized coordinates velosities dq = (J.'*J + lambda * eye(6))\ J.' * differ; %integrate generalized coordinates velosities q_prev = q_prev + dq; %calculate max orientation error error = max(abs(differ(4:7))); end; end;
I am planning to make a crawling bot which will follow a baby, with a pillow on board. The problem with the project is: suppose the baby falls; the bot should arrive exactly at the place where the baby's head will hit the ground. Any idea how I can make the bot move to that point before the baby's head hits the ground?
I was trying to run this tutorial in python code, but i got above error when i try to run it. My python converted program of given tutorial is: #!/usr/bin/env python import rospy from sensor_msgs.msg import JointState from std_msgs.msg import Header import tf import geometry_msgs.msg import math def talker(): pub = rospy.Publisher('joint_states', JointState, queue_size=1) rospy.init_node('state_publisher') broadcaster = tf.TransformBroadcaster() rate = rospy.Rate(30) # 10hz M_PI = 3.145 degree = M_PI/180; # robot state tilt = 0 tinc = degree swivel=0 angle=0 height=0 hinc=0.005 # message declarations t = geometry_msgs.msg.TransformStamped() hello_str = JointState() t.header.frame_id = "odom" t.child_frame_id = "axis" while not rospy.is_shutdown(): # update joint_state hello_str.header.stamp = rospy.Time.now() hello_str.name = ['swivel','tilt','periscope'] hello_str.position = swivel hello_str.velocity = tilt hello_str.effort = height t.header.stamp = rospy.Time.now() t.transform.translation.x = math.cos(angle)*2 t.transform.translation.y = math.sin(angle)*2 t.transform.translation.z = .7 #t.transform.rotation = tf.createQuaternionMsgFromYaw(angle+M_PI/2) t.transform.rotation = tf.transformations.quaternion_from_euler(0, 0, angle) # send the joint state and transform pub.publish(hello_str) broadcaster.sendTransform(t) # Create new robot state tilt += tinc if (tilt<-.5 or tilt>0): tinc *= -1 height += hinc if (height>.2 or height<0): hinc *= -1 swivel += degree angle += degree/4 rate.sleep() if __name__ == '__main__': try: talker() except rospy.ROSInterruptException: pass How can i remove this, i am using ros indigo. Even if i change that line with : broadcaster.sendTransform((0.5,1.0,0),tf.transformations.quaternion_from_euler(0, 0, angle+M_PI/2),rospy.Time.now(),"odom","axis") Its not working. Than it shows error: field position must be a list or tuple type [state_pub-2] process has died [pid 11654, exit co
I'm working with a robotic arm and needed to compute its Jacobian matrix in order to send torque commands. The arm has 6 joints, all revolute. After calculating the Jacobian matrix from the DH parameters provided in the datasheet, I noticed that the Jacobian depends only on the first five joints. The sixth joint corresponds to the hand, which only rotates about its own axis. My question here is: can the Jacobian have no dependency on a joint? In which cases can this happen? Thanks
I'm new to robotics and I've been reading some slides online regarding motion planning. Due to my lack of knowledge in mechanical engineering, I'm having a difficult time understanding what holonomic and non-holonomic constraints mean. I saw a post here and it says Holonomic system is when a robot can move in any direction in the configuration space, and Nonholonomic systems are systems where the velocities (magnitude and or direction) and other derivatives of the position are constraint. It seems like holonomic system differs from holonomic constraint. What is holonomic constraint and when do we need it? What is non-holonomic constraint and when do we need that? Thanks in advance.
I'm at the stage where I assembled a balancing robot and it's not maintaining a stable position. This is not a surprise I just started testing last night. My code is here, views of the device are here. Briefly, it's based on a teensy 3.2, a brushless motor controller that receives I2C commands that drives brushless gimbal motors. It uses an MPU9250 for angle measurement. I'm using PID control, and I made a tkinter-based interface that allows me to send it P/I/D values for realtime testing. I plan on implementing a bluetooth based serial to reduce wires going to the device. At this stage I'm not asking people for specific help on debugging what's going wrong, I'm asking about a general strategy for testing. I have used the RAM on the teensy before to record PID response time and then send that data to pyplot, which was very informative before. I was wondering if it would be a good idea to detach my wheels and mount the motors to a rigid pedestal - and to do some PID tuning using that system to tweak the wobbliness/stability. My reasoning being "hey if I can't get this thing to stay upright when it's rigidly mounted to the bench, why would it work when it's got wheels on it?" Are there any comments on this strategy, and would anyone want to offer other ways to go at the problem at this point? Yes, I've read the many posts on PID tuning, I'll follow them as best I can. I can post pictures and other code examples but newbies only get to put two links into the OP.
I know at least 3 different solutions to the inverse kinematics problem: the pseudo-inverse Jacobian, cyclic coordinate descent, and ANFIS networks. I would like to know their advantages and disadvantages compared to each other.
I have a slam algorithm that outputs at around 30Hz, an implementation of ORBSLAM2. https://github.com/raulmur/ORB_SLAM2 I am reading this into a renderer that expects 60+ Hz. Because my sample speed is low, I am getting 'shuddering' in the display, as the renderer adds linear 'steps' between the samples. For example, I am seeing a result like: time sample result 1 20 20 2 n/a 20 3 n/a 20 4 22 22 5 n/a 22 6 n/a 22 7 24 24 8 n/a 24 9 n/a 24 What i need to do, is predict the next sample, and fill in the gaps, so to speak, so that I end up with something like: time sample result 1 20 20 2 n/a 20.66 3 n/a 21.33 4 22 22 5 n/a 22.66 6 n/a 23.66 7 24 24 8 n/a 24.33 9 n/a 25.66 I need to predict 6DOF, for which i have translation xyz, and a quaternion xyzw. But if I can find a way to predict even one axis, for a start, that would be great. I have the data outputting as xyz and xyzw, at around 30Hz. I also have an xsens IMU, which i am using to pass in an initial rotation value. Can i use a predictive filter for this purpose? Is a kalman suitable? I am looking at: https://github.com/simondlevy/TinyEKF and a Bayes filter: http://bayesclasses.sourceforge.net/Bayes++.html But am a little out of my depth. Thank you, please ask if I have not made sense!
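A Kalman filter (the EKF you link would do) is suitable, but for pure upsampling you can already get a long way with a simple constant-velocity predictor; the sketch below (illustrative names, translation only, rotation just held) extrapolates the last observed velocity between SLAM updates:

```python
import numpy as np

class PosePredictor:
    def __init__(self):
        self.t_prev = None
        self.p_prev = None            # xyz translation
        self.q_prev = None            # quaternion xyzw
        self.v = np.zeros(3)          # estimated linear velocity

    def update(self, t, p, q):        # call whenever a ~30 Hz SLAM pose arrives
        p = np.asarray(p, dtype=float)
        if self.t_prev is not None and t > self.t_prev:
            self.v = (p - self.p_prev) / (t - self.t_prev)
        self.t_prev, self.p_prev, self.q_prev = t, p, np.asarray(q, dtype=float)

    def predict(self, t):             # call from the 60+ Hz render loop
        p = self.p_prev + self.v * (t - self.t_prev)   # extrapolate translation
        return p, self.q_prev         # rotation could similarly be slerped/extrapolated
```

An EKF with a constant-velocity or constant-acceleration model (optionally fusing the Xsens gyro for rotation) is the natural next step: it gives the same prediction between updates but also smooths the jump when a new SLAM sample arrives.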
I am doing research on why you cannot integrate IMU acceleration to get velocity. Everyone says you cannot do that due to there being error, but what error is this exactly and where does it come from?
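The short version: the accelerometer signal contains a slowly varying bias, white noise, scale-factor and misalignment errors, plus any error in removing gravity using the estimated attitude; integrating once turns the bias into a velocity error that grows linearly with time, and integrating twice turns it into a position error that grows quadratically. A minimal simulation (with assumed, typical error magnitudes) makes the effect concrete:

```python
import numpy as np

dt, T = 0.01, 60.0                          # 100 Hz samples for 60 s
n = int(T / dt)
bias = 0.02                                 # m/s^2 constant uncorrected bias (assumed)
noise = 0.05                                # m/s^2 white-noise std-dev (assumed)

accel = bias + noise * np.random.randn(n)   # the true acceleration is zero (sensor at rest)
vel = np.cumsum(accel) * dt                 # first integration
pos = np.cumsum(vel) * dt                   # second integration

print("velocity error after 60 s: %.2f m/s" % vel[-1])   # ~ bias * T
print("position error after 60 s: %.1f m" % pos[-1])     # ~ 0.5 * bias * T^2
```

The white-noise part alone makes the velocity error grow like a random walk ($\propto\sqrt{t}$), but in practice the bias term dominates, which is why unaided integration diverges within seconds to minutes.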
I have to study the controllability of the kinematic model of a Cycab: $\dot{q}=g_1(q)v+g_2(q)\omega_R+g_3(q)\omega_L$ where $\dot{q}=\begin{bmatrix}\dot{x}\\\dot{y}\\\dot{\theta}\\\dot{\gamma}\\\dot{\phi}\end{bmatrix}$ $g_1(q)=\begin{bmatrix}cos(\theta+\gamma)\\sin(\theta+\gamma)\\\frac{sin(\phi-\gamma)}{lcos(\phi)}\\0\\0\end{bmatrix}$ $g_2(q)=\begin{bmatrix}0\\0\\0\\1\\0\end{bmatrix}$ $g_3(q)=\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix}$ with $x$ and $y$ the Cartesian coordinates of the midpoint of the rear segment joining the two rear wheels and $\theta$ the direction of the midpoints of the two segments joining the wheels centers with respect to the axis $x$. So, I have to study the accessibility distribution $\{g_1,g_2,g_3.[g_1,g_2],[g_2,[g_1,g_2]],...\}$ so I computed $[g_1,g_2]=\begin{bmatrix}sin(\theta+\gamma)\\-cos(\theta+\gamma)\\\frac{cos(\phi-\gamma)}{lcos(\phi)}\\0\\0\end{bmatrix}$ $[g_2,[g_1,g_2]]=\begin{bmatrix}-cos(\theta+\gamma)\\-sin(\theta+\gamma)\\\frac{-sin(\phi-\gamma)}{lcos(\phi)}\\0\\0\end{bmatrix}$ so the rank of $[g_1,g_2,g_3,[g_1,g_2],[g_2,[g_1,g_2]]]$ is equal to 5 so we can say that the system is controllable. Now, is it correct to study the accessibility distribution without using the vector field $g_3$, so is it correct to say that the system is controllable without using the vector field $g_3$?
I have archlinux indigo ros. My probelm is that when I type in terminal: $ gzclient Segmentation fault (core dumped) ... $ roslaunch turtlebot_gazebo turtlebot_world.launch ... logging to /home/islam/.ros/log/0f56780c-18b0-11e7-966d-642737d9d3b9/roslaunch-CatchMe-11800.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://localhost:44337/ SUMMARY ======== PARAMETERS * /bumper2pointcloud/pointcloud_radius: 0.24 * /cmd_vel_mux/yaml_cfg_file: /opt/ros/indigo/s... * /depthimage_to_laserscan/output_frame_id: /camera_depth_frame * /depthimage_to_laserscan/range_min: 0.45 * /depthimage_to_laserscan/scan_height: 10 * /robot_description: <?xml version="1.... * /robot_state_publisher/publish_frequency: 30.0 * /rosdistro: indigo * /rosversion: 1.11.20 * /use_sim_time: True NODES / bumper2pointcloud (nodelet/nodelet) cmd_vel_mux (nodelet/nodelet) depthimage_to_laserscan (nodelet/nodelet) gazebo (gazebo_ros/gzserver) gazebo_gui (gazebo_ros/gzclient) laserscan_nodelet_manager (nodelet/nodelet) mobile_base_nodelet_manager (nodelet/nodelet) robot_state_publisher (robot_state_publisher/robot_state_publisher) spawn_turtlebot_model (gazebo_ros/spawn_model) auto-starting new master process[master]: started with pid [11824] ROS_MASTER_URI=http://localhost:11311 setting /run_id to 0f56780c-18b0-11e7-966d-642737d9d3b9 process[rosout-1]: started with pid [11837] started core service [/rosout] process[gazebo-2]: started with pid [11852] process[gazebo_gui-3]: started with pid [11861] process[spawn_turtlebot_model-4]: started with pid [11868] process[mobile_base_nodelet_manager-5]: started with pid [11873] process[cmd_vel_mux-6]: started with pid [11878] process[bumper2pointcloud-7]: started with pid [11880] process[robot_state_publisher-8]: started with pid [11882] process[laserscan_nodelet_manager-9]: started with pid [11896] process[depthimage_to_laserscan-10]: started with pid [11905] /opt/ros/indigo/lib/gazebo_ros/gzclient: line 25: 11916 Segmentation fault (core dumped) GAZEBO_MASTER_URI="$desired_master_uri" gzclient $final [gazebo_gui-3] process has died [pid 11861, exit code 139, cmd /opt/ros/indigo/lib/gazebo_ros/gzclient __name:=gazebo_gui __log:=/home/islam/.ros/log/0f56780c-18b0-11e7-966d-642737d9d3b9/gazebo_gui-3.log]. log file: /home/islam/.ros/log/0f56780c-18b0-11e7-966d-642737d9d3b9/gazebo_gui-3*.log /opt/ros/indigo/lib/gazebo_ros/gzserver: line 30: 11978 Segmentation fault (core dumped) GAZEBO_MASTER_URI="$desired_master_uri" gzserver $final [gazebo-2] process has died [pid 11852, exit code 139, cmd /opt/ros/indigo/lib/gazebo_ros/gzserver -e ode /opt/ros/indigo/share/turtlebot_gazebo/worlds/playground.world __name:=gazebo __log:=/home/islam/.ros/log/0f56780c-18b0-11e7-966d-642737d9d3b9/gazebo-2.log]. log file: /home/islam/.ros/log/0f56780c-18b0-11e7-966d-642737d9d3b9/gazebo-2*.log and here is the output of debug: Reading symbols from gzserver...(no debugging symbols found)...done. [Thread debugging using libthread_db enabled] Using host libthread_db library "/usr/lib/libthread_db.so.1". [New Thread 0x7fffd4e6d700 (LWP 16809)] [New Thread 0x7fffd038d700 (LWP 16810)] Thread 1 "gzserver" received signal SIGSEGV, Segmentation fault. 0x00007ffff2838390 in gazebo::event::Connection::Id() const () from /usr/lib/libgazebo_common.so.8 My GPU is intel HD3000 , core i5 INSPIRON N5110. I have noted that intel hd3000 works with gazebo Thanks in advance.
Can a compact 4-node Raspberry Pi 3 cluster be powerful enough to process video-streaming input data in real time for a drone? Thank you for any answer.
I have raw GPS data (see end of the question for details if you think the protocols are relevant), and I need to extract/compute ephemeris and pseudoranges (my goal is to replace the recursive least squares that the receiver solves to compute the position with a home-brewed method based on sensor fusion). Concerning the ephemeris, I have access to the 3 first subframes of each of the 25 frames of the GPS message. Documentations/books/etc that I have found just vaguely mention that the first subframe contains clock data, and the two others contain ephemeris data. However, none of them precisely says what the words in these subframes are, and how I can use them to compute orbital information (I'm assuming that I only want satellite positions?). Can anyone give me pointers to some references on how to do this? Or (even better), is there any open-source code that already implements this? I really appreciate your help. Details on the data: They have been transmitted by a Ublox EVK-7p using the UBX RXM-EPH and RXM-RAW protocols. RXM-EPH: satellite ID (4 byes), HOW word (4 bytes), followed by three 32-byte arrays corresponding to subrames 1 to 3 of the GPS message. RXM-RAW: time of week, week, num of satellites, reserved (?) , carrier phase, pseudo-range, doppler, and so on.... Update: I found this which seems to answer most of my questions. It is from 1995 though.
Hope this is the right place to ask.. I'm a Mechatronic Engineering student and am having difficulty programming the mill to cut diagonal edges of contours. I've worked through this tutorial which I understand. The problem I'm having is milling around the outside of the diagonal edge, not on it. I hope that makes sense. I'm unsure about the tool positioning at the start/end of cut, for example. Can someone explain how to do it? Thanks.
Most servos and steppers I've worked with are able to spin manually when no power is applied, but I've come across a couple which are stuck in their position. I'm not sure if they are just very difficult to spin manually, or if they would break if forced. For example, I have this stepper and servo which seem impossible to turn manually. Is there a specification that would tell you whether or not the motor can spin without power?
I am currently working on a line-follower buggy and have managed to tune the PID constants manually. The buggy follows the line at a moderate speed. I would now like to take things further and learn new things as well. I read about Q-learning and would like to ask if what I am about to implement is on the right track. I have chosen: three states: the last three positions of the line sensors; three rewards: middle position, end of track, and less wobbling (measured with a gyroscope); four actions: $Kp$, $Ki$, $Kd$, and max speed. The computation will be done on a PC, as the robot is wirelessly connected. Am I on the right track? How do I make the 3 constants have "states", because as I understand it, the actions have to be non-analog? Do I create a range of numbers close to the constants I have now and let the Q-learning decide which is best? (It's inefficient to just try random numbers.)
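On making the constants discrete: one common formulation (an assumption, not the only one) is to keep the gains continuous but make the *actions* small discrete tweaks around your hand-tuned values, while the *state* is a coarse bin of the recent sensor readings. A minimal sketch:

```python
import random
from collections import defaultdict

ACTIONS = [("Kp", +0.1), ("Kp", -0.1),
           ("Ki", +0.01), ("Ki", -0.01),
           ("Kd", +0.05), ("Kd", -0.05),
           (None, 0.0)]                       # "leave the gains alone"

Q = defaultdict(float)                        # Q[(state, action_index)] -> value
alpha, gamma, eps = 0.1, 0.9, 0.2             # learning rate, discount, exploration

def choose_action(state):
    if random.random() < eps:                 # explore
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])   # exploit

def apply_action(action_index, gains):        # gains = {"Kp": ..., "Ki": ..., "Kd": ...}
    name, delta = ACTIONS[action_index]
    if name is not None:
        gains[name] = max(0.0, gains[name] + delta)

def learn(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Each episode (one lap or one track segment) you would bin the sensor history into a state, pick and apply an action, run the buggy, compute the reward from wobble and progress, and call learn(). Starting the search near your manually tuned values, as you suggest, is much more sample-efficient than trying random numbers.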
Using a PID with encoders, I can make the robot move straight, but there is a 0.5 degree drift and it eventually hits a wall, so I need to adjust to center it between the two walls. I have a sensor on each side that gives me the distance from the wall. What's the best approach to make the robot adjust when it comes too close to one wall?
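A simple and common approach (a sketch of the idea, with assumed names and gain): keep your encoder PID for going straight, and add a second correction term driven by the difference between the two wall distances, which is zero exactly when the robot is centred:

```python
def centering_correction(dist_left, dist_right, k_center=0.8):
    error = dist_left - dist_right          # zero when centred between the walls
    return k_center * error                 # P-term; add a D-term on the error if it oscillates

# inside the drive loop (illustrative):
# correction  = centering_correction(read_left_sensor(), read_right_sensor())
# left_speed  = base_speed - correction
# right_speed = base_speed + correction
```

Using the left/right difference rather than the absolute distance to one wall means a corridor that narrows or widens does not pull the robot off centre, and it also quietly compensates the 0.5 degree encoder drift.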
We need a SLAM system for a mobile platform which will be used in on-site construction. We are to build a 3D map from point clouds and need localization within it. We don't need fast localization, but we need high accuracy: the localization error should be around 1 mm at a distance of up to 5 meters. Which sensor will suit this - will stereo vision alone be enough (can it give us the required accuracy?), or only a LiDAR sensor (what about its accuracy and usage in direct sunlight?), or do we need to use something different and combine several types of sensors? Can you advise the best solution for this task?
I was having a chat with a robotics expert recently. He told me that for an arm with motor drives running at 5 kHz you are to set up the control so that someone can grasp the end effector and move it around (something like the usual teach mode for an arm). Does anyone know what kind of control this expert was referring to? What setup does this involve? Any documentation on this subject? Any input is welcome...
For a four-legged robot (like Big Dog or the one shown here) how are the joint angles and "feet" position related to the body's frame in the world/inertial frame? For example, if I know the body's position and orientation in the world frame, and the joint angles, how do I derive the relationship that tells me where the robots "feet" are? For simplification, if I assume the legs can be represented as a planar 3R manipulator (where the end effector is the foot), it's easy enough to derive the relationship between the end effector and the angles. But the "base" is the robot's body, which will change position and orientation when the joint angles change. So do I have to find the matrix which relates the body to the world frame, then find the position of the foot with respect to the world? Or am I thinking of this the wrong way?
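For what it's worth, the chain of transforms you describe is exactly the standard approach; a sketch of the composition, with the leg treated as a planar 3R chain rooted at its hip mounting frame:

$$T^{w}_{foot} = T^{w}_{body}\; T^{body}_{hip}\; T^{hip}_{foot}(q_1, q_2, q_3)$$

where $T^{w}_{body}$ is the homogeneous transform built from the body's estimated position and orientation in the world frame, $T^{body}_{hip}$ is the fixed offset of that leg's mounting point on the body, and $T^{hip}_{foot}(q)$ is the 3R forward kinematics you already derived. The foot position in the world frame is then the translation part of $T^{w}_{foot}$.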
I have a skid steer drive train with an encoder on each side of the robot along with a gyro to measure the angle of the robot. The width of the robot is 26 inches. Using the encoders I would like to set up an x and y coordinate grid to know the pose of the robot and set up the system to go through waypoints to reach a destination. The robot has a starting reference point and I would like to go to another point in the area. Anybody have an idea of how to approach this?
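A minimal odometry sketch for that setup (assumed conventions: encoder distances in inches since the last update, gyro heading in radians; for a skid-steer base it is usually better to take heading from the gyro than from the wheel difference, because the wheels slip when turning):

```python
import math

class Odometry:
    def __init__(self):
        self.x = 0.0          # inches, field frame
        self.y = 0.0
        self.theta = 0.0      # radians

    def update(self, d_left, d_right, gyro_heading_rad):
        d_center = 0.5 * (d_left + d_right)     # distance travelled by the robot centre
        self.theta = gyro_heading_rad           # trust the gyro for heading
        self.x += d_center * math.cos(self.theta)
        self.y += d_center * math.sin(self.theta)
```

For waypoint driving you then compute the distance and bearing from (x, y) to the target, turn the heading error into a steering command, and switch to the next waypoint once the distance drops below an acceptance radius.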
We are building a plane which should fly autonomously. To achieve the best results in the environment we are flying in (many hills and mountains), we need some sort of reliable height readings. We already get some information using a barometer and the Google Maps elevation API, but especially for landing and low-altitude flights we need a precise height. Most sensors have bad performance over grass (the SRF-08 achieves about 1 m; the Lidar Lite about 3-4 m). Is there some (not too expensive) sensor which can measure distances to at least 50 m with a precision of about 20 cm? Which method would be suitable for this application?
I am looking at the possibility of using LIDAR to do obstacle avoidance for a robotics project I am working on but the project involves avoiding a chain link fence. Has anyone used LIDAR to detect fences and if so how well did it work? Thanks for your help.
I'm stuck at computing forward kinematics equations. I have configuration of the first two joints like on the following image: Transformation from the origin to the first joint basis is trivial: just translation by $\vec{OO_{1}}$. The second transform from joint 1 to joint 2 basis makes me nervous throughout this day. First of all it is a rotation around $Z$ axis. So rotational part will look like this: $ R_{12}= \begin{pmatrix} cos(q_{1}) & -sin(q_{1}) & 0\\ sin(q_{1}) & cos(q_{1}) & 0\\ 0 & 0 & 1 \end{pmatrix} $ Problems are all about the translation part. I see two approaches here. Since angle between $\vec{O_{1}O_{2}}$ and plane $X_{1}O_{1}Y_{1}$ is constant because rotation is performed around $Z$ axis, length of projection of $\vec{O_{1}O_{2}}$ onto $X_{1}O_{1}Y_{1}$ is constant. Here it is: $\vec{v} = O_{2} - O_{1} = \begin{pmatrix} v_{x}\\ v_{y}\\ v_{z} \end{pmatrix} $ Its' projection onto $X_{1}O_{1}Y_{1}$ is $\vec{v_{p}} = \begin{pmatrix} v_{x}\\ v_{y}\\ 0 \end{pmatrix}$ and it's magnitude is $m=\sqrt{v_{x}^2 + v_{y}^2}=const$. Now let's look at what happens after rotation: So the translation matrix looks like: $ S_{12}= \begin{pmatrix} m\cdot cos(\alpha+q_{1})\\ m\cdot sin(\alpha+q_{1})\\ v_{z} \end{pmatrix} $ And full transformation matrix from joint 1 to joint 2 basis is: $ T_{12}= \begin{pmatrix} R_{12} & S_{12}\\ 0 & 1 \end{pmatrix} $ Unfortunately it gives me wrong results even when $q_{1}=0$. Can not see where my reasoning is wrong. Second approach is more straightforward. Being able to calculate $\vec{O_{1}O_{2}}$ in initial configuration makes it possible just to rotate this vector by $q_{1}$ around $Z$ axis and this has to be our translation vector. Nevertheless I can't make it work. $R_{z}= \begin{pmatrix} cos(q_{1}) & -sin(q_{1}) & 0\\ sin(q_{1}) & cos(q_{1}) & 0\\ 0 & 0 & 1 \end{pmatrix}\\ \vec{v} = O_{2} - O_{1} = \begin{pmatrix} v_{x}\\ v_{y}\\ v_{z} \end{pmatrix}\\ R_{z}\vec{v}= \begin{pmatrix} v_{x}cos(q_{1})-v_{y}sin(q_{1})\\ v_{x}sin(q_{1})+v_{y}cos(q_{1})\\ v_{z} \end{pmatrix}\\ T_{12}= \begin{pmatrix} R_{12} & R_{z}\vec{v}\\ 0 & 1 \end{pmatrix} $ It works until I rotate the first joint(i.e. only when $q_{1}=0$). Under works I mean "calculates position of joint 2 origin right". This is done by multiplying transformation matrix $T_{02} = T_{01}T_{12}$ by $\begin{pmatrix} 0 & 0 & 0 & 1\end{pmatrix}^{T}$
I am trying to calculate the max lifting capability of the OWI-535 arm. The robot has 3 DOF and three DC motors, and the robot's power source delivers an operating voltage of 3 volts. The motors have a stall torque of 60 g-cm. I would like to know how to do the math to calculate the lifting capacity. The robot has a wrist motion of 120 degrees, an extensive elbow range of 300 degrees, base rotation of 270 degrees, base motion of 180 degrees, vertical reach of 15 inches, horizontal reach of 12.6 inches, and lifting capacity of 100 g. Again, I am attempting to use the robot arm calculator on the Society of Robots site http://www.societyofrobots.com/robot_arm_calculator.shtml I have middle and high school students who want to be able to calculate this information.
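A rough static estimate the students can do by hand (a sketch that ignores friction, gearbox losses and the weight of the links themselves): at the worst case the load acts at the full horizontal reach, so the available joint torque must satisfy

$$\tau_{joint} \ge m_{load}\, g\, L_{reach} \quad\Rightarrow\quad m_{load} \le \frac{\tau_{joint}}{g\, L_{reach}}$$

If you keep the torque in g-cm and the reach in cm, the bound is simply $m_{load}[\mathrm{g}] \le \tau_{joint}[\mathrm{g{\cdot}cm}] / L_{reach}[\mathrm{cm}]$. Note that the 60 g-cm stall figure is the bare motor torque; the torque actually available at each joint is that value multiplied by the gear reduction of the joint's geartrain (and reduced by the moments of the links' own weights), which is how the kit can still quote a 100 g lifting capacity. The Society of Robots calculator walks through the same weight-times-distance bookkeeping link by link.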
I used an ATmega328P chip to make a car, but the car is always controlled by me. My question is about robotic arms: if the arm moves using the kinematics that I program, what chip and programming language should I use? I know of MATLAB, which works with matrices, but what chip works with it?
I almost finished my first quad, and the first time it just flipped (the back over the front, 360 degrees). The motors and props are installed correctly - I triple-checked - but I'm still a noob at this. My quad specs are: Q330 frame with Naza-M Lite (with GPS), HobbyWing 30A ESCs, RS2205 motors, FlySky FS-i6 with iA6B receiver, 3S battery. Maybe someone can help with my problem; I don't want to lose too many props or even worse. Thanks.
I'm a programmer. I had a tiny amount of experience building robots in college a few years ago, but haven't done anything since. I'd like to build a robot that can move around my house, pick items up and put them down... i.e., a robot that could get items out of the dishwasher and put them away. I was thinking it would have a square base with a wheel at each corner for moving around the floor, a scissor jack so it can adjust its height (I'm hoping to be able to move between 1 foot tall and 7 feet tall), and then a scissor jack at the top for moving a gripper towards or away from it (between a couple of inches and 3 feet away). Are two scissor jacks actually what I want? It seems like 90+% of robots with grippers go with arms instead, but it seems to me that those are more complicated and would be less precise. I have no robotic experience beyond little car-like things - I've never built one with any sort of gripper or actuator, so advice would be much appreciated. (Huh - there's not even a tag for scissor or jack... what do you guys call them?) Also... do people normally build these parts themselves, or do they buy them? I've searched around but I can't find any scissor jack kit or anything like that. If I need to build it myself... how would I do that? What would I build it out of?
I'd like to implement those algorithms by using ROS packages to solve the SLAM problem one way or another. I know that gmapping, Rviz, slam_gmapping and robot_pose_ekf (for the extended Kalman filter) could be useful packages, but I'm kind of lost. I'm not asking for a tutorial, because in the next days I'm going to start studying this subject more deeply, but I need orientation on the procedure. "a possible way to implement RANSAC algorithm" http://pointclouds.org/documentation/... Note: I'm planning to do something like this indoors: https://www.youtube.com/watch?v=17W8dkzkvWA. I'm working on Ubuntu. I have the "Kinect" (Xbox 360), and for now I don't know what kind of cheap 2WD robotic platform to choose on sites such as Amazon, RobotShop and eBay (although the CanaKit 2WD, DFRobot 2WD and AlphaBot seem to be good options). More than anything, I need orientation about how to mix everything to solve the SLAM problem with a 2WD robotic platform, the Kinect and ROS packages :) Thanks in advance
I am using the MPU6050 in conjunction with an Arduino and Jeff Rowberg's i2cdev library, and my project requires that the gyro rate outputs be more precise than the default setting, which is 1/16.4 of a degree per second (+/-2000 deg/sec range). The gyro outputs can be changed with mpu.setFullScaleGyroRange(uint8_t range), for which I passed in MPU6050_GYRO_FS_500 for range to get higher precision. In this project, I also need the YPR position, which I obtain through mpu.dmpGetQuaternion(&q, fifoBuffer) and mpu.dmpGetGravity(&gravity, &q) and mpu.dmpGetYawPitchRoll(ypr, &q, &gravity). The problem is that with the new gyro output range, the YPR position changes drastically when the MPU is being rotated and slowly catches up again once the MPU is held still. I think there is an error in the filter that combines the gyro and accel data that is making the gyro too sensitive. Maybe the DMP is dividing the gyro rate data by the default sensitivity factor (16.4) when it should be dividing by the new one (131)? How can I get accurate YPR readings without delay? Here is a screenshot of the data. The axes aren't labeled, but the x-axis represents about 16 seconds of time. The blue line is the gyro rate data, and the pink line is the roll position of the MPU. The graph shows two rotations of the MPU.
When using Paden-Kahan Sub-problems to solve the inverse kinematics of manipulators, 'r' is described as the intersection point between the first and second twist axes. But how is this r actually found? Referencing Murray (here), on page 122.
I've been working on this Arduino-MPU6050 quadcopter for a while now, and it looks like it's close to being finished. I have programmed it in rate mode, so the PID's control the rotational velocity. Once I get those perfected I will write an outer set of positional PID's to control the rate ones. But anyway, I'm still having issues getting the drone perfectly stable, and it drifts around more than it should. Below you can see a screenshot of a program I wrote, which shows the angular velocity in blue and the angular position in pink: As you can see, the quadcopter is wobbling around pretty much randomly, and since there aren't any steady oscillations I'm guessing my PID is okay and that the instability is from something physical like vibration. Is this a reasonable assumption? I am looking for any suggestions/possible explanations for this instability, and guidance on what I should do next. Invest in ant-vibration foam? Revise my PID? I should also mention that I have not flown the quadcopter. I have it suspended using ropes attached to the legs of an upside down chair so that I can test one axis at a time. General List of thing's I've Tried: Modified the Arduino Servo.h to update the ESC's more frequently. Changed the precision of my gyro from default 1/16.4 deg to 1/65.5 deg. Balanced the props with bits of electrical tape. Adjusted PID gains. PID sample rate set to 3ms (333hz). EDIT Here is a snippet of my PID: if (millis() - updateTimerPID >= sampleMillisPID) { if (thrust <= 1200) { // Don't turn on PID until sufficient throttle is reached. NWPower = thrust; // NEPower = thrust; // If PID not activated, set motors to base-throttle. SWPower = thrust; // SEPower = thrust; // inAutoRoll = false; // Roll-Axis PID not activated. inAutoPitch = false; // Pitch-Axis PID not activated. inAutoYaw = false; // Yaw-Axis PID not activated. I_rollRate = 0; // Reset roll rate integral term. I_pitchRate = 0; // Reset pitch rate integral term. I_yawRate = 0; // Reset yaw rate integral term. } else { //PID is active - adjust Roll/Pitch/Yaw. adjustRollRate(); // adjustPitchRate(); // adjustYawRate(); } updateTimerPID = millis(); // Reset PID timer. } void adjustRollRate () { float offset = requestedRollRate - rollRate; //How far off from the wanted roll angular velocity. I_rollRate += KrI * offset; // Adjust roll rate integral term. if(!inAutoRoll) { // Did the PID just turn on? lastRollRateOffset = offset; // The previous offset is set to the current one (D-term is now zero for this instance). inAutoRoll = true; // The PID is now on. } float adjust = (KrP * offset) + I_rollRate + (KrD * (offset -lastRollRateOffset)); // Motor power adjust value. NWPower += adjust; // NEPower -= adjust; // Adjust the motor powers. SWPower += adjust; // SEPower -= adjust; // if (NWPower > maxOutPID)NWPower = maxOutPID; else if (NWPower < minOutPID)NWPower = minOutPID; if (NEPower > maxOutPID)NEPower = maxOutPID; else if (NEPower < minOutPID)NEPower = minOutPID; if (SWPower > maxOutPID)SWPower = maxOutPID; else if (SWPower < minOutPID)SWPower = minOutPID; if (SEPower > maxOutPID)SEPower = maxOutPID; else if (SEPower < minOutPID)SEPower = minOutPID; lastRollRateOffset = offset; // Remember the offset for next time. }
I'm looking for a good breakdown and explanation of Google's "Tango" AR platform, specifically how the hardware works together to generate depth maps, and how the SDK makes use of them. I know the hardware consists of a fisheye-lens camera and an RGB-IR camera. I am only familiar with stereo vision using identical cameras and disparity maps; I imagine the different lenses and camera elements make it easier to distinguish variations in the environment, but they must rely on some very special (and proprietary) algorithms. Is there also some special hardware or dedicated chipset for processing the depth map to take the burden off the CPU/GPU? Also, for the AR software implementation, I assume the SDK has some GPU utilization built into it like OpenCL or CUDA (but specific to the Adreno GPU). Does it simply use OpenCL (which is supported by the Adreno GPU), or does it have something proprietary from Google, similar to CUDA for the nVidia chipsets? Basis for the question: I work with OpenCV and am experimenting with stereo vision applications, but would like to move on to developing apps for specialized hardware, and this sounds like the right (maybe only?) platform.
First of all, sorry for the confusing question title; I am also confused about the concept. I have implemented a quadcopter and its controller. The controller finds the rotor speeds based on the position and yaw angle references. The thing that I don't understand is this: let's say I want the vehicle to climb 5 m up and then move 5 m left. At this point, I think I need to create a vector containing the reference values. By the way, the model is discretized with some deltaT time interval, and so is the reference vector. This does not coincide well with the dynamics of the vehicle. Say the reference input for altitude is 5 m until 5 s and 0 for [5, 10] s; it is not guaranteed that the vehicle will actually reach the 5 m altitude within 5 s. Thus, my intention is that the reference vector should not rely on time. My idea is to use some if condition to check whether the vehicle has reached the first waypoint and only then register the next one, which would be a simple if-else statement (see the sketch below). This makes me wonder what the mathematical or analytical background for this is. Is it just following the line between two waypoints by geometric analysis, like a line-of-sight guidance law? Can you give me some insight into the concept that confuses me?
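To make the "switch when reached" idea concrete, this is the kind of logic I have in mind (a rough sketch only, not tied to my actual model; the waypoint values and the acceptance radius are made up):

#include <math.h>

struct Waypoint { float x, y, z; };

Waypoint waypoints[] = { {0, 0, 5}, {0, -5, 5} };   // climb 5 m, then 5 m left (made-up frame)
const int numWaypoints = 2;
const float acceptRadius = 0.3;                     // made-up acceptance radius [m]
int wpIndex = 0;

// Called every control step with the current position estimate;
// returns the reference the controller should track right now.
Waypoint currentReference(float x, float y, float z) {
  Waypoint wp = waypoints[wpIndex];
  float d = sqrt((wp.x - x) * (wp.x - x) +
                 (wp.y - y) * (wp.y - y) +
                 (wp.z - z) * (wp.z - z));
  if (d < acceptRadius && wpIndex + 1 < numWaypoints) {
    wpIndex++;   // register the next waypoint only when this one is reached
  }
  return waypoints[wpIndex];
}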
(full disclosure: this is homework) I have a twist expressed in frame B: $\zeta_b = \begin{bmatrix}1\\3\\-2\\0\\-2\\4\end{bmatrix}$ And a general transformation matrix: $g_{ab} = \begin{bmatrix}-0.4749 & 0.8160 & 0.3294 & -1.5\\-0.2261 & -0.4749 & -0.8505 & -1\\-0.8505 & -0.3294 & 0.4100 & 2\\0 & 0 & 0 & 1\end{bmatrix}$ How would I go about converting my twist into frame A? I suspect I would break $\zeta_b$ into its component $\omega$ and $v$ vectors using the knowledge that: $\zeta = \begin{bmatrix}v \\ \omega\end{bmatrix} = \begin{bmatrix}-\omega \times q \\ \omega\end{bmatrix}$ (where $q$ is a point on $\omega$) But I am unsure.
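If it helps frame the question, the relation I have come across (but am not sure how to apply here) is the adjoint map of $g_{ab}$; for the $[v;\,\omega]$ twist ordering I believe it is
$$\zeta_a = \mathrm{Ad}_{g_{ab}}\,\zeta_b, \qquad \mathrm{Ad}_{g_{ab}} = \begin{bmatrix} R & \hat{p}\,R \\ 0 & R \end{bmatrix},$$
where $g_{ab}=\begin{bmatrix}R & p\\ 0 & 1\end{bmatrix}$ and $\hat{p}$ is the skew-symmetric matrix of $p$. Is this the right tool here, and am I applying the convention correctly?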
As we know, we can calculate altitude from barometer readings, and a UAV can hold its height by referring to these data. However, the real air pressure varies with many conditions, and this variation will make the UAV's height estimate unstable. Is it possible to avoid height drift due to pressure changes by using sensor fusion, without GPS?
I have a two-port power bank capable of supplying a maximum of 5 V at 2.1 A. I'm using it to power an Arduino and an L293D IC connected to two DC motors which have the following specifications: working voltage 3 V to 9 V, no-load current 60 mA, stall current 700 mA. The setup is not working, and I have made the following observations:
1) The voltage across the output terminals of the power bank reads 4.9 V when the Arduino is not powered (either from the other port of the power bank or from another supply altogether).
2) The voltage across the output terminals of the power bank reads 4.0 V when the Arduino is powered, either from the other port of the power bank or from my laptop. This voltage is given to L293D pin 8 (which is meant to be given 12 V). A 12 V-to-5 V buck converter provides the 5 V to the IC itself (this is part of the motor driver board).
3) When only one motor is switched on, the voltage provided to that motor is 2.8 V. The motor rotates only when given some manual force on the axle.
4) When both motors are switched on, the voltage provided to each motor is around 0.5 V. Neither motor rotates at all.
5) When only one motor is switched on, the voltage reads around 0.2 V until I manually rotate the axle, after which it picks up speed.
I couldn't measure the current (or rather, measured the current to be 0 A) anywhere, as the motors don't rotate at all if I connect the ammeter. I understand that the power bank supplies power only when the load connected to it demands the power (due to smart sensing). What should I do so that the power bank continually provides 5 V and this 5 V is delivered to each motor?
I am Carlos Barreiro and I am studying for a Robotics Master's; I am now working on my thesis. The project consists of the teleoperation of a robot with a Kinect (model 1). More specifically, I am working with the humanoid robot Pepper, which is developed by Aldebaran (SoftBank). For the skeleton tracking in real time I am using the Kinect for Windows SDK (v1.8), because with the Kinect 1 I can't obtain skeleton tracking on Linux :-(. It is not a problem; I think I can continue with Windows for some time. For this project I use Python because I have more experience with this language and it is easier to communicate with NAOqi (the robot middleware). For the communication with the Kinect I am using the PyKinect library, which is a wrapper around the C++ Windows SDK. My problem is calculating the rotation angles for the arm actuators. The robot needs to translate the positions of the skeleton points into an angle for each motor (pitch, roll or yaw), as in the picture below. So I need to get the shoulder pitch, shoulder roll, elbow roll and elbow yaw. The skeleton that I get from the Kinect gives me the 3D point and the quaternion of each joint (shoulder, elbow, ...). I am trying different ways, but the results could be improved a lot.
CASE A: I am using the joint 3D positions to calculate the angle between two points. For example, the robot's shoulder has two actuators: the shoulder roll and the shoulder pitch. The function below takes two arguments, in this case the 3D position of the shoulder and of the elbow, and then I calculate the angle about each axis. Code:

def angulosXplano(puntoA, puntoB):
    def calcularAngulo(uno, dos):
        # Compute the angle
        rads = math.atan2(-dos, uno)
        rads %= 2 * math.pi
        degs = math.degrees(rads)
        return degs

    dx = puntoB.x - puntoA.x
    dy = puntoB.y - puntoA.y
    dz = puntoB.z - puntoA.z

    yaw = calcularAngulo(dx, dy)
    roll = calcularAngulo(dy, dz)
    pitch = calcularAngulo(dx, dz)

For calculating the shoulder pitch angle of the robot I use the roll angle (that I got from the function angulosXplano), and for the shoulder roll I use the pitch angle. The reason for calculating the angles this way is that I get better results than if I calculate the shoulder pitch with the pitch angle and the shoulder roll with the roll angle. The angular movement for the shoulder roll is good, but the shoulder pitch is only moderate, and the elbow roll movement is the worst because the shoulder movement affects the elbow movement.
CASE B: I have also tried to get the shoulder pitch and roll and the elbow yaw and roll from the quaternions of the Microsoft SDK. In this case I tried the quaternion for the elbow, which I obtain like this:

data.calculate_bone_orientations()[JointId.WristRight].hierarchical_rotation.rotation_quaternion

For the elbow I use WristRight because I read that the orientation of a joint depends on the previous joint. After obtaining this quaternion, I convert it to Euler angles; the next piece of code is a method of a class that I have developed. The class has the qw, qx, qy and qz quaternion parameters.
def quaternion2euler(self):
    q = self
    qx2 = q.x * q.x
    qy2 = q.y * q.y
    qz2 = q.z * q.z
    test = q.x * q.y + q.z * q.w
    if (test > 0.499):
        roll = math.radians(360 / math.pi * math.atan2(q.x, q.w))
        pitch = math.pi / 2
        yaw = 0
    elif (test < -0.499):
        roll = math.radians(-360 / math.pi * math.atan2(q.x, q.w))
        pitch = -math.pi / 2
        yaw = 0
    else:
        roll = math.atan2(2 * q.y * q.w - 2 * q.x * q.z, 1 - 2 * qy2 - 2 * qz2)
        pitch = math.asin(2 * q.x * q.y + 2 * q.z * q.w)
        yaw = math.atan2(2 * q.x * q.w - 2 * q.y * q.z, 1 - 2 * qx2 - 2 * qz2)
    return [roll, pitch, yaw]

If I use the yaw for the robot's elbow roll, the results for the elbow roll improve a lot compared to the previous method. But I can't find the angle for the robot's elbow yaw.
FUTURE CASE C: My next step would be to try case A but with vectors instead of 3D points, for example vector A (mid-shoulder to right shoulder) and vector B (right shoulder to elbow). But I have not developed anything yet. Any help to improve the code, some bibliography, or any better idea would be welcome.
I'm working with a robot intended to be used in a tele-echography environment. To control the robot I'm using a 6D space mouse that controls each degree of freedom of the robot. However, since the rotation is done at the end effector, the end user has difficulty understanding where to move the mouse in order to produce the desired motion, since the end effector's reference frame is constantly changing. So I'm thinking of producing a graphical representation of the motion of the robot in real time while the user controls it. The robot comes with many APIs to control it and to get sensor data. I'm currently using Qt Creator (C/C++) to send the mouse's commands to the robot, so I would like to integrate some kind of simulator into my program. What do you recommend as a C++ package or program to accomplish this? Thanks
I have seen videos of robots picking items and placing them somewhere in order. Here are some examples: https://www.youtube.com/watch?v=wg8YYuLLoM0 and https://youtu.be/ggFdvUlp8YU?t=38. What are these types of manipulators called, and where can I find schematic illustrations explaining their working principles?
So I'm trying to incorporate pose graph optimization into a mapping framework using a lidar. I basically have all the relative transformations between the point clouds, and I have pairs of point clouds that satisfy my place recognition algorithm, so I know which poses to close the loop with. Now the questions I have, given that I only have these relative transformations, are: (1) How do I calculate the error term where $\hat{z}$ is the ground truth, since I only have one set of measurements, namely my R, t estimates from consecutive point clouds? (2) How do I close the loop using g2o? (3) What will my information matrix be? Isn't it supposed to be a property of the sensor itself? Thank you.
I am building an RC/robot mower. Most of the YouTube videos show Sabertooth motor controllers being used to connect the RC receiver to the DC motors, but here in Australia the Sabertooth I need costs about 200 dollars, while RC ESCs sell for about 10 dollars. What is the difference between an ESC and a Sabertooth, and can I use an ESC instead? The specs I have for the motor and battery are 12 V, 10 A nominal and 35 A stall, and my RC gear is a FlySky.
I'm currently programming in RobotC, for a Vex 2.0 Cortex. I'm using encoders to make my robot go straight. This is my code:

#pragma config(I2C_Usage, I2C1, i2cSensors)
#pragma config(Sensor, dgtl2,  , sensorDigitalIn)
#pragma config(Sensor, dgtl7,  , sensorDigitalOut)
#pragma config(Sensor, I2C_1,  , sensorQuadEncoderOnI2CPort, , AutoAssign)
#pragma config(Sensor, I2C_2,  , sensorQuadEncoderOnI2CPort, , AutoAssign)
#pragma config(Motor,  port1,  RM, tmotorVex393_HBridge, openLoop, reversed, encoderPort, I2C_2)
#pragma config(Motor,  port10, LM, tmotorVex393_HBridge, openLoop, encoderPort, I2C_1)
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

/* Port 1 is right motor */
// all functions expect no reverse motors (in port menu)

//**GLOBAL VARIABLES**
int buttonSTATE = 0;

//**MOVE FUNCTIONS**
void goforwards(int time)
{
  int Tcount = 0;
  int speed1 = 30;
  int speed2 = 30;
  int difference = 5;
  motor[LM] = speed1;
  motor[RM] = speed2;
  while (Tcount < time)
  {
    nMotorEncoder[RM] = 0;
    nMotorEncoder[LM] = 0;
    while (nMotorEncoder[RM] < 3000)
    {
      int REncoder = -nMotorEncoder[RM];
      int LEncoder = -nMotorEncoder[LM];
      if (LEncoder > REncoder)
      {
        motor[LM] = speed1 - difference;
        motor[RM] = speed2 + difference;
      }
      if (LEncoder < REncoder)
      {
        motor[LM] = speed1 + difference;
        motor[RM] = speed2 - difference;
      }
      wait1Msec(100);
    }
    Tcount++;
  }
}

//**CONTROL STRUCTURE:**
task main()
{
  goforwards(1);
}

When I execute the code, the robot's encoder values are very close, but the robot quickly starts to veer to the left. What are possible causes of this? Is it something in the code?
I'd like to know how to form my pose graph when the only information I have available comes from a camera. (1) What are my poses? Are they just the accumulated transformations from pairwise matching? (2) What are my edges? (3) If I have already detected the loops, how would I perform the loop correction? Thank you.
I want to control this solenoid from an Arduino, but I am confused about which transistor, resistor and diode to choose. I have seen a lot of tutorials about controlling a solenoid from an Arduino, but all of them are for 12 V solenoids or use relays, which I don't want to use. I will be using 6 of these solenoids for my project.
I am using the gnss-sdr library to compute ephemeris from the GPS message, and to try to make sense of things, I am reading the well-known IS-GPS-200E specification. To compute ephemeris, the time from the ephemeris reference epoch is defined as (page 101, table 20-IV) $$t_k=t-t_{oe}.$$ I am unsure about how $t$ is defined, and I find the specification unclear on that point. Rummaging in the source code of the aforementioned library, I found that $t$ seems to be computed as follows: $t=t_{x}-b$, where $b$ is the satellite clock bias, and $t_x=R_x- (\text{pseudorange})/c$, where $c$ is the speed of light. However, I have so far been unable to find out what $R_x$ exactly refers to. It seems to correspond to the "time of week at current symbol", but there is no documentation or precision on that. I suppose that an expert could very simply deduce what $R_x$ is just from the formulas, though. So my question is: what is $R_x$? What time system is it expressed in (satellite time? GPS time? receiver time?). And if someone could explain to me what those formulas are doing, or give me pointers, I'd be extremely grateful.
I'm trying to make swarm robots that use 8 IR LEDs and 8 photodiodes arranged alternately along the circumference of a circular body to determine the range and bearing of other nearby swarm robots (similar to Rice University's r-one). Each IR LED and photodiode is wired as shown below. The IR LEDs and photodiodes on one robot are separated by an opaque object. The intention is that when a high analog value is read from certain photodiode(s) of the 8 present, another robot's relative range and bearing can be estimated. The problem is that a high analog value is read even when the robot is near an obstacle, because of the reflected infrared light from its own LEDs. Is there any way for a robot to determine whether a high analog reading is due to another robot or to an obstacle? Thanks in advance!
I need to use a continuously rotating servo for a camera stabilization system. My professor bought servos that have already been modified for continuous rotation: there's no stop in the gears, and the potentiometer allows it to spin 360+ degrees. I am currently using PWM with an Arduino Uno. The servo does spin continuously, but not in a stable way. I've also taken the potentiometer out of another one of the servos, and on a third servo I used a voltage divider in place of the potentiometer. I've tried static values and a "sweep" from 0% duty cycle to 100% to get a feeling for how they work, but I just cannot figure it out. I would greatly appreciate any tips on this. Here is my code:

// PWM test for continuous rotation servo
int servoPin = 9;  // connect servo to pin 10
int pwmVal = 0;    // declare pulse width modulation value

void setup(void) {
  pinMode(servoPin, OUTPUT); // set up the servoPin as an output pin
  Serial.begin(9600);        // begin serial monitor
}

void loop(void) {
  // for loop that sweeps values from 0 to 255
  for (pwmVal = 0; pwmVal <= 253; pwmVal += 1) {
    analogWrite(servoPin, pwmVal);
    Serial.println(pwmVal);
    delay(100);
  }
  for (pwmVal = 253; pwmVal >= 0; pwmVal -= 1) {
    analogWrite(servoPin, pwmVal);
    Serial.println(pwmVal);
    delay(100);
  }
  // assign a static pwm value
  pwmVal = 0;
  analogWrite(servoPin, pwmVal);
}
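For comparison, the other approach I've been considering is to drive the servo through the Servo library with explicit pulse widths instead of raw analogWrite PWM (a minimal sketch only; I'm assuming roughly 1500 µs is the neutral/stop point and that values above or below it set speed and direction, which may not hold for my modified servos):

#include <Servo.h>

Servo contServo;

void setup() {
  contServo.attach(9);   // signal wire on pin 9
}

void loop() {
  contServo.writeMicroseconds(1500);   // assumed neutral point (stop)
  delay(2000);
  contServo.writeMicroseconds(1600);   // slow rotation in one direction
  delay(2000);
  contServo.writeMicroseconds(1400);   // slow rotation in the other direction
  delay(2000);
}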
I have installed and am trying to run a TurtleBot package using Gazebo and roslaunch. The installation seems to have gone fine, and I am now following the first tutorial, which just explains how to get the simulation started. The tutorial can be found here: http://wiki.ros.org/turtlebot_gazebo/Tutorials/indigo/Gazebo%20Bringup%20Guide
I entered the command:

source /opt/ros/indigo/setup.bash

That seemed to go fine; there were no errors. Then I entered the command:

roslaunch turtlebot_gazebo turtlebot_world.launch

which resulted in the following error log:

while processing /opt/ros/indigo/share/turtlebot_gazebo/launch/includes/kobuki.launch.xml:
Invalid tag: Cannot load command parameter [robot_description]: command [/opt/ros/indigo/share/xacro/xacro.py '/opt/ros/indigo/share/turtlebot_description/robots/kobuki_hexagons_asus_xtion_pro.urdf.xacro'] returned with code [1].
Param xml is <param command="$(arg urdf_file)" name="robot_description"/>
The traceback for the exception was written to the log file

I tried asking on the ROS Answers website first but got no answer, so I'm hoping the good people of Stack Exchange can help me figure out what is causing this problem. Additional information: the version of ROS I have installed is Indigo, and I'm on Ubuntu 14.04.
I am controlling a 6-DOF robot. For this, I want to compute the Coriolis matrix. From my study of examples, I understand there are several ways of going about it. At the moment, I am using the approach based on "A Lie Group Formulation of Robot Dynamics" [p. 615] by Park et al., but this is not computationally efficient and my simulations are very slow. Based on my study of several projects on GitHub, I understand many people choose a symbolic approach. I was also considering numerical differentiation to get the Christoffel symbols (see the formula below). I would like to seek some guidance regarding the pros and cons of the different methods, and I would really appreciate any references that I can use to study.
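For concreteness, the Christoffel-symbol route I mean is (as I understand it from the standard texts)
$$C_{ij}(q,\dot{q}) = \sum_{k=1}^{n} \Gamma_{ijk}\,\dot{q}_k, \qquad \Gamma_{ijk} = \frac{1}{2}\left(\frac{\partial M_{ij}}{\partial q_k} + \frac{\partial M_{ik}}{\partial q_j} - \frac{\partial M_{kj}}{\partial q_i}\right),$$
where $M(q)$ is the joint-space inertia matrix; numerical differentiation would approximate the partial derivatives of $M(q)$.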
What exactly is the difference between the two terms above? From some of the papers, I gather that force closure depends on frictional forces. Is that correct? Suppose I grasp a cylindrical object with my hand: which type of closure would that be considered?
I'm using a 1000 kV brushless motor and a SimonK 30 A ESC. I power the motor using the Arduino's 9 V AC/DC adapter. When I try to run the motor using the Sweep example, the motor will only run when the value is between 170 and 180. Why won't values below 170 run the motor? Is my ESC broken? The model of the BLDC is A212/3T (1000 kV) and the ESC is a SimonK 30 A. I'm using the Servo library to control the speed.
I have a 2x4 array in MATLAB which may contain integer values as well as decimal values, for example: [1.1, 23, 1.56, 5.29; 2.14, 2.39, 67, 4.001]. I have to send these values to an Arduino from MATLAB. How do I do that? I know how to send integer values to the Arduino from MATLAB, but it is not working with decimal values. The MATLAB code to send integer values is below:

portName = 'COM5';
s = serial(portName,'BaudRate',9600,'Terminator','LF');
s.timeout = 1;
try
    try
        fopen(s);
    catch
        delete(s);
        fopen(s);
    end
catch
    disp('Unable to open the port ');
end

angle = [1.3,2];
dataOut = angle;
dataOut_ = char(dataOut);
fprintf(s,'%d',dataOut_);

The Arduino code is given below:

int d1, d2;
char d[4];

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  pinMode(13, OUTPUT);
}

void loop() {
  // put your main code here, to run repeatedly:
  if (Serial.available() > 0) {
    for (int i = 0; i < 3; i++) {
      d[i] = Serial.read();
    }
    d1 = d[0] - '0';
    if (d1 == 1.3) {
      digitalWrite(13, HIGH);  // turn the LED on (HIGH is the voltage level)
      delay(2000);             // wait for a second
      digitalWrite(13, LOW);   // turn the LED off by making the voltage LOW
      delay(1000);             // wait for a second
    }
  }
}
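One direction I've been thinking about (I'm not sure it is the right way) is to send the numbers as comma-separated text from MATLAB, e.g. with something like fprintf(s,'%.3f,',dataOut), and parse them on the Arduino side with Serial.parseFloat(). A rough sketch of the Arduino end:

// Reads 8 comma-separated floats (the 2x4 array sent element by element) from serial.
float values[8];

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    for (int i = 0; i < 8; i++) {
      values[i] = Serial.parseFloat();   // blocks until a float arrives (or times out)
    }
    // Example check: blink if the first value is close to 1.1
    if (fabs(values[0] - 1.1) < 0.001) {
      digitalWrite(13, HIGH);
      delay(500);
      digitalWrite(13, LOW);
    }
  }
}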
I am working with a Raspberry Pi which has some positional sensors, and I manage it from my mobile with an app I am developing. Now I am trying to understand how to implement this algorithm in code, but I don't really know how to start, so it would be really great if someone could help me with some starting code or similar, because I can't find an implementation of this algorithm.
I have been using ArduPilot on drones for a while, and I don't know exactly what it does. I know it keeps a drone level, lets us set waypoints and automatically fly through them, etc. Is that it? If so, why is the Pixhawk so expensive? Don't you just need a cheap IMU and GPS with a $5 Pi Zero? I might be mixing up the Pixhawk's hardware and ArduPilot, but what do they each do individually, and how do they do it? Is it just hard-coded to add more thrust to a few motors if the vehicle is tilted and to use GPS to go to a location, or is there more to it?
Disclaimer: I am a beginner, both to this forum and to robotics. I work in IT, and the guys in my department have decided we would like to build a robot for the office, as a sort of hobby/team-building exercise. Our goal is to create a robot that is spherical, like BB-8, but that randomly navigates the room like a Roomba. I've seen examples of BB-8-like robots online before, but all the ones I have found used a remote to manually control their movements. This seems like a difficult first project, and personally I would like to try something more basic to begin with, but I figured I might as well look at the feasibility of the project before I rain on their parade. The way I see it, there are two possible ways we could go about this:
1. Use an iRobot Create 2 and somehow adapt it to a spherical body.
2. Start from scratch on a BB-8 robot and write a program that mimics a Roomba's behavior (I have seen several examples of this online using Arduino and Raspberry Pi).
My question is: how difficult is it to write a program mimicking the Roomba's behavior? If it is very difficult, then perhaps I should simply buy a Create 2 and go from there. Sorry if this is a broad question. If it is not appropriate for this forum, please direct me to a more suitable forum for beginners in robotics who have stupid questions like mine :p
Can anyone explain to me in detail what an industrial robotic arm controller does? What are its components? Does industry use open-source controllers like the Arduino? I've noticed that most industrial controllers look very big, whereas hobby robot arms have small controllers, mostly built around an Arduino. Also, if I were to build one myself, what are the topics that I would need to learn?
I am really new to the topic. There doesn't seem to be a lot of overlap between industrial robotics and hobby robotics (at least in certain areas like control, etc.); please correct me if I am wrong. I actually tried going through the Fanuc website, and most of the content is restricted. I would like to know if there is any course on how to operate industrial robots, on their PLC programming, or any application-specific course, etc.
Is monocular visual odometry able to estimate relative scale? Say I have a sequence of 10 images taken on a single track, each 1 m after the previous one. Can some mono odometry method distinguish relative scale when it processes image pairs that are at various distances from each other? I mean, when processing the 1st vs the 10th image and the 9th vs the 10th image, will the first pair give 10x the relative scale of the second? I am examining OpenCV-based odometry code (https://github.com/avisingh599/mono-vo), but it only gives something like a "translation vector" that always has a magnitude of 1, regardless of the distance travelled. I know mono odometry cannot recover absolute scale, but I thought it could do relative scale (the question is what "relative" actually means here). It seems like OpenCV's recoverPose only produces a translation vector that always has the same magnitude (I guess the magnitude is 1)?
I am working on a ground surveillance robot based on an Arduino Mega. I am using components like the HMC5883L compass and an Adafruit GPS for assigning coordinates (latitude and longitude), which are the waypoints. I have written the code for both the compass and the GPS and am able to get information from them, but now I want the robot to move to those specified coordinates (latitude/longitude waypoints), and I don't know how to do that. If anyone could write an example for me or point me to the right place to find a sample, I would appreciate it. Please pardon me for asking such a question; I am new to coding with GPS and a compass. Please find my code here: http://textuploader.com/drqwv
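To be clearer about what I am trying to do, this is the kind of bearing calculation I think I need, based on the standard great-circle bearing formula (a sketch only; the names are placeholders and not from my linked code):

#include <math.h>

// Bearing (degrees, 0 = north, clockwise) from the current position to a waypoint.
float bearingToWaypoint(float lat1, float lon1, float lat2, float lon2) {
  float phi1 = lat1 * M_PI / 180.0;
  float phi2 = lat2 * M_PI / 180.0;
  float dLon = (lon2 - lon1) * M_PI / 180.0;
  float y = sin(dLon) * cos(phi2);
  float x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLon);
  float bearing = atan2(y, x) * 180.0 / M_PI;
  if (bearing < 0) bearing += 360.0;   // normalize to 0..360
  return bearing;
}

// Steering idea: compare this bearing with the HMC5883L heading and turn
// toward whichever side gives the smaller heading error.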
Whenever I move the servo motors from the Arduino, they produce a sound. I don't want the servos to make this sound; how can I remove it? I think the sound must be coming from the gears inside the servo motor, but how can I silence it?
I have a problem with the Orocos KDL IK solver, especially with the KDL::Rotation matrix input. I try to call my KDL IK solver with a position vector and a rotation. If I use the example values for the rotation, everything goes well, but when I tried to use my "own" orientation, the IK solver didn't find a solution.

KDL::Vector vektor(0.457101, -0.629513, -0.627317);
KDL::Rotation rotation(1.0, 0.0, 0.0,
                       0.0, 1.0, 0.0,
                       0.0, 0.0, 1.0);  // my values
KDL::Frame end_effector_pose(rotation, vektor);
rc = kdl_solver.CartToJnt(origin, end_effector_pose, result);

As you can see, it's a simple identity matrix (no rotation), so in my opinion it should work. Anyway, if I try to call it with any other rotation matrix, it doesn't find a solution. Just in the case of

KDL::Rotation rotation(-0.0921467, 0.939391, 0.330232,
                        0.925128, 0.203433, -0.32055,
                       -0.368302, 0.275969, -0.887803);

it terminates with a valid solution. These values are test values from my robot. Do I have a wrong understanding of the rotation matrix? Thank you very much for your time. Devel

EDIT: With .msg communication it works, and I have no idea why. Does anyone know how the following lines construct the rotation matrix?

geometry_msgs::PoseStamped pose_msg_in = pose_stamp;
tf::Stamped<tf::Pose> transform;
tf::Stamped<tf::Pose> transform_root;
tf::poseStampedMsgToTF(pose_msg_in, transform);
tf_listener.transformPose(root_name, transform, transform_root);
KDL::Frame F_dest;
tf::transformTFToKDL(transform_root, F_dest);
int ik_valid = ik_solver_pos->CartToJnt(jnt_pos_in, F_dest, jnt_pos_out);

Or is the matrix related to something else? I get the feeling that I am missing an important piece of information.

SOLVED: Sorry for the delay. Shahbaz's answer was totally right; I simply overestimated the capabilities of my robot. The position was not reachable. After using MoveIt (visualization), it became clear that the orientation is not possible for the robot at that position (x, y, z). THANKS
I am trying to reproduce the experiments in the paper "Time-Domain Passivity Control of Haptic Interfaces" by Hannaford and Ryu, 2002, IEEE Transactions on Robotics and Automation, Vol. 18, No. 1, specifically the simulation in Fig. 8, whose results are Figs. 9 and 10. I don't know the exact model that I can redraw in MATLAB/Simulink to reproduce Fig. 9. I tried drawing the spring and damper as in Fig. 6 of the same paper, but I don't know what my input should be (force, velocity or position), nor the type of input signal (sine, step, ...) needed to get results similar to the paper's. I have added two images: one is the model I drew and the other is the position signal. I am really confused about the initial condition of the position: should I set an initial condition on Discrete-Time Integrator1, or add the difference of the two adders into the position? Here damping = 0 (I want to check the case without the passivity controller), k = 30000, and the initial condition I added to Discrete-Time Integrator1 is 50.
I have a standard VEX Clawbot, which I've been trying to make go straight for some time. I've been following this guide: http://www.education.rec.ri.cmu.edu/products/cortex_video_trainer/lesson/3-5AutomatedStraightening2.html This is my code:

#pragma config(I2C_Usage, I2C1, i2cSensors)
#pragma config(Sensor, I2C_1,  , sensorQuadEncoderOnI2CPort, , AutoAssign)
#pragma config(Sensor, I2C_2,  , sensorQuadEncoderOnI2CPort, , AutoAssign)
#pragma config(Motor,  port1,  leftMotor,  tmotorVex393_HBridge, openLoop, driveLeft, encoderPort, I2C_1)
#pragma config(Motor,  port10, rightMotor, tmotorVex393_HBridge, openLoop, reversed, driveRight, encoderPort, I2C_2)
//*!!Code automatically generated by 'ROBOTC' configuration wizard !!*//

void GOforwards()
{
  nMotorEncoder[rightMotor] = 0;
  nMotorEncoder[leftMotor] = 0;
  int rightEncoder = abs(nMotorEncoder[rightMotor]);
  int leftEncoder = abs(nMotorEncoder[leftMotor]);
  wait1Msec(2000);
  motor[rightMotor] = 60;
  motor[leftMotor] = 60;
  while (rightEncoder < 2000)
  {
    if (rightEncoder > leftEncoder)
    {
      motor[rightMotor] = 50;
      motor[leftMotor] = 60;
    }
    if (rightEncoder < leftEncoder)
    {
      motor[rightMotor] = 60;
      motor[leftMotor] = 50;
    }
    if (rightEncoder == leftEncoder)
    {
      motor[rightMotor] = 60;
      motor[leftMotor] = 60;
    }
  }
  motor[rightMotor] = 0;
  motor[leftMotor] = 0;
}

task main()
{
  GOforwards();
}

I am using integrated encoders. When I run the code, the robot runs without stopping and the encoder values diverge quickly. This is a video of the code running with the debugger windows open: https://www.youtube.com/watch?time_continue=2&v=vs1Cc3xnDtM I am not sure why the power to the wheels never changes, or why it seems to believe that the encoder values are equal, much less why it runs off into oblivion when the code should exit the while loop once the right encoder's absolute value exceeds 2000. Any help would be appreciated.
I know this question is somewhat off-topic, but while working, it suddenly came into my head. My point is that there should not be any difference between a voltage source and a current source, because the two are dependent on each other: if there is a potential difference, then there will be a current flow; similarly, if there is a flow of current, then there must be a voltage difference. Isn't that so? Please clarify the topic for me.
I have been reading this paper (https://arxiv.org/pdf/1509.06113.pdf), which is about control of a robotic arm. They learn a mapping from robot state to robot control, where the state is the positions and velocities of the arm's joints, and the control is the joint torques. One passage I am struggling to understand is: Since we use torque control, the robot and its environment form a second-order dynamical system, and we must include both the joint positions and their velocities. (I've edited this slightly for readability, but effectively it is the same). Please can somebody explain what this means? What is a second-order dynamical system? And why does this mean that velocities are required as part of the state? Thanks!
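For context, I assume the model they mean is the standard rigid-body arm dynamics
$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau,$$
which is second order in the joint positions $q$, so presumably the state has to include both $q$ and $\dot{q}$ for the applied torque $\tau$ to determine the future evolution. Is that what the passage is getting at?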
Lately I've been thinking about 6-axis robots and noticed that all the examples I've seen on the internet use the same configuration of axes: a vertical-axis waist (1), a horizontal-axis shoulder (2), and then axes that are perpendicular (3), parallel (4), perpendicular (5) and parallel (6) to the previous link. This means at least three different types of joints: one for the waist (1), one perpendicular type (2, 3 and 5) and one parallel type (4 and 6). I was wondering whether a comparable range of motion can be achieved with only one type of joint in addition to the waist. My idea is to have only axes perpendicular to the previous segment (like 2 and 3 in the picture), but instead of consecutive pairs of axes being parallel to each other, have them twisted by some fixed angle (45 degrees?) relative to each other. Would an idea like this work? Would it have any significant disadvantages? Is there some general method to visualise which positions can be reached with a given configuration of axes?
I am using Hitec servo motors in my 6-DOF robotic arm. I am going to run an open-loop response and compare it with a simulation based on a transfer function in MATLAB, but I could not find the parameters that I need for the transfer function, for example the moment of inertia, damping, electrical resistance, electrical inductance and back-EMF constant. What I did find on the data sheet are the operating speed, output torque, idle current, running current and dead bandwidth. How can I relate all of these to get the parameters that I need to develop my transfer function?
Is there a formula to compute the field of view of a robot when it is obstructed by other objects, as in this picture? When the field of view is a full circle, it should be $(x-c_1)^2 + (y-c_2)^2 < r^2$, where $(c_1, c_2)$ are the coordinates of the robot and $r$ is the range of the sensor. Thank you!
Searching for electronic gyros doesn't turn up what I am after. Rather than an instrument to measure rotation, what I want is a device that I can mount in a flying machine, to which I can apply a certain moment in order to have the flying machine react with the opposite moment, so as to control attitude. A classic physical gyroscope behaves like this, but I am unsure whether any are available with electronic controls. Is there a name for such devices that I can Google for? To clarify, this isn't asking for a product recommendation; I just want to know whether such a thing exists and what it would be called.