I am not very experienced with C++. Working with Python is much easier for me because of its large number of built-in mathematical functions, such as multidimensional array sorting and more. I went through the Udacity material on Motion Planning (Programming a Robotic Car) in Python and it seemed simple enough to implement. However, I would like to use an Arduino because of its simplicity and my need to interface other modules with it. Can someone suggest an easier way, or point me to resources that can help me code it in C++ for the Arduino? All the A* algorithm C++ source codes I've found online are either huge or difficult to understand. In my case, I'm planning to use a simple grid-based implementation of the A* algorithm. Thanks in advance.
I'm building a quadrotor robot and my supervisor has asked me to write a report on the IMU modules used in previous works; however, I couldn't find any such reports. Does anyone know how to find such reports, or how to find out which modules previous projects have used? It's really urgent. Thank you all.
We're participating in a contest with our robot, and one of the challenges is to scan a pattern and determine which pattern it is. There are 2 possible images (attached below), and for each of them the robot should place a cube in a specific place. We were thinking of using a colour sensor, but that's not precise enough. The robot is wired to a smartphone, so we also thought of taking a picture of the pattern and writing a Java program to interpret it (that's the language we've programmed the robot in), but we couldn't find any tutorials about that on the internet. So we will have 2 separate programs, and the only thing we can't work out is how to process an image so we can see which image we got and tell the robot to run the corresponding program. How could we actually do this? The images of the only 2 possible patterns:
I am currently in the process of writing a short sci-fi play about a team of researchers on Titan. With them, I'd like to have two four-legged robots (which would be actors in costumes). I'd like them to resemble Boston Dynamics' Spot or BigDog, with skinny legs and a bulky torso. My question is: would the small foot design of those robots work well in desert areas? Walking through and over dunes, sandy beaches, that kind of thing. What's the best foot design for a legged robot traveling in sand?
I have a controller board for a motor where I can read the motor current (to convert to torque), the position, and the velocity, and I can give a velocity command. There is not really a torque mode on this controller; that is, I can only command velocity and read torque, but I cannot give a torque reference. How can I use it to track a torque reference, and hence comply with the standard robotic equation of the type $\tau = m \ddot{q} + H(\dot{q},q) + G(q)$? That is, I get the $\tau$ value to send as a reference, but the controller board I have receives $\dot q$ and sends back $\tau$. Is there a way to still use the standard equation for control?
I am implementing a Kalman filter for the following situation. I have a camera set in a room that can detect the position and orientation of a marker (ArUco) in the room. Therefore I have the following frame transformations: What I want to filter with the Kalman filter is the position and orientation of the marker, $[x, y, z, \phi, \theta, \psi]$, in the room frame. I already have a prediction model for the marker (constant velocity). I am writing the observation model equations. I have the following relationships: $$ X_{observed} = X_{marker/camera} = R_{camera/room}^T \cdot X_{marker/room}$$ and $$R_{observed} = R_{marker/camera} = R_{camera/room}^T \cdot R_{marker/room}$$ With these expressions, I express the observation with respect to the estimated variables: $$X_{marker/room} = (x,y,z)^T$$ $$R_{marker/room} = eul2mat(\phi, \theta, \psi)$$ However, the function $eul2mat$ and the matrix multiplication introduce some non-linearity, which forces me to use an Extended Kalman Filter. I can still figure out the math of this, but it becomes too complicated for what I'm trying to solve. Of course, if I look at the problem differently and only try to filter the pose of the marker in the camera frame, then the equations are much simpler (a lot of identity matrices appear) and the system is linear. So here is my question: is there a way to make the equations of this system simpler? PS: This is a simple case, where I don't really need to estimate the full transform (marker to room frame) directly. But there might be cases where I need to estimate the full transform, so that the state vector is available for further filtering.
I am using the Aldebaran/SoftBank Robotics Pepper with the Choregraphe software. There are some loan words in my dialogs. When Pepper says these words they often don't sound natural, e.g. the German word Demenz (eng: dementia): the first e is too long. I imagine some syntax like De[0.5]menz or a phonetic transcription [deˈmɛnʦ]. Is this possible?
I have a linear map, $J$, from one space to another. I can transform a covariance matrix, $P$, from one space to the other using $P' = JPJ^{T}$. However, in my situation I have an inverse covariance matrix, $P^{-1}$. The simplest way that I can think of to transform this is $P'^{-1} = (J(P^{-1})^{-1}J^{T})^{-1}$, which, using $(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$, should give $P'^{-1} = (J^{T})^{-1}P^{-1}J^{-1}$. However, when I try this in MATLAB I do not find that these two representations are equal: $(J(P^{-1})^{-1}J^{T})^{-1} = (J^{T})^{-1}P^{-1}J^{-1}$
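For reference, a minimal NumPy check of the two expressions (a sketch, not the original MATLAB test; it assumes $J$ is square and invertible, since the product-inverse identity requires every factor to be invertible):

import numpy as np

rng = np.random.default_rng(0)
n = 3
J = rng.standard_normal((n, n))              # linear map, assumed square and invertible
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)                  # a symmetric positive-definite covariance
P_inv = np.linalg.inv(P)

lhs = np.linalg.inv(J @ np.linalg.inv(P_inv) @ J.T)   # (J (P^-1)^-1 J^T)^-1
rhs = np.linalg.inv(J.T) @ P_inv @ np.linalg.inv(J)   # (J^T)^-1 P^-1 J^-1
print(np.allclose(lhs, rhs))                 # True for square, invertible J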
I have bounding boxes detected in RGB images, but no information about the camera's intrinsic parameters. Using deep learning (preferably), can I find the real-world distance (meters or cm) from the camera to the object? If I find the camera pose (using PoseNet), does that help?
We are working on a new agricultural rover, and we opted for a Pixhawk II + ROS + MAVLink platform as a good control alternative. However, our project is stuck because the Pixhawk seems to work only in the drone configuration: in drone mode, power is released when performing drone commands (the wheels start moving). Once the configuration is changed to 'Rover' mode, there is no power output at all, even after calibrating the sensors and the battery (with this configuration, no PWM signal is shown when hooking up an oscilloscope). We used QGroundControl and Mission Planner for the configuration, but neither worked. Has someone faced the same issue? We are quite sure this is a firmware-related problem, so any advice or troubleshooting strategy is really welcome! Thank you!
I have drawn a Simulink model of a quadcopter with virtual joystick input, and now I want to add wind, but I am a little confused about how to add wind to that model. My model takes the joystick input into the rotational and translational subsystems and gets its output from 3D Animation. I added a wind model to my rotational submodel, but someone told me that you cannot add x, y and z components directly to the pitch, roll and yaw. So I am trying to add the wind to the thrust/torque terms, i.e. adding the wind x component to the torque about x, the wind y component to the torque about y, and the wind z component to the torque about z, but I think there is a problem with units, since I am adding a force to a velocity, which does not make sense. I saw some papers in which the wind is added as force and moment components, but I am not clear on that idea. Can anyone explain how I should connect the wind components, and why I should add both force and moment components?
I've put a bunch of work into setting up a Flask web server to control a robot from an app, and I'm not too pleased with the results for driving. Touch-sensitivity issues are driving me nuts, and I'm looking to maybe try a slightly different route instead. The Flask server has a bunch of empty-page web requests (my term), where there is a page such as http://192.168.10.1/Forward with some Python code to control the motor controller. When you go to that URL the code executes in the background and the car goes forward. I have all the URLs mapped to buttons in an app, and it works until the screen thinks you let go, which happens constantly and is pretty aggravating. I've now set up a Python script with evdev to find and listen for joystick events, which works. I thought I could use something like this on each event: webbrowser.get('lynx').open('http://192.168.10.1/Forward') and it would go to the URL with Lynx... but sadly Lynx then wants you to hit Q to quit before you can send another joystick command. I guess there's a way to kill the process after each hit, but that seems messy. Is there another way I might be able to do something like this without installing a GUI, and without having to figure out how to make two scripts talk to each other in a different way altogether? Regards, Matt
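For illustration only, a minimal sketch of calling the same Flask endpoint straight from the joystick-event script with Python's standard urllib, so no text browser is involved (the device path and event handling here are assumptions, not taken from the original setup):

# Sketch: fire the existing Flask URL directly on a joystick button press.
from urllib.request import urlopen
from evdev import InputDevice, ecodes

dev = InputDevice('/dev/input/event0')       # assumed joystick device path

for event in dev.read_loop():
    if event.type == ecodes.EV_KEY and event.value == 1:   # button pressed
        urlopen('http://192.168.10.1/Forward').read()      # same URL the app buttons use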
I just completed a Raspberry Pi car setup, shown in the image below. With my car now assembled, I can run a simple Python script that puts the necessary GPIO pins HIGH or LOW and moves the car around successfully. Now I have added ROS to the mix, and I want to drive the car around using the Turtlebot teleop package. Roscore is launched, and when I execute rostopic list I get the following:
/rosout
/rosout_agg
Obviously, I am missing the /cmd_vel topic. How can I make ROS aware of the robot's mobile base in order to begin driving it around, please?
This may be a very basic question, but I haven't had much success finding an answer. I'm currently trying to solve the inverse kinematics of a 3-RRR Planar kinematic arm, whose purpose is to move between points on a graph. In most theoretical cases the (x,y) coordinates and the orientation of the end effector are known. However, in my case I only know (x,y) coordinates and frankly don't actually care about the orientation of the robot. My current approach has been to use DH-parameters to solve for each angle. This works, assuming I know the orientation of the end effector, which I do not. Any help will be greatly appreciated, thank you!
I am interested in this robot chassis: 4 wheel robot chassis. There is no information on how to install encoders, although the manual says it has space for encoders. The page says that the motors do not have a rear shaft, so I was wondering if I can use encoders on this chassis, and if so, how can they be mounted? I do not want to mount them on the wheels.
Could a small electric hot-air-balloon drone be made to stay in the air indefinitely? It would use an electric element for heating the air. Powered and tethered from the ground like a kite on a string, it would not need a battery.
I'm making a flight controller for a quadcopter. It is very stable when regulated by angular velocity, but it is horrible when regulated by angle. What are the potential problems when regulating a quadcopter by angle?

System description:

Input: I'm using an MPU6050 configured with a low-pass filter with a 185 Hz cut-off and 2 ms delay on both the 3-axis acceleration and angular velocity data. The angular velocity is filtered again with a low-pass filter (AV = AVn * alpha + AVn-1 * (1-alpha)), alpha = 0.8. The accelerometer angles are calculated using: pitch_a = atan2(x_a, sign(z_a) * sqrt(sqr(z_a) + sqr(y_a) * 0.001)) * 180/pi; roll_a = atan2(y_a, sqrt(sqr(x_a) + sqr(z_a))) * 180/pi; The angular velocity is integrated over the last fused angle: pitch_g = pitch + av.y * dt; roll_g = roll + av.x * dt; and the resulting angle is fused with the calculated accelerometer angle using a complementary filter with alpha = 0.98. This is done with new sensor values every 2 ms.

Regulation: I'm using a cascaded PI-P controller. The inner P controller's input is angular velocity and its output is the motor power difference. The outer PI controller's input is the absolute angle and its output is an angular velocity. The outer controller is disabled when controlling the quadcopter by angular velocity. The inner controller runs every 2 ms with an accuracy of 8 us and an output limit of +-70. The outer controller runs every 8.5 ms with an accuracy of ~100 us and an output limit of +-250. I'm using this library, slightly modified to run when I call it, output integers, use microseconds and zero the integral sum when at very low throttle. The setpoint is sent from the remote every 50 ms, slightly filtered.

Output: The output from the inner controller is added to and subtracted from the corresponding motors. The motor drivers are controlled with 8-bit 490 Hz PWM ranging from 127 to 254.

When controlling the quadcopter by angular velocity it flies stably with a P value ranging from 0.2 to 0.8, with noisy sensor input and even with one different motor. When controlling it by angle it behaves randomly: it starts to pitch or roll away from level orientation, it seems to respond well to a step input when it has to pitch or roll, but it returns to level very slowly and overshoots... The angle control feels like a very bad angular velocity control (even though the angle input is cleaner and it shouldn't have a noticeable delay based on the fusion equation). The P range I have tried is approximately 0.5 to 2.0 and the I range is approximately 0 to 0.3. The inner controller runs 4.25 times faster than the outer controller. What can be the problem?
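To make the structure above concrete, here is a rough sketch of the fusion and cascaded angle-to-rate loop it describes (gain names and helper functions are placeholders; this is an illustration of the pipeline, not working flight code):

import math

DT = 0.002            # 2 ms inner-loop period, as described above
ALPHA_FUSE = 0.98     # complementary-filter weight, as described above

def lowpass_rate(av_new, av_prev, alpha=0.8):
    """Extra low-pass on the gyro rate."""
    return alpha * av_new + (1.0 - alpha) * av_prev

def fuse_pitch(pitch_prev, gyro_rate_y, ax, ay, az):
    """One complementary-filter step for the pitch angle (degrees)."""
    pitch_acc = math.degrees(math.atan2(
        ax, math.copysign(1.0, az) * math.sqrt(az * az + 0.001 * ay * ay)))
    pitch_gyro = pitch_prev + gyro_rate_y * DT
    return ALPHA_FUSE * pitch_gyro + (1.0 - ALPHA_FUSE) * pitch_acc

def cascaded_step(angle_sp, angle, rate, kp_outer, ki_outer, i_state, kp_inner):
    """Outer PI (angle -> rate setpoint), inner P (rate -> motor power difference)."""
    angle_err = angle_sp - angle
    i_state += ki_outer * angle_err * DT
    rate_sp = max(-250, min(250, kp_outer * angle_err + i_state))   # outer limit
    motor_diff = max(-70, min(70, kp_inner * (rate_sp - rate)))     # inner limit
    return motor_diff, i_state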
I'm trying to use a linear actuator (sample electric cylinder) for a soil penetration device. Originally, I thought of using a pressure sensor to know when/if I hit a rock. However, I am not seeing many linear actuators with integrated pressure sensors but they do come with potentiometers. So I am thinking, would it be possible to use the potentiometer as a rough way to know if I hit something? Basically, my program would check if its position does not change for a small time interval. If the position has not changed before it extends the maximum length, then the actuator must have hit something it cannot penetrate well. Anyone have any ideas?
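As a rough sketch of the idea described above (declare a hit when the potentiometer reading stops changing before full extension); read_position(), extend() and the threshold values are placeholders for whatever the real hardware interface provides:

import time

POSITION_TOLERANCE = 2     # ADC counts treated as "no movement"
STALL_TIME = 0.5           # seconds without progress before declaring a hit
MAX_EXTENSION = 1000       # ADC reading at full stroke

def detect_obstruction(read_position, extend):
    """Extend until full stroke or until the potentiometer stops changing."""
    last_pos = read_position()
    last_progress = time.monotonic()
    while last_pos < MAX_EXTENSION:
        extend()
        pos = read_position()
        if abs(pos - last_pos) > POSITION_TOLERANCE:
            last_pos = pos
            last_progress = time.monotonic()
        elif time.monotonic() - last_progress > STALL_TIME:
            return True        # stalled before full stroke: probably hit something
        time.sleep(0.05)
    return False               # reached full extension without stalling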
I want to design an LQR controller for my two-wheeled mobile robot. While designing the controller, I need to find the LQR gain K, which requires the state-space model of the wheeled mobile robot. My question is how to find the matrices A, B, C and D for this nonlinear system using the equations below. The outputs are x, y and theta, while the inputs are v and w.
It's me, the greatest noob, again. Thank you very much for your last help, guys. This time I am working on a line-following robot and I want to use PID control on it, but I haven't found a good resource from which to learn it. I want to become fluent with it and learn it in depth, so please suggest some resources that will take me from beginner level to pro level. I want to learn it from the ground up and implement it on my line follower at the same time. You can also suggest any good algorithm or control scheme for my line follower; it would be very kind of you. Thanks in advance, you guys are awesome.
I have been using dual RTK receivers for my tractor projects. This gives me reliable position, heading, and tilt at low rates (10 Hz). I recently began working with an INS (an Advanced Navigation Spatial Dual) and it's wonderful to get a full Pose from a single device at up to 1000 Hz. The problem I have is that an INS like this is expensive (~$12K) and total overkill for what I'm doing (driving on ground at 5.5 MPH). Is there a way to achieve an adequate result using FOSS? The articles I find on Kalman filtering seem to always assume that the NavSatFix does not provide sufficiently accurate absolute position data. RTK, however, does provide sufficient accuracy and precision for my needs. Note that while I do have radar ground speed available, my strong preference is to just use RTK receivers and a MEMS IMU. Let's say that I want specifications comparable to my Spatial Dual but the update rate doesn't need to be over 100 Hz. Update: I found this paper which addresses the constraints of a (slow land-based non-holonomic) vehicle.
I recently had an exam in robotics and one of the questions was about forward kinematics on the Stanford manipulator below. The DH parameters were given and we were asked to calculate the angular velocity $v_6$. The question mentioned that there was a trick, a shortcut of sorts, so that we would arrive at the answer faster than going through the full forward process, calculating $v_{1-6}$ and $w_{1-6}$ using all the rotation matrices. Does anyone know that trick? It would be really helpful to understand something like that for the resit, when you only have 90 min for the whole exam. Thanks!
I recently purchased an iRobot Create2 and have had a lot of issues in the last few weeks with the battery being dead when I go to use it. The robot almost never leaves its charger at this time because I am still working on coding it. This makes me wonder if there is a state I could be leaving the robot in that is preventing it from charging. The only way I can get it to charge is to do a hard reset which then allows it to start charging. My suspicion is that if you put it in full mode and disconnect from it that it can not charge. Can anyone confirm or deny this?
I need the covariance of a parameter estimate for reliability estimation. According to this post, the covariance can be approximated from the Hessian as $$ \sigma^2 H^{-1} = C $$ where $H$ is the Hessian, $C$ is the covariance and $\sigma^2$ is the variance of the residual. I am trying to understand where this comes from and how it is derived, but I could not find related material anywhere. Does anyone know a good reading reference for this?
Since the mini-DIN power is limited to 250 mA, has anyone found a way to tap the battery to supply power to added electronics? I'd prefer this over adding a secondary battery and a separate charger. Thanks, Frank
One of the nice contributions of SVO is that it proposes a way to use 2D edge features in pose optimization via a point-to-line constraint. But in the actual code, I could not find where any feature is assigned the edge-type property. I thought the feature-detector part did it, but it does not: the results from the feature detector do not say whether a feature is edge type or not. Does anybody know how this works?
I am following this tutorial to install Orocos on my system with Ubuntu 16.06 and Xenomai 3.0.5, but I'm getting this error:

Orocos RTT version 2.9.0
No orocos-rtt.cmake file loaded, using default settings. See orocos-rtt.default.cmake
Detected OROCOS_TARGET environment variable. Using: xenomai
-- CMAKE_VERSION: 3.5.1
-- Found Boost Uuid in /usr/include.
-- Orocos target is xenomai
XENOMAI_INCLUDE_DIR=XENOMAI_INCLUDE_DIR-NOTFOUND
XENOMAI_NATIVE_LIBRARY=XENOMAI_NATIVE_LIBRARY-NOTFOUND
CMake Error at config/LibFindMacros.cmake:74 (message):
  Required library XENOMAI NOT FOUND.
  Install the library (dev version) and try again.
  If the library is already installed, set the XENOMAI_ROOT_DIR environment variable or use cmake to set the missing variables manually.
Call Stack (most recent call first):
  config/FindXenomai.cmake:72 (libfind_process)
  config/check_depend.cmake:164 (find_package)
  CMakeLists.txt:106 (INCLUDE)
-- Configuring incomplete, errors occurred!
See also "/home/robot/orocos-toolchain/build_isolated/rtt/install/CMakeFiles/CMakeOutput.log".
<== Failed to process package 'rtt':
  Command '['/home/robot/orocos-toolchain/install_isolated/env.sh', 'cmake', '/home/robot/orocos-toolchain/src/orocos_toolchain/rtt', '-DCMAKE_INSTALL_PREFIX=/home/robot/orocos-toolchain/install_isolated', '-G', 'Unix Makefiles']' returned non-zero exit status 1
Reproduce this error by running:
==> cd /home/robot/orocos-toolchain/build_isolated/rtt && /home/robot/orocos-toolchain/install_isolated/env.sh cmake /home/robot/orocos-toolchain/src/orocos_toolchain/rtt -DCMAKE_INSTALL_PREFIX=/home/robot/orocos-toolchain/install_isolated -G 'Unix Makefiles'
Command failed, exiting.

Before this I had successfully installed the Orocos toolchain without the Xenomai kernel, but now I'm getting this error.
I have a dataset containing points in SE(3), i.e. the 4 $\times$ 4 transformation matrices and the corresponding dual quaternions. Now I want to plot these points in a 3D graph. How can I generate this plot?
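One common way to visualize such data (a hedged sketch, assuming the poses are available as 4x4 NumPy arrays) is to draw each pose as a small coordinate frame at its translation with matplotlib:

# Sketch: draw each SE(3) pose as a small RGB frame (x = red, y = green, z = blue).
import numpy as np
import matplotlib.pyplot as plt

def plot_poses(poses, axis_len=0.1):
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    for T in poses:
        origin = T[:3, 3]
        for col, color in zip(range(3), ('r', 'g', 'b')):
            tip = origin + axis_len * T[:3, col]        # rotated unit axis
            ax.plot([origin[0], tip[0]], [origin[1], tip[1]],
                    [origin[2], tip[2]], color=color)
    ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
    plt.show()

# Example with the identity and one translated pose:
T1 = np.eye(4)
T2 = np.eye(4); T2[:3, 3] = [0.5, 0.2, 0.1]
plot_poses([T1, T2])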
I am building a robotic arm for pick and place application. The robot is going to recognize several objects using the RGB feed from Kinect (will use a model such as YOLOv2 for object detection, running at maybe 2-3 FPS) and find the corresponding depth map (from Kinect again) to be used with the kinematic models for the arm. The YOLOv2 network input size is 416x416 px. I will be working on Ubuntu 16.04 with ROS Kinetic. I also plan to work on indoor mapping and autonomous navigation in the future. Is the Kinect 360 sufficient to achieve this? Is it worth buying the newer Kinect for XBOX One and will the 1080p feed make a huge difference?
According to one of their brochures, Harmonic Drive numbers their models with numbers like 8, 11, 14, 17, etc. I noticed that Chinese suppliers like Laifaul have a similar numbering scheme, though it is not clear to me that they correspond to the same specs that the Harmonic Drive models do. Do these numbers have some physical significance? Torque? Dimension? Or are they just a legacy model-numbering scheme?
I am trying to understand how to draw the coordinate systems for forward kinematics. The figure is: Could someone please explain how the coordinate systems are drawn as per the DH parameters? Basically, when do we have to shift the origin of a frame and when not? P.S.: I mainly do not get how they drew frame 2. Why was it not shifted to frame 1's position?
I am fairly new to control systems and I am currently making a line follower using a PID loop. I am using 6 IR LED/photo-transistor pairs to detect the line, but I am not sure how to calculate the error. I searched online and many people use error = setpoint - current position, but I am struggling to figure out how to use the input from the 6 sensors to get a precise position. PS: I tried the loop with digital sensor input and it worked (giving each case an error value), but I am planning to use the analog values of the sensors. The line width is approximately 1.7 cm and it is a white line on a black track. Also, this project is for a competition, and I would really appreciate any advice on which microcontroller to use (currently an STM32F401RE or Arduino Uno) or anything else I can do to make the robot faster and better. Cheers!
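For reference, one common way to turn several analog readings into a single position estimate is a weighted average (centroid) of the sensor locations; the sketch below only illustrates that idea, and the calibration numbers are made up:

SENSOR_POSITIONS = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]   # sensor offsets, in units of sensor pitch

def line_position(raw, black_level=100, white_level=900):
    """Return the estimated line position; 0.0 means centered, None means line lost."""
    weights = []
    for value in raw:
        # Normalize each reading to 0..1 (white line on a black track).
        w = (value - black_level) / float(white_level - black_level)
        weights.append(max(0.0, min(1.0, w)))
    total = sum(weights)
    if total == 0:
        return None
    return sum(p * w for p, w in zip(SENSOR_POSITIONS, weights)) / total

# The PID error would then be: error = 0.0 - line_position(raw)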
I wanted to design an LQR controller for my two-wheeled mobile robot. How can the equations of a two-wheeled mobile robot be rearranged into a state-space model? My two-wheeled mobile robot is a nonlinear system. The outputs of the system are v, the linear velocity, and w, the angular velocity, while the inputs of the system are x, y and theta.
Our team just finished the FRC competition and I am trying to learn more about robots in general. I'm trying to build a robot in a setting similar to the FRC competition. However, the roboRIO is out of my budget. I'm currently set on buying a Raspberry Pi 3B as the main controller board, as a replacement for the roboRIO. However, I do not quite understand how I'm going to power the 12 V motors. In our FRC build we had a power distribution panel, but I'm pretty sure I would fry my Raspberry Pi. I believe I need some kind of power distribution board that provides 5 V at 2.5 A for the Raspberry Pi and also 12 V on some other output. Does this exist, or am I looking for something that does not exist? My build would be powered by a 12 V battery like these Interstate Batteries (am-3062).
I'm using an L298N motor driver to spin a stepper motor (42BYGHM809). When connecting it to batteries (2 x 9 V), the batteries heat up after a few minutes and stop working, and when connecting it to a power supply, the supply reaches its current limit. I think the driver has too little resistance, but from searching the web I didn't see any tutorial that recommends using a resistor with this driver. I tried to connect a 100 ohm resistor and it burnt, and with a 450 ohm resistor the motor vibrated but didn't spin. I saw on the web that when using a high voltage it is recommended to remove the 12 V connector (as seen in the picture), but when removing it the driver doesn't turn on (the LED remains off), as if it is not powered (all the things I've tried above are with the connector in place). Thanks for your help. (source: bigcommerce.com) Schematic: Arduino code:

#include <Stepper.h>
#include <SoftwareSerial.h>

// *** General ***
const bool DEBUG_MODE = true;
bool new_input = false;
bool use_bt = true;
int motorSpeed = 20;

// *** Stepper ***
const int stepsPerRevolution = 400;
Stepper myStepper(stepsPerRevolution, 4, 5, 6, 7);
float degCount = 0;

// *** communication ***
SoftwareSerial bt (11,12);
int usr_input = 0;
char sign;
const int BT_WAIT_TIME = 3, SER_WAIT_TIME = 1;

void setup() {
  if (use_bt){
    bt.begin(9600);
    //bt.listen();
  }
  Serial.begin(9600);
  myStepper.setSpeed(motorSpeed);
}

void loop() {
  // *** Get usr_input from BT or serial ***
  if (use_bt){
    if (bt.available()){
      Serial.println("BT");
      delay(BT_WAIT_TIME);
      sign = bt.read();
      usr_input = (int)bt.read();
      if (sign == '-'){
        usr_input = -usr_input;
      }
      new_input = true;
    }
  }else{
    if (Serial.available() > 0) {
      usr_input = (int)Serial.parseInt();
      delay(SER_WAIT_TIME);
      while(Serial.available()) Serial.read();
      new_input = true;
    }
  }
  if (new_input){
    new_input = false;
    if (DEBUG_MODE) Serial.print("User input: ");Serial.println(usr_input);
    rotate(usr_input);
  }
}

// Rotate motor approximately x degrees and return how many degrees it really did.
float rotate(int degrees){
  int steps = round(degrees / 360.0 * stepsPerRevolution);
  if (DEBUG_MODE) Serial.print("Steps: ");Serial.println(steps);
  myStepper.step(steps);
  float real_degrees = steps * 360.0 / stepsPerRevolution;
  if (DEBUG_MODE) Serial.print("Real deg: ");Serial.println(real_degrees);
  degCount += real_degrees;
  if (DEBUG_MODE) Serial.print("Deg count: ");Serial.println(degCount);
  return real_degrees;
}
I see that, in the correction step of the Kalman filter, there is an equation to update the covariance matrix. I have been using it in the form $P = (I - KH)P'$, where $P$ is the covariance matrix, $K$ is the Kalman gain, $H$ is the observation model and $I$ is the identity matrix. However, I also see a different equation in some literature: $P = P' - KSK^{T}$, where $S = HPH^{T} + Q$ and $Q$ is the noise matrix for the observation model. Are these two equations the same? If so, how can it be proved?
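For what it's worth, a short derivation (using the standard gain definition $K = P'H^{T}S^{-1}$, which the question does not state explicitly) shows the two forms agree:
$$K S K^{T} = \left(P'H^{T}S^{-1}\right) S \left(P'H^{T}S^{-1}\right)^{T} = P'H^{T}S^{-1}HP' = KHP',$$
using the symmetry of $S$ and $P'$, and therefore
$$P' - K S K^{T} = P' - KHP' = (I - KH)P'.$$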
I have a point (x, y, z) in 3D space. This point rotates by theta1 about an arbitrary axis ax1. This axis (ax1) in turn rotates by theta2 about another axis ax2. What will be the new coordinates of the point? I am going to generalize this point transformation to rotations about n arbitrary axes in 3D space. What would the procedure be: matrix rotation or quaternion formulation? Thank you very much.
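As a numerical illustration only (the axes and angles below are made-up examples), either formulation boils down to composing axis-angle rotations, e.g. with SciPy; the exact composition order depends on whether the second rotation is applied to the point or to the first axis, which the question leaves open:

import numpy as np
from scipy.spatial.transform import Rotation as R

p = np.array([1.0, 0.0, 0.0])                # the point (x, y, z)
ax1 = np.array([0.0, 0.0, 1.0])              # unit axis 1 (example)
ax2 = np.array([0.0, 1.0, 0.0])              # unit axis 2 (example)
theta1, theta2 = np.deg2rad(30), np.deg2rad(45)

R1 = R.from_rotvec(theta1 * ax1)             # rotation by theta1 about ax1
R2 = R.from_rotvec(theta2 * ax2)             # rotation by theta2 about ax2

p_new = (R2 * R1).apply(p)                   # apply R1 first, then R2
print(p_new)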
I am building a robot and I have a concern: will an 11.1 V, 30 C, 4500 mAh battery work properly with this motor driver: https://www.pololu.com/product/708? Thanks in advance.
These days I'm learning Orocos patched with Xenomai. Although I have found ways to create hard real-time control software, I didn't find any information on how I should use it to control servo motors and sensors such as encoders. I thought that I may have to use Ethernet cables to connect the main PC running the real-time kernel to some kind of servo drive, so that the PC (or micro PC) and the drive can communicate with each other in hard real time. I didn't actually find a practical solution on Google describing how to do this, or which external drives or hardware I should use. So can anyone show me the way to complete this task? How should my Linux PC or microcontroller running real-time software communicate with industrial-grade sensors and motors? I mention industrial-grade motors because my project needs that level of accuracy and control. I don't want to learn an HDL or PLC programming and make it more complicated.
What is the difference between these two? Which should I choose for hard real-time control purposes? What are the pros and cons of one versus the other?
I would like to use (or control) a gimbal brushless motor like a servo, meaning I want to control position, not rotation speed. The position must range from 0 to 180 and from 0 to -180 degrees. I will use an Arduino board and, I think, an ESC board. Can I use the PWM control offered by the Servo library? Is it possible? Does anyone have any idea a) whether it is possible to do what I want, and b) any guidelines on library usage? Thanks in advance.
Is there a database or website that has collected the seminal papers in different disciplines of robotics like machine learning, AI, mobile robots, etc.? By seminal I mean papers that made a path-breaking impact on the theoretical side, for example, proved a theorem that captivated and inspired a large number of derivative works. So I am mainly looking for high-impact papers that made fundamental contributions in mathematical modeling, algorithm design, etc., and not so much on the hardware/application side, because in that respect videos from Boston Dynamics or Festo Robotics are the better sources of inspiration. I know some seminal works like: Latombe's planning book, Khatib's potential field method, the SLAM paper, Kalman's derivation of his filter, and the DP paper by Bertsekas. But similar seminal papers seem to be missing for, say, robotic formation control. So my question is: has someone collected papers that rigorously and mathematically showed some big result in robotics?
In my dynamic model of a two-wheeled mobile robot, the total equivalent inertia, denoted $I$, is described by the equation below, where $I_c$ is the moment of inertia of the DDMR about the vertical axis through the center of mass and $I_m$ is the moment of inertia of each driving wheel (with its motor) about the wheel diameter. May I know how to measure or calculate $I_c$ and $I_m$? My wheeled mobile robot is the Arduino Robot.
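For a rough first estimate (an approximation I am assuming, treating the robot body as a uniform box of mass $m_b$, length $l$ and width $w$, and each wheel plus rotor as a uniform disc of mass $m_w$ and radius $r$), the standard rigid-body formulas give
$$I_c \approx \frac{m_b\,(l^2 + w^2)}{12}, \qquad I_m \approx \frac{m_w\,r^2}{4},$$
since a uniform disc about its diameter has inertia $\frac{1}{4}m r^2$. Measuring them experimentally, for example with a torsional-pendulum test, is another common option.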
I'm looking for suggestions here. I have an HS-805BB servo with torque (kg-cm/oz-in) of 24.7/343 @ 6.0 V, used to rotate a robot body up and down from one side. It seems to me that this servo is always running at its maximum torque, which is not good for its lifespan or battery consumption, so I thought of two solutions: install a second servo on the other side and have both servos working in parallel, or build a gearbox to increase the torque limit. What are your suggestions as to which option is preferable?
I have this problem of an excavator from a test I took a while ago. Frame $\underline{\vec{e}}^0$ is the origin. Frame $\underline{\vec{e}}^1$ rotates around $\underline{\vec{e}}_{3}^0$ with angle $\theta_1$. Frame $\underline{\vec{e}}^2$ rotates around $\underline{\vec{e}}_{2}^1$ with angle $\theta_2$. Frame $\underline{\vec{e}}^3$ rotates around $\underline{\vec{e}}_{2}^2$ with angle $\theta_3$. The question is: I know that the angular velocity vector is expressed as $^{30}{\vec{\omega}}=\dot{\theta}_1\vec{e}_{3}^0+\dot{\theta}_2\vec{e}_{2}^1+\dot{\theta}_3\vec{e}_{2}^2$, but I have some trouble expressing it in frame $\underline{\vec{e}}^3$. This is what I've come up with: $$^{30}{\vec{\omega}}=\begin{bmatrix} 0 & 0 & \dot{\theta}_1 \end{bmatrix} \underline{A}^{32} \underline{A}^{21} \underline{\vec{e}}^3+\begin{bmatrix} 0 & \dot{\theta}_2 & 0 \end{bmatrix} \underline{A}^{32} \underline{\vec{e}}^3+\begin{bmatrix} 0 & \dot{\theta}_3 & 0 \end{bmatrix} \underline{\vec{e}}^3$$ in which $$\underline{A}^{21}=\begin{bmatrix} \cos(\theta_2) & 0 & -\sin(\theta_2) \\ 0 & 1 & 0 \\ \sin(\theta_2) & 0 & \cos(\theta_2)\end{bmatrix}, \qquad \underline{A}^{32}=\begin{bmatrix} \cos(\theta_3) & 0 & -\sin(\theta_3) \\ 0 & 1 & 0 \\ \sin(\theta_3) & 0 & \cos(\theta_3)\end{bmatrix}.$$ I'm confused about the sequence $\underline{A}^{32} \underline{A}^{21}$: does this mean you go from frame $1$ to $2$ and then from $2$ to $3$, or should the order be reversed? There's also an expression for $\underline{A}^{10}$, but I don't think that has to be used here. I'm confused, and the reader I have from my university isn't helping. Or I'm just too stupid to understand it. Thanks in advance.
We are having an issue where, after toggling the Create 2 from passive to active, then waiting a few seconds and toggling back to passive while on the charger, all serial communication fails a certain percentage of the time. (Links to images of the current and serial communication are included below to illustrate it.)

Detail
We are working to have the Create 2 power a NUC, auto-charge, and stay alive for long (multi-month) periods of time. Because we are taking power directly off the battery (we need higher current), we found that we have to toggle the mode from passive to full and back so that it checks the voltage and goes back from trickle charge to full charging if it has been a few hours and the computer has discharged the battery. This is because the trickle-charge algorithm does not check voltage, as it assumes the load on the Roomba is not high enough to discharge it. We started with it failing this way and have now verified that if we just plug a laptop into the Create through the USB serial cable with nothing else plugged in and then toggle passive/full/wait/passive, a percentage of the time we also get comm failures without any separate hardware. Because the Create 2 turns off after 5 minutes of no communication, we have tried toggling it every 4 minutes and see the failure reliably happening, so this is not an issue where the Create 2 is just going into sleep mode.

Reproducibility
We are seeing a very repeatable issue where the serial communication (using the current-generation USB serial cable from iRobot) is lost some percentage of the time, exactly at the point where it starts charging the robot again. After that, no packets come in from the robot for a long while. Some time later, after we have repeatedly tried to toggle from passive to full and back, communication reappears. We have verified this using the ROS stack (create_autonomy) and also by directly echoing characters on the command line, e.g. echo -e "\x07" > /dev/ttyUSB0 to reset things. When it is in this state, no software resets work, unplugging and replugging the serial cable does not work, etc. We have to take it off the charger physically (or wait for a long time until our system toggles it again and it does work). We have verified that this occurs (when the Create 2 is on its charger) on:
- a Mac plugged in only over the Create cable with no power taken from the system;
- a NUC taking power directly off the battery through a DC/DC converter;
- a NUC taking power off an intermediate battery charging from the Create.

Mitigating it
We have added ferrite beads on all the cables to mitigate any high-frequency noise, and that does not seem to solve it either. We have also tried 3 different Create cables with different serial/USB chips, and even built our own from scratch, and that did not fix it. We have tried separately powered USB hubs, unpowered ones, and plugging directly into the computer. We have tried it on 3 separate Create 2's. I have attached links below with graphs of the output data when it fails. When there is a straight line on current/voltage, that is repeated data, showing that it was stuck and not updating. It always happens right when the system steps up the current to start charging again, so it seems to be linked to a current spike; it only breaks things 1 in 5 or 10 times, but once broken, it stays that way for a very long time.
Graphs (linked): 1) Communication fails after multiple successful toggles; 2) Detail showing exactly when communication fails; 3) After many minutes, it is working again.
When it has been broken, I have tried turning off our software and addressing the system by echoing data to it directly over the OI to turn it off, on, reset, etc., and do not see communication resume correctly. If we unplug the Roomba from the charger by hand (pull it off), then many times it does work, but not always. If we then reset everything by holding down the buttons, it does seem to work 95% of the time, and sometimes we have even had to pull the battery itself to make it work. If someone has encountered this before, advice would be helpful. We had thought that it was a ground-loop issue, but given that it occurs on so many different cables, Roombas, power situations and compute devices, we are trying to figure out what is fundamentally wrong with our setup. Thanks!
We have used an SMC-108 IMU sensor in a hydrographic survey. This was our first time using this sensor. While processing the data, we noticed that the roll, pitch, and heave values were not right. It turns out this IMU sensor has to be aligned in a specific orientation to give correct values, and we had our IMU oriented in a different direction from what it should be (see image below). Is there any way we can correct for this misalignment and get the correct roll, pitch and heave information? Thanks in advance.
I have read through lectures/tutorials on A*, but they have all been about computer simulations. I have an autonomous wheeled robot that is traversing an unknown map (essentially, it'll be a tabletop with no obstacles but with edges it can fall off of); it has an indoor GPS system, an IMU, and cliff sensors. I'm trying to input a desired waypoint. Does the robot assume that everywhere is traversable, break the map up into a grid, calculate the best path, and go for it, OR is it supposed to iterate this process? If the latter, how should the robot proceed in iterating? I'm thinking that it would travel along the "best path" until a cliff sensor is triggered, and then it would have to recalculate a new best path.
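Purely as an illustration of the iterate-and-replan idea described above, here is a sketch; a_star, move_to, cliff_triggered and current_cell are hypothetical placeholders for the real planner and robot interface:

def navigate(start, goal, grid, a_star, move_to, cliff_triggered, current_cell):
    """Plan on an optimistic grid, execute, and replan whenever a cliff is found."""
    pose = start
    while pose != goal:
        path = a_star(grid, pose, goal)          # plan on current knowledge
        if path is None:
            return False                         # nothing traversable remains
        for cell in path[1:]:
            if cliff_triggered():
                grid.mark_blocked(cell)          # record the newly discovered edge
                break                            # drop out and replan from here
            move_to(cell)
            pose = current_cell()
    return True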
I am working on the kinematics of a 5-DOF robot. I have already derived the geometric Jacobian for position and orientation control. But for singularity analysis, I require a simplified Jacobian, which can be obtained on the basis of screw theory and is called the screw-based Jacobian. However, I am not able to find a reference that completely explains the method to derive the screw-based Jacobian matrix. Any advice on where I can find it? Please help me out.
I have a question about the AR Drone 2.0. I am using ROS, Gazebo and Python. My question is: how do I get the velocity of the quadcopter? I need the velocity to compute the position of the quadcopter. Thank you.
I am making a line follower to follow a white line (approx. 1.7 cm wide) on a black track. I am using an array of 5 TCRT5000 sensors (IR LED + phototransistor) to detect the line. I was previously working with PID, but recently I found a few papers on fuzzy logic, some of which showed fuzzy logic performing better than PID. Is fuzzy logic a better choice than PID for my case? I want my bot to be as fast as possible. P.S. The bot is just following a constant-width line on a track with a few slopes of 18 degrees.
My question is a bit specific, because it is linked to a certain algorithm, so I didn't find any other solutions on how to go about this problem. If you could refer me to research papers, instructions or anything similar which is already available in regard to this problem, I would really appreciate it. Thank you in advance for reading my post and taking your time. I am currently trying to find out how someone would go about supporting the pose estimation in a visual SLAM algorithm, since the optimization procedure would overwrite that initial guess anyway. But what if I want the algorithm to use the real-world scale of the reconstructed camera trajectory? Suppose I know the perfectly accurate trajectory of how the camera moved (ground truth). In my view it should be as easy as setting the new poses for every frame. Unfortunately, my attempt to replace the camera poses with this information actually "broke" the algorithm (that is, it didn't help with the pose estimation, but led to it failing entirely). Now I am a bit sceptical as to whether or not this is even possible. Let's make my question more specific: there is an algorithm called Direct Sparse Odometry (compare: https://vision.in.tum.de/research/vslam/dso). It is not based on detecting and matching features (so-called "indirect" methods) but operates on a direct comparison of pixel intensities (thus called a "direct" method). At the time of this writing, the current open-sourced version is tailored for monocular input videos / image sequences. You can see that there are actually three parts that are central to initializing the new pose:
1.) CoarseInitializer::setFirst(): This is called once as tracking of the camera movement starts. Source: https://github.com/JakobEngel/dso/blob/master/src/FullSystem/CoarseInitializer.cpp#L771
2.) CoarseInitializer::trackFrame(): This is called for the first few frames to initialize the scene based on the beginning of the recorded sequence. To me it seems like it is used to get the rotation and scaling of the cameras right at the beginning. This procedure uses a high number of points / pixels to estimate the initial configuration. Source: https://github.com/JakobEngel/dso/blob/master/src/FullSystem/CoarseInitializer.cpp#L114
3.) CoarseTracker::trackNewestCoarse(): After initialization is completed (around 10 frames), the camera pose is tracked across the sequence. However, the number of points used to track the camera is reduced significantly to speed up the tracking. This method is executed in a loop until the end of the recording. Source: https://github.com/JakobEngel/dso/blob/master/src/FullSystem/CoarseTracker.cpp#L556
Question: How would one incorporate the ground truth into this algorithm? Or in general: is replacing the initial poses (basically the lines in the source code where the default constructor is called to define a pose) a good idea, or does it need to be more sophisticated than that? Maybe it is even possible to get a measure of how good the estimate is (something like a Hessian for the poses and not for the individual points) in order to compare, but let's clarify whether the fundamental basics for such a comparison can be laid out first. ;) Any input is highly appreciated. Thank you!
In most implementations of quadcopter control systems I've seen, each axis of the quadcopter is controlled independently. For example, to control the rate about the roll axis, the desired output is calculated using a PID controller with the error as input, and that output is applied as a difference in thrust between the corresponding motors. I know this works; it's what I currently use in my quadcopter, but it has always bothered me that the relationship between Euler angles is ignored, so I've been trying to find a proper justification for this. By controlling the thrust of each motor we can directly control the torque vector, as described in the "Torques" section of this article. In that article the quadcopter dynamics model is described, but when it starts describing the PD control part it gives this as justification for setting each component of the torque proportional to an Euler angle: "$\text{Torques are related to our angular velocities by } \tau = I\ddot \theta$", where $\theta$ refers to the yaw-pitch-roll angle vector. But unless I'm missing something, that equation is valid for the angular velocity vector, not the yaw-pitch-roll vector. The implementation used in that article first chooses the torque vector based on the Euler angles and then solves for motor thrusts, and they obtain good results, but I don't understand the justification. Is there any justification for using the Euler angles to control each axis independently? EDIT: Seeing that the equation that transforms Euler angle derivatives into angular velocities is (the $\theta$ on the right is the $(\phi,\theta,\psi)$ vector): $$\omega=\begin{bmatrix} 1 &0&-s_\theta\\ 0 & c_\phi & c_\theta s_\phi\\ 0 &-s_\phi &c_\theta c_\phi\end{bmatrix}\dot\theta$$ it seems it might just be a small-angle approximation (since the matrix is close to the identity when the angles are small).
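To spell out the small-angle point (my addition, using the matrix above):
$$\text{For } |\phi|,|\theta|\ll 1:\quad s_\phi\approx\phi,\; s_\theta\approx\theta,\; c_\phi\approx c_\theta\approx 1 \;\Rightarrow\; \begin{bmatrix} 1 & 0 & -s_\theta\\ 0 & c_\phi & c_\theta s_\phi\\ 0 & -s_\phi & c_\theta c_\phi\end{bmatrix}\approx I_{3\times 3}, \quad\text{so } \omega\approx\dot\theta.$$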
I am trying to estimate the focus of expansion (FOE) for a moving camera mounted on a mobile robot. I am using the method described in this method (pages 13-14). My aim is to use it for obstacle avoidance using optical flow. I have attached the code below. When I display the calculated point it keeps jumping around and isn't correct. Any advice would be appreciated.

void calculate_FOE(vector<Point2f> prev_pts, vector<Point2f> next_pts)
{
    MatrixXf A(next_pts.size(), 2);
    MatrixXf b(next_pts.size(), 1);
    Point2f tmp;
    for (int i = 0; i < next_pts.size(); i++)
    {
        tmp = prev_pts[i] - next_pts[i];
        A.row(i) << prev_pts[i].x - next_pts[i].x, prev_pts[i].y - next_pts[i].y;
        b.row(i) << (prev_pts[i].x * tmp.x) - (prev_pts[i].y * tmp.y);
    }
    Matrix<float, 2, 1> FOE;
    FOE = ((A.transpose() * A).inverse()) * A.transpose() * b;
}
A common practice in bundle adjustment is to reduce the state dimension by marginalizing the structure or pose states to improve the optimization speed. In the case where the 3D points (structure) $\textbf{p}_i$ are marginalized out as follows, the $\textbf{p}_i$ are triangulated to calculate the residual $\textbf{e}$: $\textbf{e} = \textbf{z}_{ij} - \pi(\textbf{T}_j\textbf{p}_i)$ where $\textbf{T}_j\in SE(3), \textbf{p}_i\in R^3$ are the states we want to estimate and $\textbf{z}_{ij}$ is the observed feature in $R^2$. Then only the pose-related terms are optimized: $\begin{bmatrix} \textbf{H}_{cc}& \textbf{H}_{cs} \\ \textbf{H}_{sc} & \textbf{H}_{ss} \end{bmatrix} \begin{bmatrix} \mathbf{\xi}_c \\ \textbf{p}_s \end{bmatrix}= \begin{bmatrix} \textbf{g}_{c} \\ \textbf{g}_{s} \end{bmatrix}$ $\bar{\textbf{H}}_{cc}=\textbf{H}_{cc}-\textbf{H}_{cs}{\textbf{H}_{ss}}^{-1}\textbf{H}_{sc}$ $\bar{\textbf{g}}_{c}=\textbf{g}_{c}-\textbf{H}_{cs}{\textbf{H}_{ss}}^{-1}\textbf{g}_{s}$ $\bar{\textbf{H}}_{cc}\mathbf{\xi}_c =\bar{\textbf{g}}_{c}$ Here my question arises. If we can calculate the 3D points $\textbf{p}_i$ by triangulation, only the $\textbf{T}_j$ are the state variables to be estimated. Then why do we bother to calculate the marginalization-related terms $-\textbf{H}_{cs}{\textbf{H}_{ss}}^{-1}\textbf{H}_{sc}$ and $-\textbf{H}_{cs}{\textbf{H}_{ss}}^{-1}\textbf{g}_{s}$, instead of optimizing only the poses via ${\textbf{H}}_{cc}\mathbf{\xi}_c ={\textbf{g}}_{c}$ (note that H and g are without bars)? I guess ${\textbf{H}}_{cc}\mathbf{\xi}_c ={\textbf{g}}_{c}$ is enough to find the optimal poses $\textbf{T}_j$. So my question is: why do we use $\bar{\textbf{H}}_{cc}\mathbf{\xi}_c =\bar{\textbf{g}}_{c}$ instead of ${\textbf{H}}_{cc}\mathbf{\xi}_c ={\textbf{g}}_{c}$?
As far as I know, OctoMap and other occupancy grid maps are only for mapping. How do they handle the loop-closure problem? The first method that came to my mind is optimizing a pose graph in the backend and sending the corrected map to the grid-map library to fuse and rebuild the map from the beginning. But I doubt that they do this. What's the secret?
We know that when the rotational and linear velocity of the camera is high it is highly likely that we get blurred images. Blurred images affect the accuracy of the feature tracking. Assuming that these velocities are known, how do we penalize the periods of low-quality tracks? For example, let $\textbf{p}_i\in R^3$ be the $i_{th}$ 3D point and $\textbf{z}_{ij} \in R^2$ the $j_{th}$ projected feature at the $i_{th}$ camera pose. Then the residual in the bundle adjustment problem is defined as $\textbf{e} = \textbf{z}_{ij} - \pi(\textbf{T}_j\textbf{p}_i)$. If we know the velocity at $\textbf{T}_j$, how do we incorporate this information into the optimization? An idea that instantly came to my mind is using the velocity in terms of $f = \textbf{e}^T\Sigma_j^{-1}\textbf{e}$, where the covariance is $\Sigma_j=I_{3\times3}v$ and $v$ is the velocity. It might work but looks heuristic. Does anyone know related literature?
My apologies, but I've run out of ideas. I'm very new to Python and stuck trying to execute the following code from this tutorial:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
import RPi.GPIO as GPIO

# Set the GPIO modes
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

_FREQUENCY = 20


def _clip(value, minimum, maximum):
    # """Ensure value is between minimum and maximum."""
    if value < minimum:
        return minimum
    elif value > maximum:
        return maximum
    return value


class Motor:
    def __init__(self, forward_pin, backward_pin):
        self._forward_pin = forward_pin
        self._backward_pin = backward_pin

        GPIO.setup(forward_pin, GPIO.OUT)
        GPIO.setup(backward_pin, GPIO.OUT)

        self._forward_pwm = GPIO.PWM(forward_pin, _FREQUENCY)
        self._backward_pwm = GPIO.PWM(backward_pin, _FREQUENCY)

    def move(self, speed_percent):
        speed = _clip(abs(speed_percent), 0, 100)

        # Positive speeds move wheels forward, negative speeds
        # move wheels backward
        if speed_percent < 0:
            self._backward_pwm.start(speed)
            self._forward_pwm.start(0)
        else:
            self._forward_pwm.start(speed)
            self._backward_pwm.start(0)


class Driver:
    def __init__(self):
        rospy.init_node('driver')

        self._last_received = rospy.get_time()
        self._timeout = rospy.get_param('~timeout', 2)
        self._rate = rospy.get_param('~rate', 10)
        self._max_speed = rospy.get_param('~max_speed', 0.5)
        self._wheel_base = rospy.get_param('~wheel_base', 0.091)

        # Assign pins to motors. These may be distributed
        # differently depending on how you've built your robot
        self._left_motor = Motor(13, 15)
        self._right_motor = Motor(36, 32)
        self._left_speed_percent = 0
        self._right_speed_percent = 0

        # Setup subscriber for velocity twist message
        rospy.Subscriber(
            'cmd_vel', Twist, self.velocity_received_callback)

    def velocity_received_callback(self, message):
        # """Handle new velocity command message."""
        self._last_received = rospy.get_time()

        # Extract linear and angular velocities from the message
        linear = message.linear.x
        angular = message.angular.z

        # Calculate wheel speeds in m/s
        left_speed = linear - angular*self._wheel_base/2
        right_speed = linear + angular*self._wheel_base/2

        # Ideally we'd now use the desired wheel speeds along
        # with data from wheel speed sensors to come up with the
        # power we need to apply to the wheels, but we don't have
        # wheel speed sensors. Instead, we'll simply convert m/s
        # into percent of maximum wheel speed, which gives us a
        # duty cycle that we can apply to each motor.
        self._left_speed_percent = (100 * left_speed/self._max_speed)
        self._right_speed_percent = (100 * right_speed/self._max_speed)

    def run(self):
        # """The control loop of the driver."""
        rate = rospy.Rate(self._rate)

        while not rospy.is_shutdown():
            # If we haven't received new commands for a while, we
            # may have lost contact with the commander-- stop
            # moving
            delay = rospy.get_time() - self._last_received
            if delay < self._timeout:
                self._left_motor.move(self._left_speed_percent)
                self._right_motor.move(self._right_speed_percent)
            else:
                self._left_motor.move(0)
                self._right_motor.move(0)

            rate.sleep()


def main():
    driver = Driver()

    # Run driver. This will block
    driver.run()


if __name__ == '__main__':
    main()

I get the following error:

(classic)adeoduye@localhost:~/catkin_ws$ rosrun simple_robot_navigation driver_node_controller
Traceback (most recent call last):
  File "/home/adeoduye/catkin_ws/src/simple_robot_navigation/src/driver_node_controller", line 117, in <module>
    main()
NameError: name 'main' is not defined

BTW, it's ROS code which I have implemented in my custom catkin_ws/src/simple_robot_navigation package.
I suspect it's a Python issue, but I'm very new to Python and not sure how to fix it. I would appreciate any help.
I am trying to estimate the orientation of a sensor platform using a gyroscope and an accelerometer, with a Kalman-filter-based approach. I integrate the readings from the gyro to obtain the orientation about the x, y and z axes; this will act as the process model. I am planning to use the accelerometer readings as the observation. I know how to estimate the orientation angles when the accelerometer is reading only gravity, by using the acceleration components in x, y and z. However, when the sensor platform is moving or accelerating (i.e. gravity is not the only acceleration the sensor is reading), can the accelerometer provide orientation/tilt estimation without the help of any other sensor? Can this approach work without a third sensor when the sensor platform has high acceleration?
Given Data and Algorithm
I have a stream of SE3 poses supplied by a basic wheel-encoder odometry through the ROS message-passing system. The odometry publishes data in the traditional ROS ENU coordinate frame (X - forward, Y - left, Z - up) with right chirality. I present this trajectory on the graphs below (TX, TY, RZ). It should be obvious that the other 3 dimensions contain all zeros, as the wheel-odometry poses have only 3 DoF. Then I rotate this stream of poses into another coordinate frame, customary for visual SLAM (Z - forward, X - right, Y - down). The acquired result is shown below and is not what I expected.

More Formally:
Let's say the agent's camera is aligned with the principal axes of the agent and the relative translation from the camera to the agent's center is neglected. In other words: $P_{C_i}^{B_i} = \text{Id}$. I have a stream of SE3 transformations from the current pose of the agent's body to the Odometry World coordinate frame: $$P_{B_0}^{W_O}, P_{B_1}^{W_O}, P_{B_2}^{W_O}, \dots$$ I need to observe this motion from the standpoint of a visual SLAM: $$P_{C_0}^{W_V}, P_{C_1}^{W_V}, P_{C_2}^{W_V}, \dots$$ So I do this in the following manner: $$P_{C_i}^{W_V} = P_{W_O}^{W_V} \cdot P_{B_i}^{W_O} \cdot P_{C_i}^{B_i} = P_{W_O}^{W_V} \cdot P_{B_i}^{W_O}$$ I define $P_{W_O}^{W_V} = (\text{Quaternion}(\text{Euler}(-90, -90, 0)), (0, 0, 0))$, where the Euler angles are intrinsic and active, and the order is $(z,x,y)$. The resulting quaternion $\text{qxyz}$ is $(\frac 1 2, \frac 1 2, -\frac 1 2, \frac 1 2)$, and the Euler-angle choice can be verified geometrically with the help of the following picture:

Results Interpretation and Question
I have plenty of positive X and positive Y translational movement at the start of the dataset, and some yaw rotation about the Z axis later on. So, when I look at this motion from the Visual World coordinate frame, I expect plenty of positive Z and negative X translational movement at the start of the dataset, and some yaw rotation about the -Y axis later on. While the linear part behaves exactly as I describe, the rotation does something else, and I am bewildered by the results and count on your help. It's terrible to realize that these group operations on SO3 are still a mystery to me. Comparison graphs are shown below (TX, TY, TZ, RX, RY, RZ). Sorry for the long post!

Meta: I have long pondered whether to post this in Math or Robotics. I finally decided on the latter, as the question seems very "applied".
Is there a closed-form solution of $\textbf{R}\textbf{R}_1=\textbf{R}_2\textbf{R}$ with respect to $\textbf{R}\in SO(3)$? $\textbf{R}_1$, $\textbf{R}_2 \in SO(3)$ are given. Added: I tried holmeski's solution but it fails because of rank deficiency in the A matrix (why?). The following code simulates holmeski's solution in MATLAB (please correct me if the code is incorrect):

clear all
cTl = rotx(rand*100)*roty(rand*100)*rotz(rand*100)
lTc = inv(cTl)
for k = 1:9
    l1Tl2{k} = rotx(rand*100)*roty(rand*100)*rotz(rand*100)
    c1Tc2{k} = cTl*l1Tl2{k}*lTc;
end
R = calib_RR1_R2R_closedform(l1Tl2, c1Tc2)

function R = calib_RR1_R2R_closedform(l1Tl2, c1Tc2_klt)
    Astack = [];
    for k = 1:length(l1Tl2)
        R1 = l1Tl2{k}(1:3,1:3);
        R2 = c1Tc2_klt{k}(1:3,1:3);
        A = [ R1(1,1) + R1(1,2) + R1(1,3) - R2(1,1) - R2(1,2) - R2(1,3), ...
              R1(2,1) + R1(2,2) + R1(2,3) - R2(2,1) - R2(2,2) - R2(2,3), ...
              R1(3,1) + R1(3,2) + R1(3,3) - R2(3,1) - R2(3,2) - R2(3,3), ...
              R1(1,1) + R1(1,2) + R1(1,3) - R2(1,1) - R2(1,2) - R2(1,3), ...
              R1(2,1) + R1(2,2) + R1(2,3) - R2(2,1) - R2(2,2) - R2(2,3), ...
              R1(3,1) + R1(3,2) + R1(3,3) - R2(3,1) - R2(3,2) - R2(3,3), ...
              R1(1,1) + R1(1,2) + R1(1,3) - R2(1,1) - R2(1,2) - R2(1,3), ...
              R1(2,1) + R1(2,2) + R1(2,3) - R2(2,1) - R2(2,2) - R2(2,3), ...
              R1(3,1) + R1(3,2) + R1(3,3) - R2(3,1) - R2(3,2) - R2(3,3)]
        Astack = [Astack; A];
    end
    det(Astack)
    [U S V] = svd(Astack);
    x = V(:,end);
    R = reshape(x, 3, 3);
With the conversion of the DD control line to BRC on the 500/600 series and Create 2, I have not been able to find a reliable way to wake the Create 2 from sleep. The spec suggests preventing sleep by pulsing the BRC line, but that would not fit my use case. Has anyone been successful in waking the Create 2 after it sleeps? Thanks, Frank
I did a search and couldn't find a site that stood out as RC-specific, and I was hoping you guys would be closest. Here's my question. I bought a set of Eachine EV100 FPV goggles. Every resource I've looked at said I should upgrade the dipole antennas. These have antenna diversity and not true diversity. I'm new to FPV and can't say I know what that means. As best I understand it, true diversity has two receivers and switches to the antenna with the strongest signal, whereas antenna diversity just uses one receiver and combines the signals somehow to improve them. I could be completely pulling that out of a hat, though; it might not be right at all. One YouTuber upgraded his using a clover-leaf style and a patch antenna. Another YouTube source I trust suggested using two clover-leaf styles. I'd like to know if you can use a clover leaf and a patch together with just antenna diversity, or if I would be better off getting the two-pack of clover leaves the other guys recommended. I know the clover leaf is for picking up signals in 360 degrees, while the patch is much stronger when it's pointed directly at the source. I just don't know how this stuff works, so I don't know if it works like that with these goggles or if they actually interfere and make it worse. If it doesn't cause issues, I'd think the patch and clover leaf would be the way to go. Thanks for any help you can provide, even if it's telling me the correct forum to post this in.
Below is the axis setup for the elbow joint manipulator. I am trying to workout the DH parameters for the manipulator. Green frame represents the base and frame1. Starting from there, I am able to get the correct values up to joint3 (purple frame). If I rotate the frame about Z, so the blue frame moves, it gives me incorrect values. Below is the DH parameter table. My question is, is the DH table setup correctly? If I move the frame about q3 I am getting invalid results. You will have to set q2 to -90 to achieve the configuration in picture. +---+-----------+-----------+-----------+-----------+-----------+ | j | theta | d | a | alpha | offset | +---+-----------+-----------+-----------+-----------+-----------+ | 1| q1| d1| 0| -90| 0| | 2| q2| 0| a2| 0| 0| | 3| q3| 0| a3| -90| 0| | 4| 0| d4| 0| 0| 0| +---+-----------+-----------+-----------+-----------+-----------+
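In case it helps, this is the kind of forward-kinematics check I am running against the table (a sketch only: it assumes the standard/distal DH convention $T_i = R_z(\theta_i)T_z(d_i)T_x(a_i)R_x(\alpha_i)$ and placeholder link values, not my real dimensions):

import numpy as np

def dh(theta, d, a, alpha):
    """Standard (distal) DH transform Rz(theta)*Tz(d)*Tx(a)*Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def fk(q1, q2, q3, d1=1.0, a2=1.0, a3=0.2, d4=1.0):
    rows = [(q1,  d1, 0.0, -np.pi / 2),
            (q2, 0.0,  a2,  0.0),
            (q3, 0.0,  a3, -np.pi / 2),
            (0.0, d4, 0.0,  0.0)]
    T = np.eye(4)
    for theta, d, a, alpha in rows:
        T = T @ dh(theta, d, a, alpha)
    return T

# e.g. the configuration in the picture, with q2 = -90 degrees:
print(fk(0.0, -np.pi / 2, 0.0)[:3, 3])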
I have been considering different distance sensing technologies for the following circumstance: Range needs to be about 2.5 meters The surface being measured is a wall that is painted black The angle between the wall and the sensor would be at worst around 10 degrees, usually around 45 degrees (so reflective technologies will have trouble, to add to the already black wall which will absorb most light). Preferably I would like to avoid IR because the environment is already going to have a lot of IR noise. Essentially I want to be able to triangulate the position and angle of an object placed in a rectangular box (about 2.5m x 1.8m) with black walls, with a lot of IR noise. All the technologies I have considered have issues: From what I understand ultrasonic will fare very poorly with the angled walls TOF / lidar modules will do poorly with the black (low reflection) walls. Infrared sensors will be interfered with by the IR noise. I could solve the angle issue by placing 6 or so of these sensors around the object so there are sufficient sensors with a mild angle to get accurate measurements, but the trick is eliminating the sensor readings that are inacurate due to sharp angles. Any thoughts?
I am working on stabilizing a quadcopter using an Arduino Due. I have slightly modified one of the examples from Jeff Rowberg's library to give yaw, pitch and roll angles (zero-initialized), and I am using the Servo library's writeMicroseconds() to send pulses to the BLDCs through their ESCs. One observation I have made is that there is quite a difference between the starting pulse values of the different ESCs: roughly, the numbers for my 4 motors are 1650, 1680, 1720 and 1780. So if the same pulse duration, say 1780, is sent to all ESCs, one of them would just be starting while the other three would already be spinning at decent rpm. My aim is to implement PID control of the motor speeds by measuring these angles (there is no remote control involved); the quad also needs to rise to a certain height setpoint and hover there. The difficulty I am facing is this: my output speed looks something like the following (taking just one motor here):

speed += ( ± Kp_yaw*yaw_error ± Kp_pitch*pitch_error ± Kp_roll*roll_error ± height_speed)
// height_speed is a small value, say 0.5 if current height < setpoint,
// -0.5 if current height > setpoint, and 0 once the height has been reached.

The ± signs depend on each motor's sense of rotation. Currently I am using only the P gain, and the P constants for yaw, pitch and roll are different for each of the 4 motors. The speed variable is also separate for each motor and initialized to 1600. The quad initially struggles for a few seconds to balance itself and rise, but each time it eventually overturns towards the weakest motor (the 1650 one). What is wrong with this approach?
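For reference, the structure I have been trying to follow is the usual one-controller-per-axis-plus-mixer layout, sketched here in Python for brevity (the gains, idle offsets and sign matrix are placeholders, not my tuned values, and the signs depend on frame layout and prop directions):

# One P (or PID) controller per axis; the *same* axis outputs are mixed into
# all four motors with a fixed sign matrix, plus a per-ESC idle offset to
# compensate for the different starting pulse widths.
KP = {'roll': 1.0, 'pitch': 1.0, 'yaw': 0.5}     # placeholder gains
IDLE = [1650, 1680, 1720, 1780]                  # measured ESC start pulses
# rows: motors; columns: (roll, pitch, yaw) contribution signs for an X quad
MIX = [(+1, +1, -1),
       (-1, +1, +1),
       (-1, -1, -1),
       (+1, -1, +1)]

def mix(roll_err, pitch_err, yaw_err, throttle):
    u_roll  = KP['roll']  * roll_err
    u_pitch = KP['pitch'] * pitch_err
    u_yaw   = KP['yaw']   * yaw_err
    pulses = []
    for i, (sr, sp, sy) in enumerate(MIX):
        p = IDLE[i] + throttle + sr * u_roll + sp * u_pitch + sy * u_yaw
        pulses.append(int(min(max(p, 1000), 2000)))   # clamp to the ESC range
    return pulses

print(mix(roll_err=2.0, pitch_err=-1.0, yaw_err=0.0, throttle=100))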
I'm currently working on a teleoperation code for crazyflie. I'm not using a joystick, but I'm using leapmotion and I'm trying to understand how they code for a joystick in order to implement the same idea in my code with leapmotion. I understand the general idea of what I'm supposed to be doing; 1) subscribing to the leapmotion data; 2) writing a transformation function and then 3) publishing the goal position to be sent to my controller. Coding is not my forte and having to deal with ROS is adding to the fun of all of this. So I'm going to ask some questions regarding this code because I don't understand it well. #!/usr/bin/env python import rospy import tf from geometry_msgs.msg import PoseStamped from sensor_msgs.msg import Joy from math import fabs lastData = None def joyChanged(data): global lastData lastData = data # print(data) if __name__ == '__main__': rospy.init_node('publish_pose', anonymous=True) worldFrame = rospy.get_param("~worldFrame", "/world") name = rospy.get_param("~name") r = rospy.get_param("~rate") joy_topic = rospy.get_param("~joy_topic", "joy") Why is it necessary to get the joy_topic param? Also, I'm not sure if this is correct but I believe that the x, y, z values are the position values for the joystick, so since this values are always the same they are probably using them as initial values for the position values. x = rospy.get_param("~x") y = rospy.get_param("~y") z = rospy.get_param("~z") rate = rospy.Rate(r) msg = PoseStamped() msg.header.seq = 0 msg.header.stamp = rospy.Time.now() msg.header.frame_id = worldFrame msg.pose.position.x = x msg.pose.position.y = y msg.pose.position.z = z yaw = 0 quaternion = tf.transformations.quaternion_from_euler(0, 0, yaw) msg.pose.orientation.x = quaternion[0] msg.pose.orientation.y = quaternion[1] msg.pose.orientation.z = quaternion[2] msg.pose.orientation.w = quaternion[3] pub = rospy.Publisher(name, PoseStamped, queue_size=1) rospy.Subscriber(joy_topic, Joy, joyChanged) while not rospy.is_shutdown(): global lastData if lastData != None: if fabs(lastData.axes[1]) > 0.1: msg.pose.position.z += lastData.axes[1] / r / 2 if fabs(lastData.axes[4]) > 0.1: msg.pose.position.x += lastData.axes[4] / r * 1 if fabs(lastData.axes[3]) > 0.1: msg.pose.position.y += lastData.axes[3] / r * 1 if fabs(lastData.axes[0]) > 0.1: yaw += lastData.axes[0] / r * 2 Why is he dividing the values from the joystick with the rate? This is where I get lost because I don't know how I would translate this leapmotion yaw, pitch and roll values. I guess I could write an if statement saying if the pitch is within this values then hover at 1 meter. quaternion = tf.transformations.quaternion_from_euler(0, 0, yaw) msg.pose.orientation.x = quaternion[0] msg.pose.orientation.y = quaternion[1] msg.pose.orientation.z = quaternion[2] msg.pose.orientation.w = quaternion[3] # print(pose) msg.header.seq += 1 msg.header.stamp = rospy.Time.now() pub.publish(msg) rate.sleep()
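To check my understanding of the division by the rate before wiring in the Leap data, I wrote myself this little sketch of the per-tick update (plain Python, no ROS; the "hand" values are my own made-up normalisation, not real leap_motion message fields):

DEADBAND = 0.1

def step_setpoint(x, y, z, yaw, hand, r):
    """One control-loop tick. `hand` holds normalised Leap inputs in [-1, 1].
    Dividing by the loop rate r turns each input into a per-tick increment,
    i.e. the hand offset acts as a velocity command on the goal pose, exactly
    like the joystick axes in the script above."""
    hx, hy, hz, hyaw = hand
    if abs(hz) > DEADBAND:
        z += hz / r / 2          # slower vertical speed, as in the original
    if abs(hx) > DEADBAND:
        x += hx / r
    if abs(hy) > DEADBAND:
        y += hy / r
    if abs(hyaw) > DEADBAND:
        yaw += hyaw / r * 2
    return x, y, z, yaw

# e.g. at r = 50 Hz, holding the hand fully "up" raises the goal 0.01 m per tick:
print(step_setpoint(0.0, 0.0, 1.0, 0.0, hand=(0.0, 0.0, 1.0, 0.0), r=50))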
I'm interested in Robotics but not very skilled in Maths. Since I always come across some very intense looking mathematical formulae in robotic reports, books etc I just wonder what branch of Mathematics I should be learning to better understand the material I'm trying to read? Is there a specific branch of maths relevant to robotics OR do I need expertise across multiple branches of maths? Is expertise in Maths necessary at all to get into robotics? Please help.
I am trying to derive the differential-drive kinematics of the Turtlebot 2. However, I couldn't find the distance between the two wheels (the wheel separation) on the datasheet. I'd appreciate it if anyone could provide me with this dimension. Thanks.
I'm currently working on the iRobot Create 2 platform using a RaspberryPi (Python) and ROS. I have an indoor navigation/ GPS system, which can provide me with x,y coordinates within its coordinate system. I have thus far implemented a Kalman filter in MATLAB for practice and now am trying to implement on the Create. I also have an IMU but haven't yet implemented that. I'm currently trying to figure out how to subscribe to the topics from the Marvelmind indoor nav system (but that's a different issue). My Kalman filter is using [x, y, xdot, ydot], and I believe those should be in the global frame (which I'm taking to be the coordinate system provided by the Marvelmind indoor nav system). That being the case, I can easily get my x and y position from that system; however, I'm not sure what to do about the xdot and ydot. Currently, I have that information from the Create odometry (Twist msgs), but those are in the local frame (since the robot can only go in the x (forward) direction and can't go in y (side to side)). Do I need to transform the local to the global? If yes, do I need to use the IMU to get the angle to use for the transformation? I feel like I have many pieces, but I'm just not sure how to piece them together. Any advice would be appreciated. Thank you!
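Concretely, the piece I think I am missing is just the body-to-global rotation of the velocity; my current sketch of it (plain Python; the heading would come from the IMU or odometry yaw, aligned once to the Marvelmind axes, which is an assumption on my part) is:

import math

def body_to_global_velocity(v_forward, theta):
    """Rotate the robot's forward speed (from the Create's odometry twist)
    into global-frame velocity components, given the robot's heading theta
    in the global (Marvelmind) frame."""
    xdot = v_forward * math.cos(theta)
    ydot = v_forward * math.sin(theta)
    return xdot, ydot

# e.g. driving forward at 0.2 m/s while heading 90 deg from the global x-axis:
print(body_to_global_velocity(0.2, math.pi / 2))   # ~(0.0, 0.2)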
I am trying to design a system to measure snow depth during snow production on a tiny ski resort in Denmark. My idea is simple: Drive a rover over the ground during the summer, and obtain height data with lat long. During snow production, drive it again, and compare the height data with the height data from the summer. Provide snow depths to the people working, such that snowproduction can be optimal. However, I need 1-2 cm accuracy in the vertical direction for this to be useful - and my readings have led me to believe I need to use a base station and RTK to get the needed accuracy. I expect the rover speed to be no more than 2 m/s (although higher speeds would be great, of course). I will (try to) design and build the rover to fit the project, so weight is not an issue, and I can probably add pretty large batteries to it. (The rover will need to reliably climb 30% snow covered inclines, and be able to either use GSM for communications or be able to communicate at least 500 meters using radio. It will also need to be robust enough to be covered in artificial snow during operation, etc.) I have read a lot of blog posts and whatnot, that wants to use RTK lib and low cost receivers, but this question : Low-cost centimeter accurate satellite positioning (GNSS/GPS) - seems to suggest that this product : http://navspark.mybigcommerce.com/ns-hp-gl-gps-glonass-rtk-receiver/ - is sort of the solution to all my problems, wrapped in a simple package? (Its sort of cheap too, I think, at least not very expensive.) The question is from 2014 though, and it seems a lot have happened in this space since then? I am sorry to ask a question that may be somewhat opinion based, but would a solution based on two NH-HP-GL be suitable for this project? (Denmark is in Europe, so I reckon it would have to be the GL model, right?). If yes, any recommendations on models, given the target speed of 2 m/s? Also: I have not decided on what computing platform to base the rover on - any comments on Arduino or Raspberry (or entirely different) for this project? The biggest unknown in this project to me is the high accuracy GNSS, but radio communication from the base station to the rover is part of this and something I have not done with neither Arduino or Raspberry - so thanks for helping out.
While researching different physics engines for dynamic simulations in robotics I found the following statement on the MoJoCo documentation website: Physics engines have traditionally separated in two categories. Robotics and biomechanics engines (MATLAB Robotics Toolbox, SD/FAST, OpenSim) use efficient and accurate recursive algorithms in generalized or joint coordinates. However they either leave out contact dynamics, or rely on the earlier spring-damper approach which has fallen out of favor for good reason. Gaming engines (ODE, Bullet, PhysX, Havoc) use the modern approach where contact forces are found by solving an optimization problem at each time step. Link: http://www.mujoco.org/book/index.html What is the "good reason" for spring-damper not commonly being used anymore to model contacts in dynamic simulations?
I am building my first quadcopter and have multiple questions. I am using a Crius MWC Multiwii SE 2.6 flight controller which will get input from a Raspberry Pi Zero. I have this 4in1 ESC and this PDB. The battery is LiPo 4S 45C 1550mAh. 1.) Since the 4in1 ESC has one output for each of the 4 motors. Can I connect these to the corresponding inputs on the MultiWii that would normally be used if I had 4 separate ESCs? When using 4 separate ESCs each would have additional Ground and Voltage which would be missing when using the 4in1 ESC. My idea was to connect the 5V input and GND of the MultiWii to the PDB but I am not sure if this will suffice in powering the entire MultiWii. I want to connect the Raspberry to the MultiWii via TX/RX Pins, same question as as before: Do I need additional GND and 5V here if I connect the entire Multiwii to the PDB? 2.) I wanted to connect the Pi Zero also to the PDB. I read that its more secure to use PWR USB port. I am thinking of sacrificing an USB cable and soldering the ends to the PDB. Is this really more secure than using the GPIO Pins and can I power both, Raspberry and MultiWii through the PDB? Thanks in advance
For the head in a 2D plotter I have 4 gears in an X configuration in the same plane. The gears are driven by belts from the outside in different combinations. The top and bottom gear can be turned in opposite direction as indicated by green arrows. Or the left and right wheels can be turned in opposite directions as indicated by the red arrows. Both of those can happen simultaneously. The gears can also be driven all in the same direction as indicated by the blue arrows. Or all directions can be reversed. This is part of a larger construct and the belt movement already has effects there so they can't be changed. Now what I want to build is some mechanical contraption that turns the blue arrow movement into an upward (or downward if reversed) movement to rise (or lower) the pen of the plotter. If all wheels turn anti clockwise the pen should rise. If all wheels turn clockwise the pen should lower (or vice versa).
I am working on an ABB IRB120 simulation using MATLAB Simulink. I use the decoupling method to solve the inverse kinematics of the robot, but I have some questions related to the computation of the last three joints (the spherical wrist). I have read many books and found that the solutions for the last three joints (theta4, theta5, theta6) are not the same. My questions: 1. Aren't the formulas I find in books or on the internet all the same, or do they depend on the D-H parameters of the robot (how we choose alpha, whether special angles 0, pi, pi/2 are added to theta, ...)? If they depend on the D-H parameters, how should I choose the easiest frame assignment to solve the robot? For example: a. In the book Robot Modeling and Control (Siciliano), the final coordinate frame of the anthropomorphic arm's end-effector is chosen with X pointing upward (no special angle added to theta6). b. In RoKiSim, however, X points downward (pi is added to theta6). I have looked at other papers and seen yet other formulas. Why do they choose these frame assignments, and how were those formulas derived? 2. When I run the simulation, I see that at some points theta4 and theta5 jump by 180 degrees (change of sign) even though the end-effector pose is the same. 3. Is there any way for me to check whether my code is correct (sample code / papers / software...) so that I know I'm still on the right track? I have built the robot and am ready to implement the code at any time; the controller is done and the only thing left is the kinematics itself. Should I keep improving my decoupled inverse kinematics, move to a full analytical solution, or use another method that suits me? Please give me advice on this (I use a 32-bit ARM chip for the controller).
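To be concrete about what I mean by the formulas differing: for one common frame assignment, where $R_3^6$ reduces to a ZYZ Euler rotation, the wrist angles can be extracted roughly as below. This is only my own sketch, and the element indices and signs are tied to that particular D-H choice, which is exactly what my question is about:

import math

def wrist_angles(R36, flip=False):
    """Spherical-wrist angles from R36 for one common (ZYZ-style) D-H choice.
    The indices and signs below change with a different D-H table.
    `flip` selects the second branch (theta5 -> -theta5)."""
    r13, r23, r33 = R36[0][2], R36[1][2], R36[2][2]
    r31, r32 = R36[2][0], R36[2][1]
    s5 = math.sqrt(r13 * r13 + r23 * r23)
    if not flip:
        th4 = math.atan2(r23, r13)
        th5 = math.atan2(s5, r33)
        th6 = math.atan2(r32, -r31)
    else:
        th4 = math.atan2(-r23, -r13)
        th5 = math.atan2(-s5, r33)
        th6 = math.atan2(-r32, r31)
    return th4, th5, th6

# identity wrist orientation, first branch (degenerate case: only th4+th6 is defined):
print(wrist_angles([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))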
I am currently trying to run the test cow and lady dataset with Voxblox. The process appears to execute properly in the terminal, but I cannot view the mesh in RViz. When I try to add the topic voxblox_node/mesh to RViz, the topic name is greyed out and I cannot add the topic. This topic is listed under "unvisualizable topics". In Rviz, I tried creating a topic of type MarkerArray and manually typing in the topic name to add it. This does not work and I get the following error: process[voxblox_node-2]: started with pid [18452] [ INFO] [1523646547.806476760]: Opening /media/ubuntu/jetsontx2b/data/voxblox/data.bag [ INFO] [1523646548.104770697]: Static transforms loaded from file. T_B_D: 0.971048 0.15701 -0.180038 -0.000432202 -0.120915 0.973037 0.196415 -0.0522007 0.206023 -0.168959 0.96385 -0.0341353 0 0 0 1 T_B_C:1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 [ERROR] [1523646548.172363508]: Client [/rviz_1523646021870801813] wants topic /voxblox_node/mesh to have datatype/md5sum [visualization_msgs/MarkerArray/d155b9ce5188fbaf89745847fd5882d7], but our version has [voxblox_msgs/Mesh/ca70fabcc211b61f795cb5e7c8210eb6]. Dropping connection. [ INFO] [1523646549.056998208]: Updating mesh. The Voxblox documentation states that the node voxblox_node publishes: mesh of type visualization_msgs::MarkerArray. A visualization topic showing the mesh produced from the tsdf in a form that can be seen in RViz. Why doesn't Rviz recognize mesh as a MarkerArray? What do I need to do to get Rviz to visualize /voxblox_node/mesh? Thanks.
We are very new to robotics and are dealing with a power delivery issue for our motors. We are using two 12V DC motors to drive our robot, controlled through an L293D motor driver, but the motors are not receiving enough power to produce enough torque. I measured the voltage at the motors and it is 3.7V. For power we are using a 9V DC battery, and an Arduino UNO board as our microcontroller. Full list of parts used: Arduino UNO, 9V battery, L293D motor driver, UV sensor. How can we deliver additional power directly to the motors through the L293D motor driver? Please help. Code: https://github.com/pantharshit00/robo
I am confused about autonomous drone flight, so I am asking here in the hope that a drone expert can help me understand. I have heard about two kinds of drones: drones that are flown through a controller based on a radio transmitter, and GPS-based drones that work together with an autopilot, which makes autonomous flight possible. Is there a third category, or do I understand this wrong? I mean, if you have an autopilot, can you then fly autonomously? Do radio-transmitter-based drones usually lack an autopilot and GPS?
I am planning to build a beginner-level project, a home-made robot with which I want to explore image recognition and path recognition (using a four-wheeled vehicle). So I am looking to buy a device like a Raspberry Pi. What other equipment or hardware should I buy along with the Raspberry Pi? And which is better for robotics using AI, an Arduino or a Raspberry Pi?
I'm making a consumer camera prototype and want to make the front look as clean as possible. Right now I use a completely transparent material and the camera and the IR Lights (with their red hue) are clearly visible. I'd like to move to a dark (as black as possible) material that makes the cover look flush and hides the sensors as much as possible. What materials should I consider, if I want to minimize the impact on what the sensors perceive?
I have written a simple RRT planner, but I am not sure how to apply it to robotic arm path planning. The issue is the absence of an analytical solution to the inverse kinematics problem. Let me explain. There are two possible spaces in which the path can be planned:

Joint space. Since we know the exact joint angles at each planning step, it is easy to account for end-effector orientation and collisions. However, it requires the goal to be defined in joint space, which requires an inverse kinematics solution, and I am not sure how to solve IK for exact angles without an analytical solution, which does not always exist. I use the inverse Jacobian method, which is iterative and requires small time steps (i.e. it produces a trajectory, not a geometric path) to move precisely, so it is not clear how to use it to compute the goal configuration in joint space.

Operational space. This does not require the goal to be defined in joint space; however, at each planning step it is still necessary to solve IK in order to account for orientation and collisions. The only way I can see is to make the RRT step small enough that IK can be computed iteratively at each planning step, but that raises doubts about performance.

Question: how can I account for orientation and collisions (including self-collisions) when planning motion without an analytical solution to the inverse kinematics problem?
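For reference, the joint-space variant I have in mind looks schematically like this (a pure sketch: `ik` stands for an iterative solver such as inverse Jacobian, and `collision_free` for whatever kinematics/collision library ends up being used):

import random
import numpy as np

def rrt_joint_space(q_start, goal_pose, ik, collision_free, joint_limits,
                    step=0.05, iters=5000, goal_bias=0.1):
    """Joint-space RRT sketch. `ik` is used once up-front to turn the
    Cartesian goal into a goal configuration; after that, planning is purely
    in joint space and collision_free(q) checks the full arm, including
    self-collision."""
    q_goal = ik(goal_pose, seed=q_start)        # may fail and return None
    nodes, parents = [np.asarray(q_start, dtype=float)], {0: None}
    for _ in range(iters):
        if q_goal is not None and random.random() < goal_bias:
            q_rand = np.asarray(q_goal, dtype=float)
        else:
            q_rand = np.array([random.uniform(lo, hi) for lo, hi in joint_limits])
        i_near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - q_rand))
        direction = q_rand - nodes[i_near]
        q_new = nodes[i_near] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if collision_free(q_new):
            parents[len(nodes)] = i_near
            nodes.append(q_new)
            if q_goal is not None and np.linalg.norm(q_new - q_goal) < step:
                return nodes, parents           # path found; trace back via parents
    return None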
I'm in the process of learning Tensor Flow, and I really want to use it on a new robot I'm going to build. I want the robot to be able to do image recognition, and move towards an image of interest, or follow a path, using neural networks. Is this even possible with tensor flow? Is the Arduino capable of running neural network frameworks, such as Tensor Flow? Thanks.
I have an iRobot Create 2 and am currently using the create_autonomy package. Currently, they have cliff sensors as a "planned" feature. I see in the documentation that there are 4 cliff sensors (which can return 0 (no cliff) and 1 (yes cliff)) and their "Packet ID" numbers. It also mentions this note for each sensor: "NOTE: This packet is a binary version of the “Cliff Front Left Signal” (ID: 29) packet." I don't understand what this means or if it's relevant to what I'm doing. How can I read this cliff sensor data (in order to publish a topic with it)?
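For what it's worth, my current reading of the OI spec is that packets 9-12 are the one-byte boolean cliff flags (left, front left, front right, right) and packets 28-31 the corresponding two-byte analog signal strengths, so the note just means packet 10 is the thresholded version of packet 29. Outside of create_autonomy, a raw query would then look something like the sketch below (pyserial; the port, opcode 142 and packet IDs are as I understand the spec, so please correct me if that's wrong):

import serial
import struct

PORT = '/dev/ttyUSB0'                               # placeholder
CLIFF_BOOL_IDS = {'left': 9, 'front_left': 10, 'front_right': 11, 'right': 12}

with serial.Serial(PORT, 115200, timeout=0.1) as ser:
    ser.write(bytes([128]))                         # Start -> Passive mode
    for name, pid in CLIFF_BOOL_IDS.items():
        ser.write(bytes([142, pid]))                # Sensors opcode + packet ID
        val = ser.read(1)
        print(name, val[0] if val else None)        # 1 = cliff seen, 0 = no cliff

    # the analog version of "Cliff Front Left" mentioned in the note:
    ser.write(bytes([142, 29]))                     # Cliff Front Left Signal, 2 bytes
    raw = ser.read(2)
    if len(raw) == 2:
        print('front_left_signal', struct.unpack('>H', raw)[0])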
I want to use information received from several different ROS subscribers together. One way I can think of is to create a separate callback function for each subscriber, store the contents of the incoming messages in global variables, and then use another function to work on those variables. Is there a better way to do this?
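For illustration, this is the kind of pattern I am comparing against: an approximate time synchronizer that hands matched messages to one callback (topic names and message types here are placeholders for mine):

#!/usr/bin/env python
import rospy
import message_filters
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry   # placeholder message types

def combined_callback(imu_msg, odom_msg):
    # both messages arrive together, matched by header timestamp,
    # so they can be fused here without any global variables
    rospy.loginfo("imu stamp %s / odom stamp %s",
                  imu_msg.header.stamp, odom_msg.header.stamp)

if __name__ == '__main__':
    rospy.init_node('combiner')
    imu_sub = message_filters.Subscriber('imu/data', Imu)
    odom_sub = message_filters.Subscriber('odom', Odometry)
    sync = message_filters.ApproximateTimeSynchronizer(
        [imu_sub, odom_sub], queue_size=10, slop=0.05)
    sync.registerCallback(combined_callback)
    rospy.spin()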
I am running ROS Kinetic, on a Turtlebot3 Burger with a Raspberry Pi3. The goal here is to remotely control the Turtlebot from another Raspberry Pi. The remote controller Pi sends movement information over a socket connection. I have delay problems when publishing into /cmd_vel (60 seconds delay) and again delay of 60 seconds from when the Twist message is echoed in the topic /cmd_vel to when the Turtlebot starts moving. I cannot locate why this 2 x 60 seconds delay occurs. I think that I might be pushing the raspberry pi too much. If I do the following steps: The Turtlebot3 is the ROS MASTER. 1. From remote SSH into the Raspberry Pi --> roslaunch turtlebot3_bringup turtlebot3_robot.launch 2. From remote SSH into the Raspberry Pi -- >roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch Everything works fine no delays. When I do: The Turtlebot3 is the ROS MASTER. 1. From remote SSH into the Raspberry Pi --> roslaunch turtlebot3_bringup turtlebot3_robot.launch 2. From remote SSH into the Raspberry Pi -- >python post_office.py //python script posted below. 3. From remote SSH into the Raspberry Pi -- >rostopic echo /cmd_vel The messages get into /cmd_vel with a 60 seconds delay. The Turtlebot3 starts moving after 120 seconds delay. When I do: The REMOTE PC is the ROS MASTER. 1. roscore on remote pc 2. From remote SSH into the Raspberry Pi --> roslaunch turtlebot3_bringup turtlebot3_robot.launch 3. From remote SSH into the Raspberry Pi -- >python post_office.py //python script posted below. 4. Remote pc -- >rostopic echo /cmd_vel The messages get into /cmd_vel with a 0 seconds delay. The Turtlebot3 starts moving after 35 seconds delay. env | grep ROS gives the following when Turtlebot3 is ROS MASTER: ROS_ETC_DIR=/opt/ros/kinetic/etc/ros ROS_ROOT=/opt/ros/kinetic/share/ros ROS_MASTER_URI=http://192.168.1.39:11311 ROS_PACKAGE_PATH=/home/pi/catkin_ws/src:/opt/ros/kinetic/share ROSLISP_PACKAGE_DIRECTORIES=/home/pi/catkin_ws/devel/share/common-lisp ROS_HOSTNAME=192.168.1.39 ROS_DISTRO=kinetic Here is the post_office.py running on the Turtlebot3. import socket import json import rospy from geometry_msgs.msg import Twist import thread import time HOST = '' PORT = 50007 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) s.listen(1) conn, addr = s.accept() print ('Connected by', addr) twist = Twist() lin = 0.0 ang = 0.0 pub = rospy.Publisher('/cmd_vel', Twist, queue_size=5) #queqe size can be adjusted maybe def publish_cmd_vel(): while not rospy.is_shutdown(): #checking the rospy.is_shutdown() flag and then doing work. You have to check is_shutdown() to check if your program should exit (e.g. if there is a Ctrl-C or otherwise). twist.linear.x = lin; twist.linear.y = 0; twist.linear.z = 0 #liniar has to be .x value to change twist.angular.x = 0; twist.angular.y = 0; twist.angular.z = ang #angular has to be .z value to change try: pub.publish(twist) except: print('unable to publish') def recv_from_controller(): while True: global lin, ang move_bytes = conn.recv(1024) #receive information as bytes move_info = json.loads(move_bytes) #decode into a dictionary lin = move_info['lin'] ang = move_info['ang'] print(type(lin), lin, type(ang), ang) thread.start_new_thread( recv_from_controller, ()) rospy.init_node('post_office', anonymous = False) while True: publish_cmd_vel() Any ideas to why I get these delays?
I am developing C++ code to estimate roll and pitch of a camera using accelerometer and gyroscope. The roll, pitch and yaw are in my state space ($X_t$) and the process is modeled as: $\bar{X_t} = X_{t-1} + Eu$ Here $u$ is a vector of gyro rates in x, y and z axis while $E$ is the matrix to convert gyro rates to Euler rates. Note that even though yaw is part of the state space, it is not corrected by accelerometer. It is present so that the short term reliability of gyros can be made use of, in the future development of the project. Now, the covariance matrix($P_t$) is calculated as: $\bar{P_t} = GP_{t-1}G^T + Q$ and the Kalman gain($K_g$) is calculated as: $K_g = PH^T(HPH^T + R)^{-1}$ The $Q$ is the process noise matrix and $R$ is the observation noise matrix. The $G$ is the Jacobian of process model. The observation model $H$ is an identity matrix (This model gets multiplied by state space vector to produce predicted observation. The actual observation contains roll and pitch angles calculated from accelerometer, and yaw angle which simply is a copy of predicted yaw). I am getting good results in most poses of the camera. However, when one of the angles (roll or pitch) is close to zero, I see drift or zero crossing patterns in the plots of roll or pitch. Can I avoid them by better modeling or tuning parameters? I would like to know if there are any systematic methods for modeling: 1) The process noise $Q$? I am currently using a Gaussian matrix with mean = 0 and std deviation of 1. I used this as reference. How can I model them better? What should be its order of magnitude? 2) All diagonal values of $P$ are set to an initial value of 0.05 to represent uncertainty in initialization. The initialization of state space vector is done by calculating roll and pitch values from initial readings of accelerometer. The yaw is initially set to 0. I prefer the accelerometer to be trusted more than gyro. Hence the initial uncertainty, 0.05, is a value lower than the lowest value(0.059) in $Q$. Is this a good approach? Are there better ways? 3) The observation noise $R$? Right now, I have calculated std deviation of accelerometer readings in x, y and z. I use their square as first, second and third diagonal elements of $R$. The rest of the values in the matrix are 0. As I type this question, I have realized that I should first convert accelerometer readings to roll, pitch angles before calculating std deviation. However, are there better suggestions to model this? Edit: 1) In the above situation only either roll or pitch are close to zero. Not both. 2) My intention is to code the Gaussian noise models in C++, rather than using readily available functions in Matlab or Excel, if they are the most suitable models. I am also looking for suggestions on better models, if any.
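Regarding point 3, this is the kind of conversion I now intend to do before taking the standard deviation (sketched in Python for brevity, trivially portable to C++; the sign conventions depend on the IMU axis definitions, which I still have to check against my device):

import math

def accel_to_roll_pitch(ax, ay, az):
    """Roll and pitch implied by a quasi-static accelerometer reading.
    These are the usual aerospace-style formulas; signs depend on the IMU axes."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch

def angle_noise_std(samples):
    """Std deviation of roll/pitch over a batch of static accel samples;
    the squares of these would go on the diagonal of R."""
    angles = [accel_to_roll_pitch(*s) for s in samples]
    n = len(angles)
    means = [sum(a[i] for a in angles) / n for i in (0, 1)]
    return [math.sqrt(sum((a[i] - means[i]) ** 2 for a in angles) / n) for i in (0, 1)]

# e.g. a few noisy readings around level, in g:
print(angle_noise_std([(0.01, -0.02, 1.00), (0.00, 0.01, 0.99), (-0.01, 0.00, 1.01)]))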
If we just examine the translational dynamics (the X and Y directions), the dynamics are coupled, so if I design a PID controller to control the position, the outcome will be unpredictable because of the coupling. In some research papers I have seen people assume $\psi = 0$ at the equilibrium point to remove the coupling in the x and y dynamics. But I would like to design a controller for the coupled system. Is there a solution, such as decoupling the dynamic equations, or something similar?
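For concreteness, and assuming this is the usual quadrotor translational model (my own statement of it, not taken from a specific paper): with total thrust $u_1$ and mass $m$,

$$\ddot{x} = \frac{u_1}{m}\left(\cos\phi\,\sin\theta\,\cos\psi + \sin\phi\,\sin\psi\right), \quad \ddot{y} = \frac{u_1}{m}\left(\cos\phi\,\sin\theta\,\sin\psi - \sin\phi\,\cos\psi\right), \quad \ddot{z} = \frac{u_1}{m}\cos\phi\,\cos\theta - g,$$

and with $\psi = 0$ and small angles this reduces to

$$\ddot{x} \approx \frac{u_1}{m}\,\theta, \quad \ddot{y} \approx -\frac{u_1}{m}\,\phi, \quad \ddot{z} \approx \frac{u_1}{m} - g,$$

so each horizontal axis is driven by a single tilt angle, which is what makes independent position loops tractable under that assumption.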
I try to understand some MATLAB code (see below) i found from the web for inter-robot collision avoidance. But I'm not really understand, how the theory behind this code work. Can anyone help me explained the theory behinds this code? % Collision avoidance using distance-based velocity scaling (scale by alpha) % along the vector between an agent and the agent nearest to it % 1 | _______ % alpha | / % | / % 0 |___ / % 0 R2 R1 ++++ % distance % u: velocity % p: position function u = avoidCollisions(states, n, u) R1 = COLLISION_RAD*3; R2 = COLLISION_RAD; dist = Inf; % checking for the nearest robot for j = 1:N d = norm(states(n).p-states(j).p); if (j ~= n) && (d < dist) k = j; dist = d; end end pDiff = states(n).p - states(k).p; if dist > R2 alpha = (dist-R2)/(R1-R2); else alpha = 0; end if dot(u, pDiff) < 0 v2 = project(u, [pDiff(2); -pDiff(1)]); u = alpha*u + (1-alpha)*v2; end end % Vector projection of b onto a function v = project(b, a) v = dot(a, b)/sum(a.^2)*a; end
I am preparing for the Trinity Firefighting Robot Contest (America), and I need a fast legged robot within a maximum size of 30cm x 30cm x 30cm that uses RX-24 servos from Robotis. The arena will be a maze and perfectly flat. My robot's upper body is perfectly symmetric, so its lower body (the legs) doesn't need to turn to face another direction, but it still needs to change its direction of motion to maneuver inside the maze.
It is easy to understand how a robot can avoid an obstacle if the goal point is not within the obstacle (Point A), as shown in the figure. But I'm curious how a robot could know that the goal point is within an obstacle (Point B). For example, in area exploration using random motion, the way-point is generated randomly. If the generated way-point is located inside an obstacle, how could the robot know? How can this situation be detected?
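To make the question concrete, the kind of check I imagine (but am unsure is the right approach) is a simple lookup of the sampled way-point in an occupancy grid built from the robot's map; a sketch with a made-up grid:

import numpy as np

# 0 = free, 1 = occupied; a made-up 10 m x 10 m map at 0.1 m resolution
RESOLUTION = 0.1
grid = np.zeros((100, 100), dtype=np.uint8)
grid[30:50, 40:70] = 1            # a rectangular obstacle

def waypoint_blocked(x, y):
    """True if the sampled way-point (in metres, map frame) falls inside an
    occupied cell, or outside the known map."""
    i, j = int(y / RESOLUTION), int(x / RESOLUTION)
    if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
        return True
    return bool(grid[i, j])

print(waypoint_blocked(5.0, 4.0))   # inside the obstacle -> True
print(waypoint_blocked(1.0, 1.0))   # free space -> False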
When thinking about robots that emulate animals, I would think roboticists would try to emulate humans which have distinctly forward jointed legs. Yet, I see in many instances that robots have more bird-like legs with shorter femurs. Why are robots designed with these legs? What advantage do they provide over more human-like legs?
I thought marginalization does not change the number of edges, but this material (page 11) says that as a result of marginalization we will have more edges in the graph. Why does it increase the number of edges? For those who do not want to open the page, it reads as follows.

Marginalization: Cons
Con: More edges in graph
- Feature with N observations leads to O(N²) edges
- Slower/harder to solve (Information Matrix less sparse)
Our team is planning to go for European Rover Challenge (ERC) 2018 for which we are designing an Autonomous rover. In order to implement SLAM, we need a way of mapping (with a range of atleast 2-4 meters) for which we considered using a LIDAR or a stereo cam. Could anyone suggest a good, cheap and accurate enough LIDAR, Stereo cam or some other option by which we could achieve this?
I'm looking to start a project that incorporates some form of state estimation and path planning for a simple simulated robot dynamic model, in an environment that contains obstacles. I'm hoping to use the combination of state estimation and path planning to allow the robot to efficiently navigate through its environment from an arbitrary starting position A, to another arbitrary ending position B, but was unsure where to start. With regards to the state estimation, I thought it would be good to implement a variant of SLAM (possibly Fast SLAM if it isn't too complicated), but I'm quite lost about where to start when it comes to the path planning side of the project, since there seem to be many different ways to do it. The first algorithms that seem to pop up are variants of A* and RRT*, but I was wondering if there are any "state-of-the-art" algorithms that may allow for real-time path planning. My previous work has looked at the use of convex optimization for optimal guidance and control of various dynamic systems, but it seems that using convex optimization would be very difficult in a highly constrained environment (i.e. environment with lots of obstacles). Any help would be much appreciated.
I would like to freely move my UR10 arm in response to an external force, just like the zero-g mode for Baxter robot, which can be activated by holding its wrist. The Baxter documentation on zero-g mode says the following: Zero-G mode can often be confused with the mode obtained by disabling the gravity compensation torques. By default, the gravity compensation torques will always be applied when the robot is enabled. In Zero-G mode, the controllers are disabled and so the arm can be freely moved across. In this case, the effect of gravity would be compensated by the gravity compensation model applying gravity compensation torques across the joints, there would be no torques from the controllers since they would not be active, and so the arm can be moved freely around, hence the name. So apparently this means that the controller torques need to be disabled in order to achieve the zero-g mode. I am using ur_modern_driver for my UR10 arm. Any ideas on how can I implement this mode with the running modern driver?
I remember seeing a video of the Prusa i3 Mk3 printer. It can detect when the stepper motor misses a step, so it can home itself again. I would like to know how this is done! There are a few options I can imagine: This can be done using standard stepper motors and stepper motor drivers, the detection is done entirely in software. This can be done using standard stepper motors, but requires some extra circuitry as well as software. This can only be done with special stepper motors and other components and special software. Which option is correct? Any of the above, or something completely different? If you have any references, such a links, articles or blog posts on how to do missed step detection, it would be very interesting also!
It should be possible to take the formal description of robotics components (in form of ontology, formal API descriptions or other kind of formal specification of the capabilities and requirements of this component) and import them in universal design studio and then use this studio for the integration and the building of the final robot of the final robotics systems. There are BPEL, component service architecture and business services in software development that offers to build the final software by simply integrating available services. So - my thought is - that there should be similar ecosystem for building robots and robotic systems. I have heard about ROS and that each component provider tries to create ROS API/interface for its own component to facilitate the easy use of its component from the ROS-based robot. I am looking something like that but with self-description capabilities (as BPEL services self-describe themselves). And I am looking also for the Integrated Development Environment that can use those component descriptions, that can import them, that can model and simulate and build (compile) the final robot desing. There are lot of CAD software for architecture, for mechanical design, for EM design but is there CAD for final multipurpose robot? May dream is to use off-the-shelf components and compose in optimal way using https://math.stackexchange.com/questions/1083338/structural-design-meta-optimization-is-there-mathematical-theory-optimiza and https://en.wikipedia.org/wiki/Structured_prediction In some cases I dream to achive the optimal robot and in some cases I dream to arrive the formal specification of the component that is not yet available but whose design (e.g. for 3D printing or other kind of manufacturing) I can derive from design of my robot. I know that Stack prohibits asking for direct recommendations of some software of components but in my case I am not asking for direct recommendation. I have idea about workflow of robot development and I am just interested: How acceptable and desirable is such workflow?; Is such workflow already implemented in one or other way?; Maybe my idea about workflow is the failure and I should look for other kind workflow (what?) which is already implemented in the community and which achieve the same goals that my idea.
Hi all, I am stuck on a problem and really need your help.

Goal: I want to move a little toy vehicle autonomously (localization and mapping) and get the distance covered by the vehicle using DSO in real time. For now I want to move the vehicle in a 2x2 meter square using dso_ros.

Work done: I am getting the coordinates of the live camera using dso_ros. I get a matrix (camToWorld.matrix 3x4() in publishPoseCam in SampleOutputWrapper.h), where the first 3x3 block is the rotation and the last 3x1 column is the translation (x, y, z, if I'm not wrong). When I move the camera straight forward from any position, I see a change in the X coordinate (it starts at 0.000 and ends at 0.7 when the camera has moved 2 meters forward). Then I stop, take a turn, move forward another 2 meters, and so on (to trace the 2x2 meter square).

Problem: When I take the first turn (left/right) and then move forward, no coordinate changes or updates; DSO keeps giving me coordinates, but with only very minor changes. I don't know which coordinate I should use to calculate distance after taking the turn, so I get lost after the first turn and am not able to trace the square. Please help me and point out where I am going wrong, I would be very thankful. I have been stuck on this for the past 2 weeks. Thanks.
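This is how I am currently accumulating distance from the published poses (a sketch with made-up pose values; I simply take the last column of each 3x4 matrix as the translation, and as I understand it monocular DSO's scale is arbitrary, which would explain why 2 m shows up as ~0.7 units):

import numpy as np

def track_distance(cam_to_world_mats):
    """Accumulate path length from a stream of 3x4 camToWorld matrices by
    summing the norms of consecutive translation differences. (Monocular DSO
    is only up to scale, so this is in DSO units, not metres, unless it is
    first scaled against a known distance.)"""
    total = 0.0
    prev_t = None
    for M in cam_to_world_mats:
        t = np.asarray(M)[:3, 3]          # last column = camera position
        if prev_t is not None:
            total += np.linalg.norm(t - prev_t)
        prev_t = t
    return total

# made-up example: move along +X, then turn and move along +Y
poses = [np.hstack([np.eye(3), [[x], [0], [0]]]) for x in (0.0, 0.35, 0.7)]
poses += [np.hstack([np.eye(3), [[0.7], [y], [0]]]) for y in (0.35, 0.7)]
print(track_distance(poses))   # ~1.4 DSO units in total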
I'm trying to implement an Ackermann motion model which estimates the x, y and theta of a robot I have. I have a Gazebo simulation running which publishes a steering angle for the virtual tricycle wheel, and I have a linear velocity for the back wheel. I publish the ground-truth odometry and transform, which I can display in RViz. I then use the published values to compute a deltaX, deltaY and deltaTheta for a pose update. If the steering angle is 0 it works fine, as deltaX = linear velocity. However, when both values are non-zero, my robot moves in an arc but my motion model turns on the spot instead. The formulae for motion estimation and pose update are taken from "Simultaneous Localization and Mapping for Mobile Robotics" by Juan Antonio Fernandez-Madrigal. Any ideas where I have made a mistake?

Edit: I will provide the formulae I use here.

$u = (v, \alpha)$, where $v$ is the linear velocity from the driving wheel and $\alpha$ is the steering angle (in radians)

$l$ = wheel base, i.e. the distance between the front and back wheels

$\omega = \frac{v \sin(\alpha)}{l}$

$dx = \frac{l \sin(\omega)}{\tan(\alpha)}$

$dy = \frac{l (1-\cos(\omega))}{\tan(\alpha)}$

Pose update:

$x_{new} = x + (dx\cos(\theta) - dy\sin(\theta))\,dt$

$y_{new} = y + (dx\sin(\theta) + dy\cos(\theta))\,dt$

$\theta_{new} = \theta + \omega\,dt$

Of course, if $\alpha = 0$ then $dx = v$.

Edit #2: I fixed the problem, but I do not understand the solution. The first issue was that I did not incorporate the linear velocity into my equations when rotating. This gives:

$dx = \frac{v\,l \sin(\omega)}{\tan(\alpha)}$

$dy = \frac{v\,l (1-\cos(\omega))}{\tan(\alpha)}$

Now the robot drives in an arc! But the arc is wrong. So I changed the update equations to:

$dx = \frac{v\,l \cos(\omega)}{\tan(\alpha)}$

$dy = \frac{v\,l \sin(\omega)}{\tan(\alpha)}$

The rotational arc now seems correct. The rotational speed is still lower (not such an issue), but I don't understand why this rotation works, i.e. why the $1-\cos(\omega)$ factor gets changed to $\sin(\omega)$, etc.
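For anyone who wants something to compare against: this is the small standalone script I use as my reference, the textbook kinematic bicycle model (my own test code, not the Gazebo setup or the book's formulation; the wheel base and commands are made up):

import math

def bicycle_step(x, y, theta, v, alpha, L, dt):
    """One Euler step of the textbook kinematic bicycle model:
    theta_dot = v*tan(alpha)/L, x_dot = v*cos(theta), y_dot = v*sin(theta)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(alpha) * dt
    return x, y, theta

# made-up numbers: 1 m wheel base, 0.5 m/s, 20 deg steering, 10 s of motion
x = y = theta = 0.0
for _ in range(1000):
    x, y, theta = bicycle_step(x, y, theta, v=0.5, alpha=math.radians(20), L=1.0, dt=0.01)
print(x, y, theta)   # the pose stays on a circular arc of radius L/tan(alpha) ~ 2.75 m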
For a long time I've been looking for some kind of programmable robotics kit, but most things I've found are either aimed at children's education or involve putting all the hardware parts together yourself. I've been working as a software engineer for a long time, so I have good knowledge of programming languages, including C/C++. I've taken a few steps with the Raspberry Pi, which was very interesting, but it soon turned out I'm not that interested in soldering hardware parts and the like. Can you give me some advice on where to start with robot programming for adults, without getting too deep into electronics or hardware engineering if possible, and concentrating on programming instead? Any advice would be great. Thanks.