I have known for a while now that robotics is something I am very passionate about. I'm beginning my studies at university and am trying to decide on the best major to prepare me for advanced robotics work.
I'm currently deciding between Math/CS, ECE/CS, and Stats/CS.
I know that Mech E is not for me.
Thoughts on the matter? What is the best for general development? Research/ theoretical? Applied robotics?
Specifically I would love to hear how each major may be more useful to each aspect of robotics development.
Thanks
I understand this post lacks a specific problem, but I figured I might ask anyway given the collective knowledge of this community - if there is a more appropriate place to post this, please let me know!
|
Texture is very helpful for stereo matching. However, in real environments, untextured areas always exist. Consistent Semi-Global Matching was proposed to deal with untextured regions, so I have started reading the paper Stereo Vision in Structured Environments by Consistent Semi-Global Matching. Fixed-bandwidth Mean Shift segmentation is used.
If someone has experience with Consistent Semi-Global Matching, I hope to learn whether the algorithm is good for untextured regions. How about the complexity of the algorithm? Is there any better way to deal with untextured regions?
|
I am currently working on a project for a Lego Mindstorms EV3 autopilot using Matlab/Simulink. Basically, I am using a closed loop control system with a PID controller for the control of the control surfaces. I'm using Simulink to construct the autopilot block diagram. The feedback loop consists of the gyro sensor. Now the gyro sensor is not accurate in the sense that it has some offset. It does not have any bias or noise. I need to get rid of the offset to give me the actual angle of the device. How could I solve this problem? I could use a low pass filter but how do I know what transfer function to use in Simulink?
|
How does one go about testing the robot once it is built? How does one predict the number of hours it can operate? I see most industrial robots, for instance robotic arms, have a warranty of 12-18 months. How did they arrive at such an estimate? Clearly testing for 12-18 months is not an option, so what is the procedure for determining the lifespan?
|
I found tons of answers about how to glue with a syringe, but I need to glue syringes together so they are airtight, and also glue some silicone tube to them, and I cannot find a hint on how to do it.
Does anybody know what glue sticks to a syringe (possibly by dissolving it a little)?
|
I am considering writing a program to communicate with a Thermo Scientific F3 robot arm in order to eliminate their obsolete C500C controller. Does anyone know the communications protocol to/from the F3? All I know is that it is RS485.
Thanks
|
I would like to build a simple mobile robot with differential wheels, and I am currently designing the wheel speed controller. After reading some papers, I noticed that to achieve straight-line motion, the linear speed and the angular speed of the mobile robot have to be controlled at the same time, which makes the system a multi-input-multi-output (MIMO) system. I plotted two different controller structures I came across while reading. Both of them have angular speed feedback and control, but one has linear speed feedback and control and the other does not. In the picture, Gl(s) and Gr(s) refer to the motor transfer functions, and vl and vr are the measured wheel speeds.
Would anyone please suggest which controller structure is more reasonable and can achieve better straight-line motion?
Updates
v* and w* are linear speed reference and angular speed reference respectively that could either be fixed values or come from trajectory generation; corrected a typo in the motor block in structure 2, as @Chuck pointed out, and changed G1(s) and G2(s) to Gl(s) and Gr(s) for better illustration.
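For reference, the kinematic relation I am assuming between the body references and the wheel references is the usual differential-drive one; a minimal sketch (the variable names are my own):

def wheel_speed_references(v_ref, w_ref, wheel_radius, track_width):
    """Map body linear/angular speed references to left/right wheel angular speeds
    for a differential-drive base (signs depend on your axis conventions)."""
    v_left = v_ref - 0.5 * track_width * w_ref    # left wheel linear speed
    v_right = v_ref + 0.5 * track_width * w_ref   # right wheel linear speed
    return v_left / wheel_radius, v_right / wheel_radius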
|
I have been looking around in the forums, but could not find any answer for how to go about getting the wheel odometry covariance matrix for a custom built planar robot (I found some posts related to EKF, but could not find any clear solution). I need this in order to for e.g. fuse wheel odometry with other types of odometry, etc... I really would only need the covariance for the global planar velocities (which are directly related to the encoders/inputs), and determining the position or acceleration covariances seem to be able to be derived from there.
I will try posting the answers to this as I find them, and perhaps this could also help future roboticists (hobbyists!?) who want to build a new mobile robot but may be confused about this.
Let's say I have a custom planar robot with N number of motors/wheels/encoders, and a defined kinematic model.
That is, I have a mapping:
(Vx, Vy, AngVel) -> ( W1, W2, ..., Wn)
where W's are each motor's angular velocities. I am not sure if the inverse mapping always exists, but I could assume that it does just for now.
By reading around the forums, I found that first we should calibrate for the systematic errors (e.g., due to unequal wheel diameters, etc.). For differential wheeled robots, this can be done with the UMBmark algorithm. This still does not give any specific information on how to get the covariance matrix, though.
I imagine there are two options, using a static covariance matrix (predetermined by calibration), or dynamically adjusting them (let's say through a Kalman filter).
A static covariance matrix is probably less accurate, but simpler to determine. However, I have no idea how to go about choosing these values (should I make the robot move back and forth several times and use the error as the Vx variance?). Are there any basic guidelines for filling up the covariance matrix statically?
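For the static case, the first-order propagation I have in mind is something like the following sketch, which assumes independent wheel-speed errors with standard deviations obtained from calibration (the function and variable names are just placeholders):

import numpy as np

def body_velocity_covariance(J_wheel, sigma_w):
    """Propagate per-wheel speed variances through the kinematic model.
    J_wheel: N x 3 matrix mapping (Vx, Vy, AngVel) to wheel speeds, i.e. w = J_wheel @ v.
    sigma_w: length-N vector of wheel-speed standard deviations from calibration."""
    J_pinv = np.linalg.pinv(J_wheel)              # least-squares forward map: v = J_pinv @ w
    Sigma_w = np.diag(np.asarray(sigma_w) ** 2)   # assumes independent wheel errors
    return J_pinv @ Sigma_w @ J_pinv.T            # 3x3 covariance of (Vx, Vy, AngVel)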
Another (more difficult) option seems to be to use a Kalman filter, and update the covariance matrices dynamically. But I am unsure what to choose as the inputs, nor white gaussian noise values for process/observation.
Imagine there's some sort of local controller that I just give desired angular velocities and it tries to produce them. Should I go as low level as defining my inputs as the currents, and then go through the motor model? Or should I just choose the inputs to be my commanded angular velocities?
But if my inputs are the desired angular velocities, then the state doesn't seem to depend on the previous state, which doesn't follow the Kalman filter convention for the process model (i.e., the new state would depend only on the input, since I am controlling the planar velocities directly through the wheels and they are not affected by the previous state)!?
At least the sensor model seems to be quite easy to derive from the kinematic model.
As you can probably see, I am extremely confused about something that most likely has to be determined for most mobile robots out there. I am finding little to no clear documentation, which is weird for such a (very common!?) problem. If anyone could point me in the right direction I'd be extremely happy!
Thanks!!
|
In the camera module assembly process, parameters of camera modules vary due to manufacturing tolerance.
Camera calibration is performed to obtain actual parameters.
In the paper Effects of camera alignment errors on stereoscopic depth estimates, the author analyzes the relative sensitivity/importance of camera calibration/alignment parameters on the performance of stereoscopic depth reconstruction.
For a dual-camera system, five sources of error are listed.
Binocular error effects:
depth error due to rotation/roll between two cameras
depth error due to pitch between two cameras
depth error due to yaw between two cameras
Monocular error effects:
depth error due to nonparallel CCD array and lens
depth error due to lens distortion.
In practical applications, camera rectification will use these parameters to align the two images. Why do we need to analyze the effect of the various errors?
|
I am trying to 3D reconstruct a room that has been freshly constructed but whose walls have not yet been plastered or painted/wallpapered. So far I have tried using mapping techniques (like RTAB-Map) on a Kinect v2, but they don't work, as these techniques rely on features to stitch point clouds.
I am currently looking at buying either the Structure Sensor or Google's Project Tango on the Phab 2 Pro. Since these are a little expensive (after shipping and customs), I want to be sure of a few things before I begin experimenting.
Do these sensors use something other than features to register point clouds (the phone's accelerometer, for example)?
Is one sensor better than the other at this job? If so, why?
If any one of you could somehow attach a point cloud or an image captured using these sensors, it would help a lot. Also, feel free to suggest better alternatives.
Thanks!
|
There are certain tracking devices for cameras on the market these days. Here is one example.
The concept is, you wear a tag or a wristband and the tripod knows to track you while you're out surfing or racing around a track or running back and forth on a soccer field. I always assumed these work via GPS. But there is this other very recent question where it's been implied that tracking technology has been around since the 60s. While military GPS has probably been around that long, it also occurred to me that GPS perhaps doesn't have the high level of accuracy one would need to track precisely.
I'm curious to know what sort of technology these personal tripods use? How does it track its target?
|
I am following the Rethink Robotics MoveIt tutorial to set up my Baxter with MoveIt. When I run
$ ./baxter.sh
It shows:
EXITING - Please edit this file, modifying the 'baxter_hostname' variable to reflect Baxter's current hostname.
I don't have a real Baxter robot. How can I simulate it with MoveIt?
|
I've tried to drill a number of holes in my iRobot.
I missed with one of them, and I need the electronic schematic of the iRobot Create 2 to try to restore this input of the data cable.
Can you help me with that, please?
|
I'm building a kite flying robot, for which I want to measure air speed. So I combined a Pitot tube with an ADC and connected it to my Raspberry Pi using this tutorial.
It seems to work perfectly, in that it receives data, but I'm not sure what this data tells me.
I get a constant stream of numbers, mostly between 502 and 504 when nothing happens. When I blow on the tube the number increases to 550-600, or even up to 1000 when I put my lips on the tube and blow with force.
My question is now; what does this tell me about the air speed in meters per second? 500 seems to be zero air speed, but what does 550 or 600 tell me? Is there some kind of conversion table for this?
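From what I've read so far, the conversion should go through dynamic pressure (Bernoulli), roughly like the sketch below. I'm assuming the ADC reading maps linearly to differential pressure; the zero offset and counts-per-pascal values here are placeholders that depend on the actual pressure sensor and ADC reference:

import math

def airspeed_from_adc(adc_value, adc_zero=503.0, counts_per_pa=0.1, rho=1.225):
    """Rough Pitot conversion: ADC counts -> differential pressure -> airspeed.
    adc_zero and counts_per_pa must come from the sensor datasheet or a calibration;
    rho is air density in kg/m^3."""
    delta_p = (adc_value - adc_zero) / counts_per_pa   # dynamic pressure in Pa
    if delta_p <= 0:
        return 0.0
    return math.sqrt(2.0 * delta_p / rho)              # Bernoulli: v = sqrt(2*q/rho)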
All tips are welcome!
|
As iRobot does not sell the Create in Australia and will not ship it to Australia, I am considering buying a second-hand Roomba to convert into a Create.
Is this possible? Which models should be avoided, and do you have any advice?
Thanks
|
I don't know if this is a purely "robot" question or a more DIY/hackish one, but let's give it a try.
I currently have a set of cards that I want to sort based on several criteria.
My setup includes:
A MeArm 4-DOF robot arm
A Raspberry Pi 3 + shields for controlling the arm
A mini vacuum pump, held in place by the gripper
These are the steps:
Move the arm on top of pile of cards
Turn on the vacuum pump
Pick the first card
Move the arm in the right spot
Turn off the pump and let the card fall
Repeat
Everything is working fine; my only main issue occurs when I'm lifting the arm. It seems that there is some kind of force between the cards, and along with the first one, several others come up stuck underneath it.
I tried to shake the arm and make them fall, but it's not working.
Any suggestions? Maybe I'm missing some simple/obvious solution.
|
Starting from the code and going through the hardware, what is the "path" that produces robotic movement? Are there electrical signals involved? How are they initiated, formed, and then interpreted by the "machine"/robot? Can you explain what happens from code to robotic action?
|
I'm looking at the data sheet for a DC motor that states:
Current consumption at nominal torque (mA): 380
I have a power supply that can deliver 500 mA. Can I take the above statement to indicate that the motor will never draw more than 380 mA, or does it mean that it usually uses 380 mA, and that I should probably choose a different power supply?
|
I am using an IR camera to track N mobile robots driving about on the floor. Each robot has a few IR LEDs on its head in known locations, all at the same height above the floor. Each robot has 5 degrees of freedom, X, Y, theta, rotation rate, and velocity. All the camera sees is a bunch of blobs. I have a working blob detector, and can calculate the coordinates of visible blobs in world space. Now I would like to implement a particle filter.
I have two options:
Implement a single particle filter with a state space of 5xN dimensions.
Implement N particle filters with 5 dimensional state spaces.
My feeling is that 1. is the correct way to approach the problem, because otherwise each particle filter could easily get confused about which particle belongs to which robot. But, on the other hand, it seems like a lot of dimensions, and could be slow.
|
I need to control 24 servos, but I do not know how to do it with I2C.
I saw some components that do it with USB, but I need to do it with I2C.
I am working on a robot with 8 legs and 3 degrees of freedom in each leg, so I need to connect and control 24 servos from one board. It could be done with an Arduino Mega and a sensor shield like the one shown at https://arduino-info.wikispaces.com/SensorShield, but I want to do it using two 16-channel servo drivers like this one: https://www.adafruit.com/product/1411. Each of these can control 16 servos using only 2 pins from the board, and its "chainable" design means I could connect 2 of them to an Arduino Uno or a Raspberry Pi, but I do not know how.
Can anyone help me with the I2C chain connections?
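From the product page, my understanding is that chaining just means both boards share the same I2C bus (SDA/SCL) and each board gets a different address via its solder jumpers (e.g., 0x40 and 0x41). On a Raspberry Pi that would look roughly like the sketch below; I'm assuming the Adafruit CircuitPython PCA9685 and Motor libraries, so the exact names may differ:

import board
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(board.SCL, board.SDA)        # one shared I2C bus
pca_a = PCA9685(i2c, address=0x40)           # first driver, default address
pca_b = PCA9685(i2c, address=0x41)           # second driver, A0 jumper soldered
pca_a.frequency = 50                         # standard 50 Hz servo PWM
pca_b.frequency = 50

# 24 servos: channels 0-15 on the first board, 0-7 on the second
legs = [servo.Servo(pca_a.channels[i]) for i in range(16)] + \
       [servo.Servo(pca_b.channels[i]) for i in range(8)]
legs[0].angle = 90                           # move the first joint to mid position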
|
Trying to set up Hector SLAM with the RPLidar A2. I downloaded both rplidar_ros-master and hector_slam-catkin; extracted them into my catkin_ws/src folder and ran catkin_make. Then I edited the mapping_default.launch file and changed the next to last line:
<node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster"
args="0 0 0 0 0 0 base_link laser 100" />
Then, after running the roslaunch command on rplidar.launch and on tutorial.launch, RViz starts but fails to generate a map and gives the warning:
No tf data. Actual error: Fixed Frame [map] does not exist
Do I need to add the 'map' Fixed Frame to the tf node in my mapping_default.launch file?
|
In a stereo camera system, two cameras are needed and should be mounted side by side. I have seen people just glue two cameras to a wooden board. However, one mobile phone manufacturer claims that the two lenses of the dual camera modules on its phones are parallel to within 0.3 degrees. Why do the two lenses on mobile phones need such high-precision assembly? Does this bring any benefit?
|
I have a .bag which contains recorded messages on topics /topic1 and /topic2. The messages have /world as frame_id, so both of the messages associated with these topics are stamped, i.e. they have a header.
The same .bag file also contains recorded messages on the topic /tf (of type tf2_msgs/TFMessage). These transform messages have the frame_id set to /world and the child_frame_id set to the local frames associated with the IMUs from which the messages are being sent over respectively topic /topic1 and /topic2.
Now, I need the messages sent over the /topic1 and /topic2 to be converted to their corresponding local frame (i.e. the child frame or child_frame_id ) from the (fixed) frame /world. Since both the /tf messages and the messages of the topics /topic1 and /topic2 are stamped, I thought we could do this without much trouble, but I'm not sure since I'm very new to ROS.
I've looked around for various solutions, but I didn't find an exact solution for my problem, maybe because I didn't recognize it as such, given my limited knowledge of ROS, as I said.
I would appreciate a step-by-step description of the approach and, if you don't want to write a full solution with code (preferably in Python), at least a pointer to similar examples. Please do not suggest that I read the /tf tutorials; I've partially done that, and it didn't help much.
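For context, the offline approach I've been imagining looks roughly like the sketch below: fill a tf2 buffer from the bag's /tf messages, then look up world-to-child transforms at each message stamp. The frame name imu1_link is just a placeholder for my actual child frames, and the exact API calls are my assumption of how tf2_ros and rosbag are used:

import rosbag
import rospy
import tf2_ros

bag = rosbag.Bag('data.bag')
tf_buffer = tf2_ros.Buffer(cache_time=rospy.Duration(3600))

# First pass: load every recorded transform into the buffer
for _, msg, _ in bag.read_messages(topics=['/tf']):
    for transform in msg.transforms:
        tf_buffer.set_transform(transform, 'bag')

# Second pass: for each stamped message, look up world -> child at its timestamp
for _, msg, _ in bag.read_messages(topics=['/topic1']):
    tf = tf_buffer.lookup_transform('imu1_link', 'world', msg.header.stamp)
    # apply tf to the message fields, e.g. with tf2_geometry_msgs.do_transform_vector3()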
|
I plan to make a mouse or a gesture control robot like this video on YouTube : ABB Externally guided motion.
For a 6-axis robot, I could implement it by using ABB's EGM (Externally Guided Motion) option, which allows sending a Cartesian position and pose of the TCP, with all the tedious calculation handled by its controller.
However, when I started to work with YuMi, I noticed that EGM's position guidance cannot be applied to this 7-axis robot (for YuMi, only the joint control mode in EGM is available). Are there any recommendations for implementing what I described above?
Also, I'm guessing that I need to implement IK class to get the correct joints' angles from a desirable TCP position. Maybe using OpenRAVE or OMPL? If you have any recommendation to calculate IK / Inverse Jacobian, please let me know too.
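In case it clarifies what I mean by the inverse Jacobian route, this is the kind of iteration I'm considering (a damped-least-squares sketch; the jacobian and pose-error functions are placeholders for whatever kinematics library ends up providing them):

import numpy as np

def dls_ik_step(q, pose_error, jacobian, damping=0.05):
    """One damped-least-squares update: dq = J^T (J J^T + lambda^2 I)^-1 e.
    q: current joint vector (7 for YuMi), pose_error: 6-vector (position + orientation error),
    jacobian(q): returns the 6xN geometric Jacobian at q."""
    J = jacobian(q)
    JJt = J @ J.T + (damping ** 2) * np.eye(6)
    dq = J.T @ np.linalg.solve(JJt, pose_error)
    return q + dq                      # iterate until pose_error is small enough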
|
I have a Robotic Arm with a camera mounted above it looking down at a slight angle.
Assuming I know the height of the camera, the angle of tilt and the small distance from the center of the robot which is considered (0, 0) what else would I need to convert the image coordinates to the distance from the center of the robotic arm?
I am also assuming all the objects will have a z=0 because they will be sitting on the same platform as the arm.
I have the inverse kinematics worked out to control the arm, I just need to give it coordinates to move to.
If it helps I am using Python and OpenCV.
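To make the question concrete, this is roughly the computation I think I need, sketched with NumPy: back-project the pixel through the camera intrinsics and intersect that ray with the z = 0 table plane. It assumes I also have the intrinsic matrix K from a calibration, plus the camera rotation/translation built from the height, tilt, and offset mentioned above:

import numpy as np

def pixel_to_arm_frame(u, v, K, R_cam, t_cam):
    """Intersect the camera ray through pixel (u, v) with the z = 0 plane.
    K: 3x3 intrinsics; R_cam, t_cam: camera orientation and position in the arm base frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera coordinates
    ray_base = R_cam @ ray_cam                           # same ray expressed in the arm frame
    s = -t_cam[2] / ray_base[2]                          # scale that makes z = 0
    return t_cam + s * ray_base                          # (x, y, 0) point in the arm frame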
Edit: Clarification
|
I'm working on a hexapod that uses A3C to learn how to walk. Ideally I would test it all in a simulator to give some structure to the weights/policy, but I don't have enough time for that. Obviously there are specific degrees of freedom that would hit each other at certain points, so how could I implement a failsafe that stops certain movements without messing up the algorithm? If I were to simply disallow a movement that I thought would be dangerous, after the algorithm chooses it but before the movement executes, would that disrupt the learning?
|
I want to track a robot's orientation in space and wanted to choose quaternions for their many advantages.
However I have a few questions that I haven't found answered anywhere. The method I use to get quaternions from a rotation matrix is the one by Bar-Itzhack (2000). I want to always use the "version 3" method, whether or not the rotation matrix is imprecise, since the method for a precise matrix (version 1) involves almost the same computational effort (constructing some matrix and getting its eigenvector), and this way it is more robust if my matrix happens to be imprecise. My questions regarding quaternions are the following:
How unique are they when tracking in 3D space? Can I track the rotations of the tool frame without worrying about going through discontinuities in space (e.g., as with the axis-angle representation, where the axis is undefined when the angle gets close to 0° or 180°), and without arbitrary outcomes?
In the method mentioned above, a special matrix is constructed from the rotation matrix, and then the eigenvector of the highest eigenvalue is used as the resulting quaternion. I wanted to confirm the correctness with the following test. However, the resulting fixed angle is often negative, so I started to just negate the quaternion, but I suspect there may be cases where this is wrong, so what is the method to determine the sign (see the sketch after the list below)? This is my verification method:
Get rotation matrix of a fixed rotation around an axis (e.g. +42° about x)
From this rotation, apply the linked method above (version 3) to get the quaternion.
Get a rotation matrix from the quaternion back (method used by Craig)
And finally I convert the rotation matrix back into fixed angle representation and see if the angle is the same.
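This is the round-trip check I have in mind, written with SciPy as a stand-in for my own Bar-Itzhack implementation and Craig-style matrix conversion; the sign handling relies on the fact that q and -q describe the same rotation, so one common convention is to flip the quaternion whenever its scalar part is negative:

import numpy as np
from scipy.spatial.transform import Rotation

def canonical(q):
    """Flip the quaternion so its scalar part is non-negative (SciPy stores q as x, y, z, w)."""
    return -q if q[3] < 0 else q

angle_in = 42.0                                             # +42 deg about x
R_in = Rotation.from_rotvec(np.deg2rad(angle_in) * np.array([1.0, 0.0, 0.0]))
q = canonical(R_in.as_quat())                               # step 2: matrix -> quaternion
R_back = Rotation.from_quat(q)                              # step 3: quaternion -> matrix
angle_out = np.rad2deg(np.linalg.norm(R_back.as_rotvec()))  # step 4: should be ~42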
Any help would be much appreciated.
On this project I am not using ROS, everything is self-build.
|
I'm just starting up with IMU's and I really want to work on my own flight controller, but a question always hits my mind and I am not able to find answer anywhere, so I'm here.
Will multiple IMUs help improve the stability of a quadcopter? Averaging the values of all the IMUs should reduce the drift, which is a function of time, but I have no experience with IMUs and can't figure out how much error correction adding one extra IMU gives. Will it be just additive, or exponential?
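My rough understanding of the statistics, assuming the IMUs have independent, zero-mean noise with the same standard deviation $\sigma$, is that averaging $N$ of them gives
$$
\sigma_{\text{avg}} = \frac{\sigma}{\sqrt{N}},
$$
so a second IMU would reduce the random noise by a factor of about $1/\sqrt{2} \approx 0.71$, and four IMUs would halve it; constant biases and correlated errors would not average away this way. Is that the right way to think about it?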
This question was also posted on the Electrical Engineering Stack Exchange site.
|
According to this document from Gazebo's online site (Beginner: Overview), these are the system requirements for installing Gazebo 7.0 on Ubuntu:
A dedicated GPU (Nvidia cards tend to work well in Ubuntu)
A CPU that is at least an Intel I5, or equivalent.
At least 500MB of free disk space,
Ubuntu Trusty or later installed.
I would like to ask the following questions to clarify some doubts that I have:
How much RAM is needed for good performance?
Has anyone already tested Gazebo with Intel HD Graphics?
|
I have a transfer function, and I want to calculate the speed of each pole, but I do not know how to do that.
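If it helps frame the question: my understanding is that a pole's "speed" is usually read from its real part, with the time constant $\tau = -1/\mathrm{Re}(p)$ for a stable pole. A small SciPy sketch of what I mean (the transfer function here is a made-up example):

import numpy as np
from scipy import signal

# Hypothetical example: G(s) = 10 / (s^2 + 3 s + 2); substitute your own coefficients.
G = signal.TransferFunction([10.0], [1.0, 3.0, 2.0])
for p in G.poles:
    tau = -1.0 / p.real if p.real != 0 else np.inf   # time constant associated with that pole
    print(f"pole {p}: time constant {tau:.3f} s")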
|
I am implementing a 3D pose estimation algorithm on a mobile device (Android) which has gyro, accelerometer, and magnetometer sensors. I have already developed a visual SLAM algorithm to estimate the full 3D camera pose. I want to estimate the same pose just by using these sensors.
I have seen code for EKF-based sensor fusion techniques, attitude estimators, etc. But none of these give the full 3D pose; instead they give only orientation (and not scale and translation).
Could any one suggest an open source C++ implementation (Not using ROS) for the problem?
Few links which I have already found:
https://github.com/simondlevy/TinyEKF
https://github.com/AIS-Bonn/attitude_estimator
|
I have a large linear actuator from Hiwin; the serial number is LAS3-1-1-500-24GE. The data sheet for the LAS3 can be seen on page 18 of the following link: http://www.hiwin.com/pdf/linear_actuators.pdf .
I would like to be able to use this to generate some small sinusoidal motion in the actuator. The speed of this does not matter.
I am looking to control this using an arduino uno and an H-bridge along with a power supply. For an example of the H-bridge: https://www.amazon.co.uk/gp/product/B00M1JZ7HY/ref=ox_sc_act_title_1?smid=AONS7HEF348I5&psc=1
What would be the most convenient method of generating this sinusoidal motion? Are an Arduino and the linked H-bridge appropriate?
|
How do I create a push and pull mechanism using a standard hobby servo? Eg. SG-5010
Preferably without the need of 3D printing.
|
I'm trying to build the dynamic model of a 6-DOF robot, and the company that built it kindly provides a document with the masses, centres of mass, principal axes of inertia, and principal moments of inertia, taken at the centre of mass, taken at the centre of mass and aligned with the output coordinate system, and taken at the output coordinate system (I've come to know that this was obtained from a tool in SolidWorks).
The robot has 6 actuators responsible for the motion of each one of the 6 links available. The problem that I have here is the way I should calculate the inertia matrix $M(q)$. Since the matrix has to have a 6x6 dimension, I know that I have to do some kind of "combining" between one link and the correspondent actuator.
The problem is that I don't really know how to find the combined centre of mass of the two "objects" and the corresponding moment of inertia of the "multi-object". I've seen people saying that it is simply the summation of the respective moments of inertia, but they need to be expressed with respect to the same orientation and reference point.
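To make sure I understand the "summation" idea, this is the combination rule I think applies (rotate each inertia tensor into a common frame, then apply the parallel axis theorem); a small sketch with placeholder inputs:

import numpy as np

def combine_bodies(bodies, ref_point=np.zeros(3)):
    """bodies: list of (mass, com, inertia_about_own_com, R_to_common_frame) tuples,
    with com and ref_point expressed in the common frame.
    Returns total mass, combined com, and inertia about ref_point in the common frame."""
    m_tot = sum(m for m, c, I, R in bodies)
    com = sum(m * np.asarray(c) for m, c, I, R in bodies) / m_tot
    I_tot = np.zeros((3, 3))
    for m, c, I, R in bodies:
        I_common = R @ I @ R.T                         # rotate inertia into the common frame
        d = np.asarray(c) - ref_point                  # offset from the reference point
        # parallel axis theorem: shift the inertia from the body's com to ref_point
        I_tot += I_common + m * (np.dot(d, d) * np.eye(3) - np.outer(d, d))
    return m_tot, com, I_tot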
Can anyone shed some light into this? Suggestions would be greatly appreciated.
Thanks
|
I'm familiar with Chebychev-Kutzbach-Grubler method to determine degree of freedom of a robot arm. But it seems this method fails to calculate the mobility of some parallel robots, as explained here.
However I cannot understand screw theory well and I do not know how to apply it to determine DOF.
So I want to know: what is the idea behind the screw method for determining DOF?
And could anyone explain, with a simple example, how the screw method works?
EDIT:
Could you please explain how we can determine the total DOF of, for example, a SCARA arm via the screw method?
|
This is my first project using Bluetooth module HC-05. I am using two of these modules. One is connected to an Arduino Nano(slave) and another one is connected to an Arduino Uno(master). I have paired them through AT commands.
For testing, I was giving a pulse in pin 7 on the Nano. If there is a pulse, then slave will send character "1". If not, then it will send "0".
In the Uno there is an LED connected to pin 13. If the master receives '1', then the LED will on and if receives '0', the LED will remain off.
here is my source code ----------
slave code=>
master code=>
The Arduino IDE isn't showing any error, but the code is not working at all. Although I am giving a pulse on pin 7 of the Nano, the LED remains off on the Uno. I am at a loss now. I have a lot to do after this and my project submission is knocking at the door. Please help me as soon as possible.
|
How does the robot industry perceive the idea of using universal humanoid robots for agile, rapidly reconfigurable manufacturing and services?
Are there examples of such use? E.g., are there examples of the use of Nao robots or similar robots in the food industry (where manual work is required) and in hotel services?
And do the developers of humanoid robots take into account the potential use of their products in manufacturing and services?
Apparently the manufacturing workflow is rapidly evolving, and universal, multi-functional robots could be especially suitable for such use.
|
In some papers and books we can see authors using symbols to represent robot arms. My question is: is there a convention for such sketches? If so, could you provide a reference which shows how these symbols should be used?
|
I'm struggling to find the DH parameters for this PUMA-type manipulator that yield the same results as the author (1):
The way I'm checking if the parameters I have are correct is by comparing the resulting J11, J21 & J22 matrices with the author. These sub-matrices are the constituents of the wrist Jacobian matrix (Jw).
I tried many different combinations of the DH parameters including:
α = [0, 90, 0, -90, 90, -90]
θ = [0, 0, 0, 0, 0, 0]
a = [0, 0, a2, -a3, 0, 0]
d = [d1, -d2, 0, -d4, 0, 0]
Which result in the same matrices as the author except for some minor differences. The general wrist Jacobian matrix and the sub-matrices obtained by the author are given by:
Whereas the result I got for J11 was:
$$
\left[
\begin{array}{ccc}
-d_2 c_1-s_1 (a_2 c_2-a_3 c_{23}+d_4 s_{23}) & c_1 (d_4 c_{23}-a_2 s_2+a_3 s_{23}) & c_1 (d_4 c_{23}+a_3 s_{23}) \\
c_1 (a_2 c_2-a_3 c_{23}+d_4 s_{23})-d_2 s_1 & s_1 (d_4 c_{23}-a_2 s_2+a_3 s_{23}) & s_1 (d_4 c_{23}+a_3 s_{23}) \\
0 & a_2 c_2-a_3 c_{23}+d_4 s_{23} & d_4 s_{23}-a_3 c_{23} \\
\end{array}\right]
$$
And for the J22 matrix I got:
$$
\left[
\begin{array}{ccc}
-c_1 s_{23} & c_4 s_1+c_1 c_{23} s_4 & s_1 s_4 s_5-c_1 (c_3 (c_5 s_2+c_2 c_4 s_5)+s_3 (c_2 c_5-c_4 s_2 s_5)) \\
-s_1 s_{23} & c_{23} s_1 s_4-c_1 c_4 & -c_5 s_1 s_{23}-(c_2 c_3 c_4 s_1-c_4 s_2 s_3 s_1+c_1 s_4) s_5 \\
c_{23} & s_{23} s_4 & c_{23} c_5-c_4 s_{23} s_5 \\
\end{array}\right]
$$
And the same J12 matrix as the author.
Perhaps the most pronounced difference here is that every $\sin(\theta_2+\theta_3)$ is replaced with $\cos(\theta_2+\theta_3)$ and vice versa, in addition to some sign differences.
Where am I going wrong here?
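One thing I've been using to sanity-check candidate parameters is a purely numeric forward-kinematics/Jacobian builder, sketched below. It assumes the standard (distal) DH convention, which may not be the one the authors use (Craig-style modified DH is the other common choice, and a convention mismatch on its own can shuffle sines and cosines):

import numpy as np

def dh_standard(theta, d, a, alpha):
    """Homogeneous transform for one joint using the standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def geometric_jacobian(dh_rows):
    """dh_rows: list of (theta, d, a, alpha) per joint; returns the 6xN Jacobian (revolute joints)."""
    Ts = [np.eye(4)]
    for row in dh_rows:
        Ts.append(Ts[-1] @ dh_standard(*row))
    p_e = Ts[-1][:3, 3]                      # end-effector position
    J = np.zeros((6, len(dh_rows)))
    for i in range(len(dh_rows)):
        z = Ts[i][:3, 2]                     # axis of joint i+1
        p = Ts[i][:3, 3]                     # origin of the frame preceding joint i+1
        J[:3, i] = np.cross(z, p_e - p)      # linear velocity part
        J[3:, i] = z                         # angular velocity part
    return J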
(1) Wenfu Xu, Bin Liang, Yangsheng Xu, "Practical approaches to handle the singularities of a wrist-partitioned space manipulator".
|
I am writing some logic for a PID controlled catapult (In order to improve precision). That is all fine and well. However, if, for some reason, the encoder wire disconnects, the motor spins continuously in the opposite direction, which breaks my catapult. To solve this, I would like to write a function to catch the failure of the wire, and use that to switch the runmode to not using the encoder. I have both control functions working properly. My issue is the transition. How can I detect when an encoder disconnects.
Note:
I have thought about writing a function that checks the return value of the encoder to see if it is disconnected, but I do not know what is returned by the getPosition function when the encoder is disconnected. Is it 0, is it null, or is it something else entirely?
|
Bit of an engineering question. Suppose a snake robot is 100 cm long, with a motor every 10 cm. Would the robot lift more weight the longer it is / the more motors are attached to it, or is its lifting ability solely limited by what a single motor can lift?
|
In a few days there will be a competition at our university. In this competition, my robot will have to play soccer; it is a one-on-one, manually controlled game. I am thinking of building a robot with 4 wheels whose structure looks like a pyramid, but the difference between the outer surface of a pyramid and my bot is that the outer faces of my bot will be concave slopes/curves, so that when the opponent's bot comes to attack mine, its wheels will ride up over my bot and it will lose its balance. I will then take the chance and push it away from my path.
The dimensions of my bot have to be within:
length=25cm
width=20cm
height=20cm
weight=3kg
But my main problem is making the concave slope. If the wheel diameter is large, it is impossible to make a good slope for the opponent to ride up, but if the diameter is small then it is possible to make a good slope.
Now, this is the question: will small wheels have any negative effect on other aspects of the bot, like speed or friction?
Please suggest anything you can. What should I do?
|
I have been learning forward kinematics and am having some trouble with coordinate systems and DH parameters for prismatic joints while trying to work through this question. Working it out, I ended up with this system.
However, running it through MATLAB, it appears to be wrong. If anyone is able to point out my mistake, or help point me in the right direction, it would be appreciated!
|
Hey all, I would like to know what type of coupler I would need for this type of servo.
The specs are:
I'm guessing that the 25T means it has 25 teeth in the part that holds the adapters?
So I would guess I would need something like this?
But I also would like to know what size those are called for my servo. I've seen them advertised as 5 mm to 8 mm, 6 mm to 8 mm, etc., so I don't know if those are going to work. Some also just have a smooth round hole without the teeth, and I'm not sure if that's something I could use as well.
|
I just wanted to know if scale is conserved across multiple images, when doing a process such as monocular odometry. I know in reality the scale tends to drift over time due to accumulation of small errors, mismatching of features and other problems. However, if I had a perfect system with perfect correspondences and perfect relative pose estimation would the scale also drift over time.
While working through the problem I believe it should not as long as each pair of images has correspondences in common with the previous image pair. However, I would like if someone could just confirm this theory of mine. Also if you have any proof as to why this is or a paper that explains it in depth that would also be very helpful.
|
I've been looking at this existing UR TCP/IP communication protocol answer and the data linked there, but I'm still a little confused about how I could retrieve values from calculations, for example get_inverse_kin().
I've tried to figure it out based on the available articles on the Universal Robots support site, but even that has a usage guide: "How to use this Support site" :)
I can receive the Realtime data and parse it based on the Client_Interface.xlsx specifications, but that does not include calculations done via the motion module.
The other thing I have in mind is writing a URScript along these lines:
store the result of get_inverse_kin() in a float[] (e.g. angles = get_inverse_kin(pose_here))
make a string representation of the data (e.g. str = "{\"angles:\":[0,1,2,3,4,5]}")
open a socket to the computer to send the data (e.g. socket_open("COMPUTER_IP_HERE", 50000, "motion_results"))
Send the angles string (e.g. socket_send_line(str, "motion_results"))
This feels a bit long-winded though. Is this how values should be sent?
What is the most efficient way of receiving URScript motion module results on a computer connected to the Control Box ?
|
So I have -32768 to +32767 coming out of the MPU9255 (gx, gy, gz), which is converted to ±250 dps (degrees per second) by dividing by 131, the gyro's sensitivity.
My question would be how do you use this data to convert it into Roll and Pitch?
I am trying to make a stabiliser. I have tried using
$$
\theta \approx \sum_{i} \omega_i \,\Delta t_i
$$
i.e., a running numerical integration of the angular rate over time.
I don't know if my equation is wrong or not; here is my code:
dt = now_c - pr_dt;      // elapsed microseconds since the previous update
pr_dt = now_c;           // store the current timestamp (not the delta) for the next iteration
Pitch_gyro += Gxyz[1]*(dt/1000000.0);   // integrate the Y rate (deg/s) over dt (s)
Roll_gyro += Gxyz[0]*(dt/1000000.0);    // integrate the X rate (deg/s) over dt (s)
This is my function:
Gxyz[0] = (double)gx / 131.0;   // divide in floating point; gx/131 would truncate (integer division)
Gxyz[1] = (double)gy / 131.0;   // 131 LSB per deg/s at the +/-250 dps range
Gxyz[2] = (double)gz / 131.0;
Any guidance on how to solve this? I have looked into Euler angles, but I still don't understand how you get the angles given the angular velocity from the gyro.
|
The problem I have is the following (see picture below): I am able to compute a starting frame $F_S (X_S, Y_S, Z_S)$, an ending frame $F_E (X_E, Y_E, Z_E)$, and a path from $F_S$ to $F_E$, and what I want to do is compute a series of transformations that will transform $F_S$ into $F_E$ along the path.
Naively, I computed the Euler angles for $F_E$ and $F_S$, computed the differences, and incrementally built the transformations, but it does not work. Can somebody give me some hints or pointers towards existing solutions?
The application is related to the computation of a path for an arm and so the frames are associated with the end effector.
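For what it's worth, the interpolation I've been considering is spherical linear interpolation (slerp) between the two end orientations, evaluated at the fraction of path length covered at each waypoint; a minimal SciPy sketch with placeholder start/end rotations:

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

R_S = np.eye(3)                                                           # start orientation (placeholder)
R_E = Rotation.from_euler('xyz', [30, 10, 45], degrees=True).as_matrix()  # end orientation (placeholder)

key_rots = Rotation.from_matrix(np.stack([R_S, R_E]))
slerp = Slerp([0.0, 1.0], key_rots)

fractions = np.linspace(0.0, 1.0, 10)     # fraction of the path length at each waypoint
waypoint_orientations = slerp(fractions)  # one interpolated orientation per waypoint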
Thank you
|
So I am working with a UR10 manipulator which doesn't have a direct torque interface. However, it provides torque/velocity/position feedback for each joint as well as position/velocity interfaces for joint control.
I have a feeling the answer is "yes", but I've been having trouble finding examples and comments on the feasibility of this approach.
Thanks!
|
Let's assume the very simple case of a particle and a control system in one-dimensional space, so our particle can move only along a straight line, and the dynamics of the system are described by:
$m\vec{a} = u$.
Now the problem: we would like to make our particle move from point $A$ to point $B$ in time $t$ and constrain our acceleration by some value $a_{m}$, i.e., $a$ cannot exceed $a_{m}$ at any moment.
How would one do this assuming that our control system allows us to control either velocity or acceleration?
The most important things here are names of mathematical methods behind this task and explanation of how to apply them.
Also consider that
$x(0) = A = 0\\ x(t) = B\\ v(0)=0\\a(0)=0\\ v(t)=0\\ a(t)=0$
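As I understand it, this kind of problem is usually discussed under names like minimum-time or bang-bang control (Pontryagin's maximum principle) and, on the practical side, trapezoidal or S-curve trajectory generation; strictly meeting $a(0)=a(t)=0$ needs the jerk-limited S-curve variant. A rough sketch of the trapezoidal velocity profile I have in mind, with my own variable names:

import numpy as np

def trapezoid_velocity(B, T, a_max, n=200):
    """Velocity samples of a symmetric trapezoidal profile covering distance B in time T
    with |a| <= a_max. Returns None if the move is infeasible under that bound."""
    # With ramp time t_r: B = v_peak * (T - t_r) and v_peak = a_max * t_r
    disc = (a_max * T) ** 2 - 4.0 * a_max * B
    if disc < 0:
        return None                                  # cannot cover B in time T with this a_max
    t_r = (a_max * T - np.sqrt(disc)) / (2.0 * a_max)
    v_peak = a_max * t_r
    t = np.linspace(0.0, T, n)
    v = np.where(t < t_r, a_max * t,
        np.where(t < T - t_r, v_peak, a_max * (T - t)))
    return t, v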
|
I am researching on mechanical design of remote compliance devices (RCC)†. In the RCC, manufactured by ATI, they use shear pads for lateral compliance. They mention that shear pads are elastomers bonded with metal shims. I couldn't find how it is being made. Are there layers of elastomers between wedging pieces (shims) or something else? How is adhesion being achieved?
† From wikipedia: "In robotics, a Remote Center Compliance, Remote Center of Compliance or RCC is a mechanical device that facilitates automated assembly by preventing peg-like objects from jamming when they are inserted into a hole with tight clearance."
|
I need to connect two servo motors (with internal position control) to rotate one shaft (to control its position). Is that possible, and how do I avoid synchronization problems?
the servo motor
I am making a robotic arm. My problem here is that the servos control position internally, which means that if one of them has arrived at the target and the other one hasn't yet, the latter will still try to get there (even if it is just 0.5 degrees away). I am afraid that this will make the system vibrate around the required position, make one motor continuously draw current, or damage the servo shafts and gears.
I am thinking of using pulleys and synchronous belts to connect the servos to the shaft; this could absorb the bad effects.
So please let me know if you consider this to be safe, or if there is something I can do to improve the performance and the reliability of the system.
|
I am able to connect the Create 2 robot to my laptop with a serial cable. I am using the PuTTY beta terminal. I am not able to key in commands in the terminal, but I am able to receive information from the robot. I want to achieve a two-way serial connection. What can I do?
|
I'm using the MPU-6050 accelerometer + gyro with the library I2Cdev which outputs: quaternion, euler angles and YPR angles. The equations used for calculating the YPR are:
uint8_t MPU6050::dmpGetYawPitchRoll(float *data, Quaternion *q, VectorFloat *gravity) {
// yaw: (about Z axis)
data[0] = atan2(2 * q -> x * q -> y - 2 * q -> w * q -> z, 2 * q -> w * q -> w + 2 * q -> x * q -> x - 1);
// pitch: (nose up/down, about Y axis)
data[1] = atan(gravity -> x / sqrt(gravity -> y * gravity -> y + gravity -> z * gravity -> z));
// roll: (tilt left/right, about X axis)
data[2] = atan(gravity -> y / sqrt(gravity -> x * gravity -> x + gravity -> z * gravity -> z));
return 0;
}
I want to stabilize a quadcopter with these values and 3 PID regulators like this:
FL = Throttle + (-PitchPID) + (-RollPID) + (+YawPID)
FR = Throttle + (-PitchPID) + (+RollPID) + (-YawPID)
RL = Throttle + (+PitchPID) + (-RollPID) + (+YawPID)
RR = Throttle + (+PitchPID) + (+RollPID) + (-YawPID)
The pitch and roll values are between -90 and +90 degrees (0 degrees is horizontal and +-90 is vertical). The problem is that when the quad starts tipping over, the error will start decreasing and will stabilize upside down.
|
I am modelling a quadrotor and I need to choose an order for the rotations that transfer vectors which are represented in Earth Frame to the Body Frame.
what is the most logical order for these rotations?
which order is likely used?
does the order have a big effect on the control of the quadrotor?
Thanks in advance for any answers
|
This is my first post here, so if I unknowingly violated any rules, mods are welcome to edit my post accordingly.
Ok, so my problem is that following Craig's conventions, I can't seem to find the expected homogeneous transform after a series of transformations.
I have included the image for clarity.
We are given the initial frame {0} as usual and then:
-$\{A\}$ is the frame obtained by rotating $\{0\}$ by $90^\circ$ around $z$ and translating it, with $OA = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$
-$\{B\}$ is obtained after translating $\{A\}$ by $AB = \begin{bmatrix} -2 \\ -2 \\ 0 \end{bmatrix}$
What I found is:
$$ {}_A^OT = \left[ {\begin{array}{*{20}{c}}
0&-1&0&1\\
1&0&0&1\\
0&0&1&{1}\\
0&0&0&1
\end{array}} \right], \;\;{}_B^AT=\left[ {\begin{array}{*{20}{c}}
1&0&0&-2\\
0&1&0&-2\\
0&0&1&{0}\\
0&0&0&1
\end{array}} \right],\\ {}_B^OT = {}_A^OT {}_B^AT=\left[ {\begin{array}{*{20}{c}}
0&-1&0&3\\
1&0&0&-1\\
0&0&1&{1}\\
0&0&0&1
\end{array}} \right] $$
This is wrong, since the last column should obviously have the coordinates $-1,-1,1$, the origin of B
What am I missing?
|
I have two situations:
A) The body, with the sensor embedded in it, is kept at rest.
B) The body is at rest for 10 seconds, then undergoes some random movement, comes back to exactly the initial orientation, and is kept at rest there for another 10 seconds.
In the first case, the quaternion values are constant and that is what is expected. But in the second case, these values from the first 10 secs do not match with the last 10 secs. As the orientation is unchanged in both the situations, how can the quaternion values be different? Also, the accelerometer, gyroscope and magnetometer values for corresponding situations is same.
The sensors which I am using are an accelerometer and a gyroscope. I don't know exactly how the quaternion values are computed from these sensor values here, but I will try to give you a better understanding. The quaternion values initially at rest are [1,0,0,0]. If the object is kept at rest, they remain the same (as they should), but if it moves randomly and then again comes to rest with exactly the same orientation as at the initial point, the quaternion values are [0.708547,-0.4962,-0.4316,-0.2556]. If these do not match, then what are the absolute quaternion values signifying?
Is there any flaw in my conceptual understanding of how the quaternion values are derived, or am I missing something substantial?
|
Looking at the Li-Ion battery packs from ServoCity, part #605056, a 12 V, 6000 mAh battery pack. Any reason I shouldn't put these in parallel with each other? Any idea what these might weigh? I've got a robot project going, currently running on a very heavy 12 V lead-acid RV battery, essentially a small car battery.
|
I am trying to create a sphere that I can control through pybullet. I have a basic urdf specification that looks like this:
<?xml version="0.0" ?>
<robot name="urdf_robot">
<link name="base_link">
<contact>
<rolling_friction value="0.005"/>
<spinning_friction value="0.005"/>
</contact>
<inertial>
<origin rpy="0 0 0" xyz="0 0 0"/>
<mass value="0.17"/>
<inertia ixx="1" ixy="0" ixz="0" iyy="1" iyz="0" izz="1"/>
</inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="textured_sphere_smooth.obj" scale="0.5 0.5 0.5"/>
</geometry>
<material name="white">
<color rgba="1 1 1 1"/>
</material>
</visual>
<collision>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<sphere radius="0.5"/>
</geometry>
</collision>
</link>
</robot>
But to control velocity instead of only external force I need to add a joint. I tried adding a floating joint:
<joint name="control" type="fixed">
<parent link="base_link"/>
<child link="internal_link"/>
<origin xyz="0.0 0.0 0.5"/>
</joint>
<link name="internal_link">
<inertial>
<mass value="0.1"/>
<origin xyz="0 0 0"/>
<inertia ixx="1" ixy="0.0" ixz="0.0" iyy="1" iyz="0.0" izz="0.01"/>
</inertial>
</link>
but pybullet crashes without a helpful message when trying to load it. I don't know URDF well. Any ideas? Thanks!
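For what it's worth, the workaround I've been experimenting with is to skip the extra joint entirely and drive the free-floating base directly from the pybullet API; a minimal sketch, where "sphere.urdf" stands in for the single-link file above:

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                   # or p.GUI to watch it
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
plane = p.loadURDF("plane.urdf")
ball = p.loadURDF("sphere.urdf", basePosition=[0, 0, 0.5])   # the single-link sphere

# set the base velocity directly instead of adding a control joint
p.resetBaseVelocity(ball, linearVelocity=[1.0, 0.0, 0.0], angularVelocity=[0.0, 0.0, 0.0])
for _ in range(240):
    p.stepSimulation()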
|
I have an object for which I know the x, y, z position, length, width, height, and x, y velocities. Is there a possibility to compute the yaw angle from this information?
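The only relation I can think of so far is to assume the object moves in the direction it is heading (no sideslip), in which case
$$
\psi = \operatorname{atan2}(v_y, v_x),
$$
but that assumption breaks down when the object is stationary or sliding sideways. Is there a better way?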
|
Would an Arduino Uno have the precision-timing required (using only firmware) to control a two-wheeled inverted pendulum robot, or would it need a RTOS?
Note: Thanks for both answers, they both helped a lot. I just chose the last answer as the accepted answer.
|
My robot system uses a 3D mouse to teleoperate the robotic arm's TCP. The robot returns its TCP's position every 10-20 milliseconds to the remote PC. The remote PC returns the new destination based on the current position and an input from the 3D mouse (new destination = current position + delta input from the mouse).
The problem is that the communication between the robot and the remote PC has a delay of around 100 milliseconds. Because of this, the robot can't generate a smooth trajectory; it goes back and forth (jittery motion), or it repeatedly stops and starts moving. I understand that this is because the current position is not updated in real time, which leads to generating a wrong destination (even backward while the robot moves forward).
One thing I already tried was to filter out the target if the distance between it and the current position was smaller than the one between the previous destination and the current position. However, I couldn't succeed, because the robot requires a target update every 10-20 msec, and if I don't send one (or send the previous destination again), the robot stops moving. So this solution didn't work.
Does anyone know how to calculate/update the new destination every cycle in this condition? Do I need to forecast the new destination based on the past trajectory?
|
For a science project, I'm looking for a way to actuate a spring, connecting two ~ 1 cm Styrofoam spheres to each other, at frequencies around 1-60 Hz.
In other words, I want to be able to vibrate the 2 connected spheres around their center of mass.
What is this spring I'm looking for?
|
I am starting to work on a 4-DOF robotic arm project. It has the following specs:
1- the speed of the tip of the end-effector is constant and adjustable.
2- the robot is controlled via joysticks which determine the direction of movement of the end-effector.
3- also the orientation of the end-effector is controlled.
To implement these specs I need a processor that can handle forward kinematics, inverse kinematics, and trajectory-related calculations, in addition to reading from sensors and a camera.
Can an Arduino handle all that?
What alternatives are available?
|
I am writing a bachelor's diploma thesis on vSLAM. I learned and programmed EKF-SLAM as it's described in the MonoSLAM paper, and I was going to write that I can't use a KF and have to use an EKF because of the non-linearity of the observation function, but wait, how is that possible if everything is linear?!
I understand that if I store the direction of the camera in the form of an axis-angle vector or a quaternion, then there will be something non-linear, but what if I store it directly as the values of the rotation matrix? Then my observation function is just going to be a multiplication of matrices, which is a linear operator, and therefore linear. Am I wrong?
|
For a project we use a brushless DC (BLDC) motor. Everything worked fine until we tried to reach high velocities. I will first explain my setup, and then explain our problems using some graphs.
1.0 Setup
The following hardware is used in the setup:
BLDC motor: Tiger motor U8 (135kV)
Motion controller: SOMANET DC 1K
Encoder: RM08 12 bit absolute encoder
An overview of the setup is shown below:
1.1 Requirements and Parameters
We need about 4800 [RPM] from the motor. The Tiger motor has a kv value of 135 [RPM/V]; connecting it to a 48 [V] supply means it theoretically should be able to go up to 6500 [RPM]. The spec sheet includes a scenario where it reaches 5000 [RPM] with a propeller connected, so 4800 [RPM] with no load should not be a problem.
2.0 Problem
We are not even getting close to 4800 [RPM]; a plot of motor velocity vs. phase current is shown below. We can identify 2 problems from this plot.
2.1 Inefficient commutation
The first thing that was remarkable from the test is that about 10 [A] was already required to turn at 3200 [RPM] without any load connected. This seems to be caused by inefficient commutation; we figured there are two main possible causes for this.
2.1.1 Phase offset error
There might be an error in the phase offset used; this would cause a linear increase in required current with velocity. It is best solved by fine-tuning the offset at a high velocity. However, our curve does not seem linear, so this does not seem to be the case.
2.1.2 Delay error
There is a certain amount of delay between requesting the position from the RM08 and applying the new voltage. This delay can cause the current to increase exponentially with motor velocity, which matches what we see.
By adding up all delays we found a total delay of ~0.1 ms in the system (see above). Spinning at 3000 RPM = 50 Hz with 21 pole pairs means that the electrical frequency is 1050 Hz, so a delay of 0.1 ms would cause a 37.8 electrical-degree error. This likely causes the inefficiency!
2.2 Control going crazy
If we try to go above ~3200 RPM, the motor starts pulling a lot of current and makes a lot of noise. This means the motor is not operational above 3000 RPM; this seems to be the most urgent problem at the moment.
Voltage dependency
Normally the motor velocity is limited by back-EMF; if the back-EMF were causing this issue, the problem would be voltage dependent. Therefore some measurements were done at different voltage levels; see the two images below:
The moment where the motor stops following the velocity sweep seems to increase linearly with voltage. Another interesting outcome is that at 30 V the motor just stops following the velocity sweep, while at the higher voltages (40 V and 43 V) the motor suddenly dropped to a lower velocity. Note that the 46 V test was stopped before this moment because too-high current peaks were flowing through the SOMANET (35 A).
However, it seems unlikely that the back-EMF is the problem, since Tiger has been able to reach 5000 RPM themselves.
Solutions
For the first problem we thought we could use something like:
Pcorr = Penc + t_delay * Vel.
With:
Vel: angular velocity
t_delay: the delay compensation gain
Penc: the encoder position for the rotor
Pcor: the delay compensated position
However, this didn't solve the problem at all. Do you have any other suggestions?
For the second problem we can't think of any cause, can you think of any?
|
I am making a ping-pong-playing robot, like the one in this video, and I intend to use two cameras to track the ball in 3D space. Supposing that the robot is playing against a beginner, the ball takes about 0.7 seconds to travel from one side of the table to the other, but during this time the robot has to process a number of frames and predict the rest of the ball's trajectory, and the robotic arm has to move to the required position.
I read some papers about the same kind of project, but I found a big difference from one to another (one paper used 30 FPS, while another used 120 FPS). I haven't ordered the cameras yet, and I can't try more than once because it is a graduation project, so the time and the budget are limited.
So is there any way to predict the minimum frame rate of the cameras?
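A back-of-envelope calculation I've done so far (ignoring exposure blur, processing latency, and the stereo matching itself): with a 2.74 m table crossed in the 0.7 s assumed above, the ball moves at roughly 4 m/s, so the frame rate mainly sets how far the ball travels between consecutive samples:

table_length = 2.74                     # m, standard table-tennis table
flight_time = 0.7                       # s, assumed above for a beginner's shot
speed = table_length / flight_time      # ~3.9 m/s average ball speed
for fps in (30, 60, 120):
    print(fps, "fps ->", round(speed / fps, 3), "m of travel between frames")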
I have also heard that some projects use a Kinect instead of two cameras for 3D vision.
Is the Kinect fast and precise enough for my project?
|
In stereo vision, the camera is calibrated using the popular chessboard before use. During use, the camera may be subject to environmental factors such as vibrations. This may cause the calibration parameters to drift. I hope to correct the calibration parameters online, without using the chessboard.
Currently I assume:
the focal lengths and principal points of the two cameras are constant,
the distance between them is constant,
only the orientations of the two cameras change.
The initial calibration parameters are available. Left/right images are undistorted using each camera's individual calibration parameters.
Corresponding points can be established by detecting feature points and matching them from undistorted left/right images
The fundamental matrix can be calculated from these correspondences; I will call it the calculated fundamental matrix.
A fundamental matrix can also be derived from the calibration parameters of the stereo system; I will call it the derived fundamental matrix.
The two fundamental matrices should be the same if the calibration parameters of the stereo system haven't changed.
Otherwise, I can recover the rotation between the two cameras from the equation relating the fundamental matrix to the rotation matrix, the translation between the two cameras, and each camera's intrinsic matrix.
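For concreteness, this is roughly the pipeline I have in mind, sketched with OpenCV (the feature/matcher choices are arbitrary, and recoverPose as used here assumes the two intrinsic matrices are similar; otherwise the points should be normalized per camera first):

import cv2
import numpy as np

# left, right: already-undistorted grayscale images; K_l, K_r: intrinsic matrices
orb = cv2.ORB_create(2000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)

pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

F, inliers = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.999)
E = K_r.T @ F @ K_l                                   # essential matrix from the calculated F
mask = inliers.ravel() == 1
_, R, t, _ = cv2.recoverPose(E, pts_l[mask], pts_r[mask], K_l)
# R can be compared against the rotation implied by the original calibration;
# t is only known up to scale, which is why the fixed-baseline assumption matters.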
I am new to multiple view geometry. Is this a viable method? Is there any other method for that?
Any link to websites and papers are appreciated.
|
I would like to know if it is possible (and if so, how it can be done) to estimate the yaw angle and yaw rate of a vehicle in front of me, knowing the following information:
- my speed (x, y, z), my position (x, y, z), my yaw angle and my yaw rate
- the relative speed to me (in x and y) of the vehicle/robot whose yaw and yaw rate I want to find
- the position (x, y, z) of a point on the vehicle (a corner of the vehicle), and the length, width and height of the vehicle
|
I've been planning to make an electric skateboard for a while now. All the tutorials that I have seen use brushless motors. However, where I live, it's not possible to get the exact ones shown in those tutorials.
The ones I can buy are somewhat like this.
The one shown in the tutorials is this.
I am new to brushless motors. The one in the tutorial is 270 kV and the ones I can buy are 1000 kV (and above), but they also differ greatly in size. Plus there's this "outrunner" thing. The ones I can buy seem to be made for quadcopters and similar stuff.
So am I okay using the ones I can buy, the little 1000 kV ones, considering the fact that they need to run a 14T to 36T pulley system with approximately 70 kg of weight on the board?
Or maybe I could just use the 12v 10Amp 1000RPM DC Motor that I have lying around?
|
I have a 6-DOF arm robot with a mobile base, and a given x/y/z/quaternion pose that the end effector must match. I am to determine the optimal position of the mobile base such that an IK solution for the arm can be constructed for that end-effector pose. I already have an IK method for the arm alone, but not one that includes the mobile base.
Not to mention, there is a collision aspect to this too. There are obstacles around the target pose that the arm must avoid, which can easily be checked with my simulator, but it is something that must be considered. Could anybody point out any algorithms that could possibly help? Thank you for your time.
|
I bought a QX95 and after a few flights, maybe 3, black smoke fizzled out of what looks like the flight controller. Since then, any time I give throttle, the FPV video feed cuts out.
I'm assuming what I fizzled was some sort of limiter / power converter for the video?
My question is - what do I need to replace? Was that the flight controller or camera? Any thoughts on what I need to replace or repair?
|
Is there an exact definition for conventional and unconventional path planning methods? What are the features that help distinguish conventional and unconventional path planning methods? What is an example of conventional and unconventional path planning methods?
|
So I have a quad with a black-box estimator on it. The black box estimates the pose of the quad. I also have a Vicon system that I'm using to get the ground truth pose of the quad. I'm trying to transform the output from the black-box system into the coordinate frame of the Vicon system so I can compare the two.
I have two series' of points recorded using the whole setup that I am trying to use to compute this transformation (from one world frame to the other world frame).
If you're not familiar, it is possible to compute a transformation between two frames given a set of points in each frame.
I have implemented the method described in the paper Least-Squares Rigid Motion Using SVD
But I'm getting wonky results:
If it's not clear, if the transformation were working correctly, the points labeled 'transformed' displayed on the graph below would roughly overlap with those labeled 'Vicon'. As you can see, they only overlap for less than half a second.
Any suggestions? Ideas about what could be wrong?
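In case my implementation diverges from the paper, this is my reading of the SVD method from Least-Squares Rigid Motion Using SVD, condensed into NumPy (common pitfalls I'm checking for are a transposed cross-covariance, a missing reflection guard, and mismatched point correspondences):

import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t mapping points P onto Q (both Nx3, corresponding row by row)."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)                   # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t                                       # q_i ~= R @ p_i + t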
|
I have seen that it is possible to use MoveIt! for path planning and obstacle avoidance for quadcopters/quadrotors.
For example:
https://www.youtube.com/watch?v=NRzeQD_Etog
https://www.youtube.com/watch?v=VlBQLbmc03g
Is MoveIt! as suitable for path planning and obstacle avoidance for fixed-wing aircraft as it is for quadrotors?
The main difference I can think of is that quadrotors can stop at will, while fixed-wing aircraft have to continue moving to stay in the air. Will this be a problem for path-planning and obstacle avoidance in MoveIt?
|
I tried getting the sensor packets for packet group 100. What I noticed was that the order of the packets did not match what's in the documentation. Has anyone noticed the same problem?
For example, here is the output of packet groups 6 & 101, which together should be the same as 100, but that's not the case. It would seem that the order of the packets is not the same in group 100.
Here is packet group 6:
2017/05/31 193538 PACKET GROUP 6 len:52
Bump & Wheel Drop: 0 - 1
Wall: 0 - 1
Cliff Left: 0 - 1
Cliff Front Left: 0 - 1
Cliff Front Right: 0 - 1
Cliff Right: 0 - 1
Virtual Wall: 0 - 1
Wheel Overcurrent: 0 - 1
Dirt Detect: 0 - 1
Unused 1: 0 - 1
Omni IR Code: 161 - 1
Buttons: 0 - 1
Distance: 0 - 2
Angle: 0 - 2
Charging: 0 - 1
Voltage: 15140 - 2
Current: -211 - 2
Temperature: 25 - 1
Battery Charge: 2234 - 2
Battery Capacity: 2696 - 2
Wall Signal: 0 - 2
Cliff Left Signal: 2938 - 2
Cliff Front Left Signal: 2047 - 2
Cliff Front Right Signal: 1399 - 2
Cliff Right Signal: 2232 - 2
Unused 2: 0 - 1
Unused 3: 0 - 2
Charging Source: 0 - 1
OI Mode: 1 - 1
Song Number: 0 - 1
Song Playing: 0 - 1
Num Stream Packets: 0 - 1
Req. Velocity: 0 - 2
Req. Radius: 0 - 2
Req. Right Velocity: 0 - 2
Req. Left Velocity: 0 - 2
Here is packet group 101:
2017/05/31 193859 PACKET GROUP 101 len:28
Left Encoder: 10 - 2
Right Encoder: 6 - 2
Bumper: 0 - 1
Bumper Left Signal: 11 - 2
Bumper Front Left Signal: 6 - 2
Bumper Center Left Signal: 8 - 2
Bumper Center Right Signal: 0 - 2
Bumper Front Right Signal: 11 - 2
Bumper Right Signal: 0 - 2
IR Code Left: 0 - 1
IR Code Right: 0 - 1
Left Motor Current: 0 - 2
Right Motor Current: 0 - 2
Main Brush Current: 0 - 2
Side Brush Current: 0 - 2
Stasis: 0 - 1
And here is packet group 100:
2017/05/31 193654 PACKET GROUP 100 len:80
Bump & Wheel Drop: 0 - 1
Wall: 0 - 1
Cliff Left: 0 - 1
Cliff Front Left: 0 - 1
Cliff Front Right: 0 - 1
Cliff Right: 0 - 1
Virtual Wall: 0 - 1
Wheel Overcurrent: 0 - 1
Dirt Detect: 0 - 1
Unused 1: 0 - 1
Omni IR Code: 161 - 1
Buttons: 0 - 1
Distance: 0 - 2
Angle: 0 - 2
Charging: 0 - 1
Voltage: 15140 - 2
Current: -219 - 2
Temperature: 25 - 1
Battery Charge: 2230 - 2
Battery Capacity: 2696 - 2
Wall Signal: 0 - 2
Cliff Left Signal: 2950 - 2
Cliff Front Left Signal: 2198 - 2
Cliff Front Right Signal: 0 - 2
Cliff Right Signal: 3072 - 2
Unused 2: 0 - 1
Unused 3: 0 - 2
Charging Source: 0 - 1
OI Mode: 0 - 1
Song Number: 0 - 1
Song Playing: 0 - 1
Num Stream Packets: 0 - 1
Req. Velocity: 0 - 2
Req. Radius: 0 - 2
Req. Right Velocity: 0 - 2
Req. Left Velocity: 0 - 2
Left Encoder: 8 - 2
Right Encoder: 4 - 2
Bumper: 0 - 1
Bumper Left Signal: 11 - 2
Bumper Front Left Signal: 6 - 2
Bumper Center Left Signal: 9 - 2
Bumper Center Right Signal: 0 - 2
Bumper Front Right Signal: 0 - 2
Bumper Right Signal: 0 - 2
IR Code Left: 0 - 1
IR Code Right: 0 - 1
Left Motor Current: 0 - 2
Right Motor Current: 0 - 2
Main Brush Current: 0 - 2
Side Brush Current: 0 - 2
Stasis: 0 - 1
|
I had a discussion with a study colleague about IK solvers. The question was: does IK need the current joint values to calculate the requested position?
I think it doesn't. From my understanding, IK only needs a translation and rotation with respect to, e.g., the base frame, but no information about the current joint values or, e.g., the gripper frame for the calculation itself. My study colleague argues that we always give the IK solver the current joint values through our .msg file, which is true. However, I'm sure the IK solver doesn't use them for the calculation itself, except perhaps for optimization if it finds more than one possible solution.
It would be nice if anyone could help.
Greetings
R.Devel
|
This question assumes an ideal system with 100% efficiency.
Let's say I have this propeller which, if spun at 1000 RPM in water, needs a 48 W motor to drive it.
So I get a 1000 Kv, 16 V motor and a 1:16 gear reduction, so now it spins at 1000 RPM and thus uses 48 watts of power.
That means the ESC draws 3 A to power the motor.
Now, if I power the ESC with 48 V instead of 16 V and set it to 33% instead of 100%,
the effective voltage on the motor is still 16 V and the motor still uses 48 W, but since the ESC is getting 48 V it will draw 1 A, not 3.
So I want to choose a brushless motor for an ROV propeller that needs 200 W of power at 1000 RPM, where the voltage applied to the ESC varies between 12 V and 48 V.
So I choose a motor and gearbox such that when 10 V is applied (headroom to allow the motor to slow down a little and still maintain enough speed to drive the prop), it spins at 1000 RPM, and it has a max power of 300 W.
Then in the control system I have voltage and current sensors in the ESC, and I drive the motor in constant-power mode.
The system would know what voltage the ESC is running at and vary the PWM signal until it reaches the wanted power.
Is this right, or am I missing something?
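For what it's worth, here is a minimal sketch of the constant-power loop described above, just to make the idea concrete. readBusVoltage(), readBusCurrent() and setThrottle() are hypothetical placeholders for whatever the ESC/ADC interface actually provides, and the gain value is a made-up starting point.

// Hypothetical hardware-access functions; substitute the real ESC/ADC interface.
float readBusVoltage();          // e.g. anywhere from 12 V to 48 V
float readBusCurrent();
void  setThrottle(float duty01); // 0.0 .. 1.0 PWM duty command to the ESC

const float targetPowerW = 200.0f;  // desired electrical power for the prop
const float gain = 0.0005f;         // small integral gain; tune on the bench
float throttle = 0.0f;

void updatePowerLoop()
{
    const float powerW = readBusVoltage() * readBusCurrent();  // power drawn from the bus

    // Nudge the duty cycle toward the power target (simple integral action).
    throttle += gain * (targetPowerW - powerW);
    if (throttle < 0.0f) throttle = 0.0f;
    if (throttle > 1.0f) throttle = 1.0f;

    setThrottle(throttle);
}

The only non-ideal effect worth flagging is that at partial throttle the ESC and motor losses mean bus power and shaft power are not quite the same thing, so the 33% / 1 A figure is an upper bound on how clean the trade is.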
|
Is it possible to have a robot with fewer than six degrees of freedom and still be able to achieve any position and orientation of the end effector?
|
I have a question about something that seems like it would be pretty basic, but so far I haven't been able to find much discussion on the issue. It's possible I'm not familiar enough with the terminology.
I have a rigid body with an accelerometer/gyro IC dev board nailed to it. I would like to know what the accelerometer would measure at another point on this board, in this case, the sensor of a camera that is also nailed to it.
My thinking is that I can use the accelerometer, gyroscope, and differentiated gyroscope data and the equation $a_t = a_m + \omega' \times r + \omega \times (\omega \times r)$, where
$a_t$ = transformed acceleration
$a_m$ = measured acceleration
$\omega$ = measured gyroscope reading
$\omega'$ = first derivative of the gyroscope reading
$r$ = the vector from the accelerometer/gyro to the point I want to transform to.
My plan is to get $\omega'$ with a Savitzky-Golay filter, though this makes the implementation a lot less convenient, because I have to buffer my data and figure out how the filter affects the noise variance of the sensor.
Does this plan make sense? Is there a better accepted way that I don't know about? I'm surprised that ROS or tf2 doesn't have a built in function for this. Is there something I am missing? Thanks!
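For reference, a minimal sketch of that lever-arm equation with Eigen (the names are mine): a_m and omega are the raw IMU readings in the body frame, omega_dot is the filtered derivative described above, and r is the IMU-to-camera offset in the same frame.

#include <Eigen/Dense>

// a_t = a_m + omega' x r + omega x (omega x r)
Eigen::Vector3d transformAccel(const Eigen::Vector3d& a_m,
                               const Eigen::Vector3d& omega,
                               const Eigen::Vector3d& omega_dot,
                               const Eigen::Vector3d& r)
{
    return a_m + omega_dot.cross(r) + omega.cross(omega.cross(r));
}

Note that this gives the acceleration at the camera point still expressed in the IMU's axes; re-expressing it in the camera's own axes is a separate, constant rotation.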
|
I have an iRobot Create 2 and I am connected to it with the 7-pin connector to a desktop machine via USB. Serial communication is working fine under that configuration. I can tell it to restart with ctrl-G and read the messages it sends back. On power up the serial port outputs the following:
2015-08-24-1648-L
r3-robot/tags/release-3.5.x-tags/release-3.5.4:6058 CLEAN
bootloader id: 4701 5652 7E52 3FFF
assembly: 3.5-lite
revision: 2
flash version: 10
flash info crc passed: 1
battery-current-zero 257
When I instead connect the Raspberry Pi 3 board to the on-board UART via the 7-pin connector, I can communicate with the iRobot without issue. However, starting about 20 seconds after the Raspberry Pi powers up, the iRobot continuously sounds "uh-oh" every few seconds (no other beeps follow) until it enters sleep mode. The iRobot serial port outputs the following message to the Raspberry Pi every time the robot sounds "uh-oh":
4701 5652 7E52 3FFF
ERROR: language set error type 2
Does anyone know what this error means?
My last resort is to scope the serial port from the raspberry pi to check for noise; unfortunately I don't have access to my scope at the moment.
|
I have an array of data points recorded at 20 Hz (0.05 s period; it could also be 30 Hz, 40 Hz, or 50 Hz, 20 Hz is just an example value).
I want to interpolate this data to a higher frequency, for example 1 kHz (0.001 s), with cubic interpolation to get a smooth data set.
y(t) = at^3 + bt^2 + ct + d
But I can't figure out how to derive the coefficients and implement this in C.
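One common closed form that avoids solving for a, b, c, d explicitly is the Catmull-Rom (cubic Hermite) interpolant between two samples, using their two neighbours. A sketch is below, assuming uniformly spaced samples; it compiles as C or C++.

// Cubic (Catmull-Rom) interpolation between y1 and y2, with y0 and y3 as the
// neighbouring samples and t in [0, 1]. Expanding this gives the a, b, c, d of
// y(t) = a*t^3 + b*t^2 + c*t + d for that segment.
double cubicInterp(double y0, double y1, double y2, double y3, double t)
{
    const double a = -0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3;
    const double b =        y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3;
    const double c = -0.5 * y0             + 0.5 * y2;
    const double d =                   y1;
    return ((a * t + b) * t + c) * t + d;
}

To go from 20 Hz to 1 kHz you evaluate this 50 times per input segment with t = k/50, k = 0..49; at the ends of the series, duplicating the first/last sample for the missing neighbour is a common fix.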
|
I am trying to understand the stepper motor to Mach3-type software control interface, mostly from a logical perspective, deducing most of it because I have no concrete resource to refer to.
So basically I have purchased a "LinkSprite" 3-axis engraver. It comes with an Arduino GRBL board, which interprets G-code and sends signals to the stepper driver shields.
Bottom
I can see the drivers need to send an approximate sine wave of some sort to one or both windings to actuate them.
Top
I have read that Mach3 essentially transmits two signals per axis to the motion controller/breakout board (which in turn is connected to the drivers): one being the number of steps, the other the direction, probably via parallel-port GPIO or something similar.
So what underlying transmission protocol carries Mach3's signals? Something like I2C, or something over the parallel port? How are steps, direction, and axis encoded?
What does the motion controller do exactly? Does it minimally just break the signals out to the drivers? What are the drivers' inputs?
I have read that my Arduino GRBL board interprets G-code, but isn't that what Mach3 does?
How do I connect the dots from the stepper-motor waveforms on the windings up to an interface like Mach3? What are the encodings and concrete signals along this logical path for controlling the stepper motors?
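To make the step/direction encoding concrete, here is a minimal Arduino-style sketch (the pin numbers are hypothetical) of the two signals one axis driver expects. Mach3 over the parallel port and GRBL on the Arduino ultimately generate this same pulse/level pair per axis, and the driver then turns each step pulse into the winding current waveform.

const int STEP_PIN = 2;   // one pulse on this line = one (micro)step
const int DIR_PIN  = 5;   // level on this line = direction of travel

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
}

void moveAxis(long steps, bool forward, unsigned int stepDelayUs) {
  digitalWrite(DIR_PIN, forward ? HIGH : LOW);   // set direction first
  for (long i = 0; i < steps; ++i) {
    digitalWrite(STEP_PIN, HIGH);                // rising edge = one step
    delayMicroseconds(stepDelayUs);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(stepDelayUs);              // pulse rate sets the speed
  }
}

void loop() {
  moveAxis(200, true, 500);    // e.g. one revolution of a 200-step/rev motor
  delay(1000);
}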
|
Let's say I have multiple (8 bit) sensors which are sending signals to two microcontrollers simultaneously. In this case, I am looking to harness the parallel processing capability of the two microcontrollers to process the signals at the same time.
However, I am curious whether I can substitute the above setup with a single 16-bit microcontroller. Would sending two 8-bit signals simultaneously then be possible, using the 16-bit bus on the microcontroller?
Assume that in both cases they are running at the same clock speed (MHz).
Edited
Sorry, this is all still new to me.
One aspect that I'd like to understand is: Can we make two 8-bit signals from two different 8-bit sensors share the 16-bit bus at the same time?
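Just to separate the software question from the hardware one: packing and unpacking two independent 8-bit readings in one 16-bit word is trivial (sketch below); whether two physical sensors can drive a shared 16-bit parallel bus at the same instant is purely a hardware matter of chip selects and bus arbitration, not of the word width.

#include <stdint.h>

// Two 8-bit sensor readings carried in one 16-bit word.
uint16_t pack(uint8_t sensorA, uint8_t sensorB) {
    return (uint16_t)((uint16_t)sensorA << 8 | sensorB);   // A in the high byte, B in the low byte
}

void unpack(uint16_t word, uint8_t *sensorA, uint8_t *sensorB) {
    *sensorA = (uint8_t)(word >> 8);
    *sensorB = (uint8_t)(word & 0xFF);
}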
|
I would really appreciate it if somebody could help me calculate the singular configuration of this simple manipulator
I am confused since J is a 2x3 matrix, so I cannot simply calculate the determinant.
Thanks in advance.
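In case it helps to frame the computation: for a non-square Jacobian the usual $\det(J) = 0$ test is replaced by a rank condition, for example

$$\det\!\left(J(q)\,J(q)^{T}\right) = 0,$$

i.e. the $2\times2$ matrix $J J^{T}$ loses rank exactly at the singular configurations (equivalently, all $2\times2$ minors of $J$ vanish simultaneously), so no extra differentiation of $J$ is needed.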
|
I'm making a quadcopter for the first time and want to keep it as low-cost as possible. When it comes to flight controllers, APM is the best open-source project, and it is compatible with the Raspberry Pi.
My question is: will it perform as well as off-the-shelf APMs if I add GPS, IMUs, and all of the other sensors, the same as in a ready-made unit?
Consider that the APM is a clone from eBay.
|
I have a robot that is inherently symmetric in nature. Sometimes one side is the base while the other is the end-effector and vice versa. This 'mode' can change while the robot is moving around.
Judging from the URDF tutorials wiki, it looks like the commonly used URDF package in ROS is static, so the base of the robot is assumed to stay the base. Are there ways to get around this so that I can still use the TF package?
|
I have a BeagleBone Black, an Arduino Duemilanove and a WS2812b LED strip driven with 5 V.
I have no level shifters, nor I intend to order one.
The Arduino works perfectly with the LED strip, and I need the BeagleBone Black just to send simple commands like 0, 1, 2, 3, 4, 5... to make the Arduino perform one of five modes.
The BeagleBone Black's USB port is taken by other device that I need, and I do not intend to use USB splitters.
What is the best way to communicate between the BeagleBone Black and the Arduino in this situation? If I do not want to use a level shifter, are I2C and RS232 TTL communication impossible?
Is it OK to use, for example, just one wire (plus ground) and communicate via the analog port, using analogWrite() on the BeagleBone Black and analogRead() on the Arduino? Is there any additional advice on communicating this way?
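If it helps, a sketch of the Arduino side of the one-wire analog idea is below, assuming the BeagleBone's 3.3 V PWM output is smoothed into a roughly DC level with a simple RC filter; the pin and thresholds are made up. The 3.3 V signal is safe for a 5 V Arduino input, so it is the 5 V to BeagleBone direction that would need level shifting, not this one.

const int MODE_PIN = A0;   // analog line from the BeagleBone (via RC filter)
int mode = 0;

void setup() {
  // Analog inputs need no pinMode.
}

void loop() {
  int raw = analogRead(MODE_PIN);                   // 0..1023; 3.3 V reads roughly 675 on a 5 V board
  mode = constrain(map(raw, 0, 680, 0, 5), 0, 5);   // quantise into modes 0..5
  // ... switch (mode) to select the LED strip pattern ...
  delay(50);
}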
|
I want to use DGPS on a robot. I understand how DGPS works but I am having trouble figuring out what specific hardware I need. Is there a good resource for how to actually setup DGPS? Thanks for your help
|
In case anyone is interested, I build a dashboard for the icreate 2.(http://blog.mindfront.net/2017/06/roomba-dashboard-cli-dashboard-for.html)
|
I came across many good books on robotics. In particular, I am interested in the inverse kinematics of a 6-DOF robot. All the books have examples that go like this: "given the homogeneous transformation matrix below, find the angles." The problem is: how do I find the components of a homogeneous transformation matrix in the real world? That is, how do I practically derive the 9 components of the rotation matrix embedded in the homogeneous transformation matrix?
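As a sketch of how those nine numbers arise in practice: the columns of $R$ are simply the tool-frame unit axes expressed in the base frame, or equivalently $R$ is assembled from an orientation you specify or measure in the workspace, e.g. roll-pitch-yaw angles $(\alpha,\beta,\gamma)$:

$$
T = \begin{bmatrix} R & p \\ 0\;\;0\;\;0 & 1 \end{bmatrix},\qquad
R = R_z(\gamma)\,R_y(\beta)\,R_x(\alpha) =
\begin{bmatrix}
c_\gamma c_\beta & c_\gamma s_\beta s_\alpha - s_\gamma c_\alpha & c_\gamma s_\beta c_\alpha + s_\gamma s_\alpha\\
s_\gamma c_\beta & s_\gamma s_\beta s_\alpha + c_\gamma c_\alpha & s_\gamma s_\beta c_\alpha - c_\gamma s_\alpha\\
-s_\beta & c_\beta s_\alpha & c_\beta c_\alpha
\end{bmatrix}
$$

where $p$ is the desired tool position in the base frame, $c$ and $s$ abbreviate cosine and sine, and the angles come from wherever the task is defined (CAD data, a teach pendant, a vision system, etc.).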
|
The idea of using Kalman gain in EKF SLAM is to figure out how much we trust our motion model and sensor/observation model. As explained in The Extended Kalman Filter: An Interactive Tutorial for Non-Experts - Part 5: Computing the Gain, the Kalman gain can be calculated as,
$$K_t = \frac{p}{(p +r)}$$
where $p$ denotes prediction error and $r$ denotes sensor noise.
Now, if we look at the equation below,
$$K_t = \bar{\Sigma_t}H_t^T(H_t\bar{\Sigma_t}H_t^T +Q_t)^{-1}$$
we can see that the Kalman gain is calculated using the covariance matrix ($\Sigma$), the Jacobian of the observation model ($H$), and the sensor noise ($Q$). Comparing with the earlier equation, $p$ can be considered the equivalent of $\Sigma$, while $r$ is the equivalent of $Q$.
How does $H$ fit into this equation? What would be an intuitive explanation?
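One way to see where $H$ fits (a sketch, using the notation above): $H_t$ maps state space into measurement space, so the matrix formula is the scalar $\frac{p}{p+r}$ with the prediction uncertainty first projected into measurement units:

$$
K_t = \bar{\Sigma_t}H_t^T\big(\underbrace{H_t\bar{\Sigma_t}H_t^T}_{\text{prediction error }p\text{ in measurement units}} + \underbrace{Q_t}_{\text{sensor noise }r}\big)^{-1}
$$

For a single state that is measured directly ($H_t = 1$) this collapses exactly to $K_t = \frac{p}{p+r}$; in general the inner $H_t\bar{\Sigma_t}H_t^T$ expresses how uncertain the predicted measurement is, and the leading $\bar{\Sigma_t}H_t^T$ maps the resulting correction back from measurement space into state space.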
|
I am a junior web developer working mainly with Bash, JavaScript, and Drupal. I'm more fascinated by writing scripts and programs that perform concrete actions than by querying and manipulating databases.
I would like to step into robotics in the future (after I finish) and had the following question in mind:
Do robots usually have databases (similar to those of websites in quality and quantity), and if so, could you give a practical example of what they are used for, perhaps in the context of machine vision or machine motion?
Update (13/11/19):
All the answers here are good in my opinion. If I could, I would accept all of them. I suggest starting with the answer by user16549, which gives an introduction to DBs in robots, then continuing with the answer by FooTheBar, and then reading the other answers.
|
I need urgent help with my graduation project on an unmanned guard boat.
I need to control the boat both autonomously and manually using an ArduPilot and an Arduino Uno.
I am asking whether anyone has code, to be written to the ArduPilot, that enables controlling the boat both:
autonomously, using way-points, and;
manually, using LabVIEW,
and then sends the data through the Arduino to execute the boat's movement, as the motors are connected to the Arduino.
Furthermore, I need to know how to send GPS coordinates, using a U-blox NEO-7M connected to the ArduPilot running on an Arduino, so they can be displayed through the PC's serial port in the software.
|
There are a lot of questions on this topic, but I am trying to get a clearer picture from them.
I am trying to calibrate a fisheye camera and I am using the OpenCV cv::omnidir class functions to find the camera intrinsics.
I am getting fair results. The problem is that at the image edges the objects get stretched, and I am also losing some information at the edges.
Here is my input image:
Here is my output image:
As you can see, I am losing some information at the edges (left and right), and the image also starts stretching at the edges.
My questions are as follows:
How can I include more of the FOV in the corrected image at the edges, where the information is lost?
How can I reduce the blur effect at the edges?
During calibration, should I cover the entire FOV of the camera so that the corners are present at the edges also?
What is the correct way of presenting the calibration pattern during calibration?
Are there any online toolboxes that provide fisheye calibration?
Here is my code snippet for calibration and testing:
// Calibration
Mat K, xi, D, idx;
int flags = 0 | omnidir::CALIB_FIX_SKEW | omnidir::CALIB_FIX_K1 | omnidir::CALIB_FIX_K2;
TermCriteria critia(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 200, 0.0001);
vector<cv::Mat> rvecs, tvecs;
double rms = cv::omnidir::calibrate(obj_points, image_points, image_size,
                                    K, xi, D, rvecs, tvecs, flags, critia, idx);

// Testing
Mat R = Mat::eye(3, 3, CV_32F);
Mat Mapx, Mapy;
Mat New_camera_mat(3, 3, CV_32F);

// New_camera_mat tries to get the entire FOV, but it is losing some information at the edges
New_camera_mat.at<float>(0, 0) = 100; New_camera_mat.at<float>(0, 1) = 0;   New_camera_mat.at<float>(0, 2) = 1280 / 2;
New_camera_mat.at<float>(1, 0) = 0;   New_camera_mat.at<float>(1, 1) = 100; New_camera_mat.at<float>(1, 2) = 720 / 2;
New_camera_mat.at<float>(2, 0) = 0;   New_camera_mat.at<float>(2, 1) = 0;   New_camera_mat.at<float>(2, 2) = 1;

cv::omnidir::initUndistortRectifyMap(K, D, xi_Right, R, New_camera_mat, image_size,
                                     CV_32F, Mapx, Mapy, cv::omnidir::RECTIFY_PERSPECTIVE);
remap(distorted_frame, undistorted_out_frame, Mapx, Mapy, INTER_CUBIC);
|
I am about to build a water surface vehicle (a kind of boat).
I was looking at different types of motors and ruled out stepper and servo motors. But I got confused between brushed and brushless DC motors. Brushless motors are more efficient, but brushed motors are lower cost and less complex.
Can you explain, or point me to some resources on, how to find the best motor for a low as well as a high budget? I am expecting the load to be around 12 kg. Please draw my attention to the requirements that need to be kept in mind while choosing the motors. Also, please suggest links where I can learn about water thrusters and propellers.
|
I am currently taking part in a competition called CanSat, in which you build a satellite in the form of a standard soda can. It will then be launched to 30 meters, 100 meters, and finally 500 meters. I'm stuck on something and I need a little bit of help.
Basically, I am looking at streaming low-quality video (480p at 10 fps / 240p at 30 fps), using an Arduino Uno/Mega, from inside the can. The spatial limitations are 115 mm in height and 65 mm in diameter. I will not be able to use a conventional IP camera due to these limitations.
I was thinking about using a Bluetooth V2 chip in order to achieve the transfer rate needed to stream to a PC. I am looking for the following help:
What camera to use
What software to use to receive the video on a PC
And finally any other relevant information about the limitations of the Arduino and streaming.
|
I have this servo motor (http://robokits.co.in/motors/high-torque-encoder-dc-servo-motor-60rpm-with-uart-i2c-ppm-drive?gclid=CLHf9_fCqNQCFVAeaAodhM0Ddg&). Generally, I have seen servo motors with only three wires, but this servo motor has 6 wires. I want to rotate it using a PPM signal, and accordingly I made the connections as described in the motor manual. The Arduino code to rotate the motor is:
// Include the Servo library
#include <Servo.h>
// Declare the Servo pin
int servoPin = 10;
// Create a servo object
Servo Servo1;
void setup() {
  // We need to attach the servo to the used pin number
  Servo1.attach(servoPin);
}

void loop() {
  // Make servo go to 0 degrees
  Servo1.write(0);
  delay(1000);
  // Make servo go to 90 degrees
  Servo1.write(90);
  delay(1000);
  // Make servo go to 180 degrees
  Servo1.write(180);
  delay(1000);
}
But what the motor does is turn continuously.
How do I control the motor position using PPM?
Thanks.
|
What is the general structure of a .launch file for using a sensor?
For example:
1. The following code is an example of using a joystick to control TurtleSim:
<launch>
  <node pkg="turtlesim" type="turtlesim_node" name="sim"/>
  <node pkg="chapter4_tutorials" type="example1" name="example1" />
  <param name="axis_linear" value="1" type="int" />
  <param name="axis_angular" value="0" type="int" />
  <node respawn="true" pkg="joy" type="joy" name="teleopJoy">
    <param name="dev" type="string" value="/dev/input/js0" />
    <param name="deadzone" value="0.12" />
  </node>
</launch>
2. Using a laser rangefinder:
<launch>
  <node pkg="hokuyo_node" type="hokuyo_node" name="hokuyo_node"/>
  <node pkg="rviz" type="rviz" name="rviz"
        args="-d $(find chapter4_tutorials)/example2.vcg"/>
  <node pkg="chapter4_tutorials" type="example2" name="example2" />
</launch>
What exactly is the syntax?
Why do we have two <node pkg>?
|
While formulating a state matrix of a system, say a system of a typical cruise controller,
\begin{equation}
\begin{bmatrix}
\dot{v}
\end{bmatrix} = \begin{bmatrix} -\frac{b}{m} \end{bmatrix} \begin{bmatrix} v \end{bmatrix} + \begin{bmatrix} -\frac{1}{m} \end{bmatrix} \begin{bmatrix} u \end{bmatrix}
\end{equation}
\begin{equation}
y = \begin{bmatrix} 1 \end{bmatrix} \begin{bmatrix} v \end{bmatrix}
\end{equation}
If we consider $b = 0$ (negligible), does this state-space model still make sense, and is it viable? What about a state matrix with no state-variable influence in the equation?
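As a quick sanity check of the $b = 0$ case (up to the sign convention on $u$ used above):

$$
\dot{v} = \begin{bmatrix} 0 \end{bmatrix} v + \begin{bmatrix} \tfrac{1}{m} \end{bmatrix} u,\qquad y = v
\;\;\Longrightarrow\;\; \frac{Y(s)}{U(s)} = \frac{1}{m\,s},
$$

which is a pure integrator. A zero state matrix therefore still yields a perfectly valid state-space model; it just means the state has no self-dynamics and simply integrates the input.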
|