Just asking a quick question about implementing some Python AI. I have a home-built circuit running an ATmega32u2 chip and I was wondering if it's possible to use this code on the chip? Or do I have to code that in C?
|
I'm currently working on a 5-DOF robot, for which I'd like to compute the dynamic model through identification. I have experimental data representing torques (inputs) and positions (outputs). How can I obtain the dynamic model closest to the real structure, and how do I validate the obtained model? The objective is to design some nonlinear control using the obtained model. Could you help me out? Thank you so much. Regards
|
I'm trying to display an image on top of a box in a .urdf file. How could I achieve this?
|
I would like to have a Raspberry Pi robot that can find its location in a room relative to an origin point.
What methods would be cheapest?
What would be the most accurate?
Are there any others that might be in a sweet spot between the two that I should look into?
|
The weight of the drone will be 240 kg in total.
It will be a quadcopter-style setup but with 8 rotors (two on each corner) for redundancy purposes.
The aim is to achieve a thrust-to-weight ratio of 2.
So the total needed thrust will be 500 kg.
Considering that the 500 kg of thrust must be produced by 4 rotors alone (because if the 2nd rotor on a corner fails, the 1st motor should be able to compensate the thrust), each rotor should be able to produce 125 kg of thrust.
The propeller diameter will be 1.2 m (8 propellers in total).
Now coming to the question:
What should be the power rating of the motor if I want 125 kg of thrust from each propeller (when running at 85%-95% of the max RPM)?
If you have any doubts or need any details please ask; I will reply as soon as possible.
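For reference, here is a minimal actuator-disk (momentum-theory) sketch of the per-rotor power estimate I am after; it only gives the ideal lower bound at hover, and the assumed figure of merit, motor/ESC efficiency and the 85%-95% RPM margin still have to be added on top:

import math

rho = 1.225                    # air density at sea level, kg/m^3
thrust_kg = 125.0              # required thrust per rotor, kg
D = 1.2                        # propeller diameter, m

T = thrust_kg * 9.81           # thrust in newtons
A = math.pi * (D / 2.0) ** 2   # rotor disk area, m^2
P_ideal = T ** 1.5 / math.sqrt(2.0 * rho * A)   # ideal induced power, W

figure_of_merit = 0.7          # assumed rotor efficiency
P_shaft = P_ideal / figure_of_merit
print(round(P_ideal), round(P_shaft))           # watts per rotor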
|
I'm trying to model this arm and I'm having trouble figuring out what approach I could use to keep the red part always parallel to the surface of the table.
|
I'm trying to track an accelerating vehicle using a camera, an IMU, and a GPS.
I use for the state space equation a constant acceleration model:
The states are the position, the velocity, and the acceleration of the vehicle in the x- and y-direction. No input is used.
All the measurement equations are linear.
I'm wondering if I should use a discrete standard Kalman filter or a discrete extended Kalman filter (i.e. is my process nonlinear?).
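For clarity, this is the constant-acceleration model I mean, as a minimal sketch (the time step dt and the position-only measurement are placeholders):

import numpy as np

dt = 0.05
F1 = np.array([[1.0, dt, 0.5 * dt ** 2],   # one-axis constant-acceleration block
               [0.0, 1.0, dt],
               [0.0, 0.0, 1.0]])
F = np.kron(np.eye(2), F1)                 # state: [x, vx, ax, y, vy, ay]

H = np.zeros((2, 6))                       # example linear measurement: position only
H[0, 0] = 1.0
H[1, 3] = 1.0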
|
I learned recently that RobotC has a command called startMotor. It has two parameters, a motor and a speed, and it sets the motor to the speed.
But the same thing can be accomplished in one line like this:
motor[desiredMotor] = desiredSpeed;
The above line is perfectly readable and doesn't push a whole new frame onto the call stack. Why would anybody choose to use startMotor in this case?
|
Currently, I am thinking that since electromagnetic waves lose energy over distance, you might be able to figure out the relative distance to each of three radio "beacons" and use this information to triangulate the position of a Raspberry Pi robot with a radio sensor on board.
What kind of radio receiver may I need to use, and what information about the radio signal could I use to calculate the distance to the beacon? Amplitude, perhaps? The entire system would be running indoors as an Indoor Positioning System.
In short, my question is what specific type of radio receiver would be adequate for using radio signal strength for triangulation. If amplitude contains the information of the strength of the signal, then I need a sensor that is sensitive enough to pick that up, and can provide amplitude readings in the first place.
|
I am having a bit of trouble with the pictured question:
I am able to do parts a and b; however, part c is proving very difficult. My lecturer never really explained much beyond what is asked in a and b. Any hints about how to do it would be greatly appreciated. I have looked at inverse kinematics, but that seems to be for determining the angles of the joints, not the length of links and distances.
Thanks in advance for any help!
Paul
Here is the diagram of the robot also:
|
What's a good calibration object for the extrinsic calibration (rotation + translation) of a depth camera (no color sensor, only depth)? With RGB, people seem to typically use chessboard pattern or some kind of markers. Is there something similar that would work without RGB?
|
*I'm rewriting the question after I deleted the previous one because I wasn't clear enough. Hope it's fair
Given a system of the general form:
\begin{align}
x_{[k+1]} &= A\,x_{[k]} + B\,u_{[k]} \\
y_{[k]} &= C\,x_{[k]} + D\,u_{[k]}
\end{align}
I would like to know how I should place the poles for the closed-loop observer system $A-L\,C$.
I know that the observer has to be faster than the real system poles, so the poles of $A-L\,C$ should be closer to the origin than those of $A+B\,K$. But I don't know about any other constraint on their position. If there isn't any other bound, why don't we place the poles at 0 and make $A-L\,C$ converge in one step?
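To make the question concrete, here is a toy sketch of what I am doing (the system matrices are placeholders, not my real model), using pole placement via duality:

import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

observer_poles = [0.2, 0.3]                     # inside the unit circle, but not at 0
L = place_poles(A.T, C.T, observer_poles).gain_matrix.T

print(np.linalg.eigvals(A - L @ C))             # should match observer_poles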
|
I am using ROS on the Turtlebot3 and try to teleop the robot with an XBOX 360 controller without success. I'm already able to teleop the robot via keyboard.
Setup:
Ubuntu 16.04
ROS Kinetic
What I've already tried:
Console 1:
[Remote]-$ roscore
Console 2:
[Remote]-$ rosrun joy joy_node
Console 3:
[Remote]-$ roslaunch teleop_twist_joy teleop.launch
Console 4:
[Pi]-$ roslaunch turtlebot3_bringup turtlebot3_robot.launch
Console 5:
[Pi]-$ rostopic echo joy
I can already see that the Raspberry Pi receives the command messages from the controller.
But the robot doesn't move. Any ideas?
|
I am new to the small servo area.
I bought an eBay 6-DOF robotic arm, and 12 MG996R servos to go with it.
Using an Arduino and an I2C PCA9685 16-channel PWM board, with the example code from Adafruit, I set up 5 of the servos with conservative values for an overnight test.
2 of the servos died; individually, there is zero response when voltage is applied to them (no hum, no mechanical movement, no heat).
Since then, I've had another one die.
Is this common when buying at "bottom feeder" prices on eBay?
I was wanting to get into this area without spending a bundle, but a 25% failure rate seems excessive even for eBay.
Thanks for any guidance.
|
A recent startup named Skydio develops autonomous drones for photography. The drone is named R1 and utilizes 13 cameras to map its surroundings for localization and motion planning. The brain of this drone is an Nvidia Jetson TX1, yet the number of cameras that a TX1 board can support is 6. I'm wondering how they managed to use 13 cameras.
Does anyone know the answer to this question?
|
I need some help with a project where I am trying to operate a unipolar stepper motor from a Raspberry Pi using 4 GPIO pins and an L293D IC as a driver to reach the stepper motor's 12 V operating voltage. The method I have is to choose the + or - half of each coil on the stepper and switch between them, while using the 5th wire as a common ground; the problem is that I need the GPIO pins that are not set HIGH to not become a ground connection. Any help would be appreciated, or another method that I haven't thought of would be good too.
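For reference, this is the switching pattern I have in mind, as a minimal sketch (the BCM pin numbers, step count and timing are placeholders; it is the simple one-half-coil-at-a-time full-step sequence):

import time
import RPi.GPIO as GPIO

PINS = [17, 18, 27, 22]            # L293D inputs for coil halves A+, A-, B+, B-
SEQUENCE = [(1, 0, 0, 0),          # A+
            (0, 0, 1, 0),          # B+
            (0, 1, 0, 0),          # A-
            (0, 0, 0, 1)]          # B-

GPIO.setmode(GPIO.BCM)
for pin in PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

try:
    for _ in range(200):                       # 200 full steps
        for step in SEQUENCE:
            for pin, level in zip(PINS, step):
                GPIO.output(pin, level)
            time.sleep(0.01)
finally:
    GPIO.cleanup()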
|
I am wiring up a kids ride-on toy (Powerwheels) to control both speed and steering with an Arduino. The ultimate goal is to have the Arduino take analog inputs from the driver (steering and acceleration) as well as digital inputs from a Radio Control source. The Arduino will combine these inputs such that the Radio inputs override the driver, but otherwise the driver is free to operate the vehicle.
The vehicle is driven by 2 motors, Fisher Price part 00968-9015. From what I can find on the internet they are 12v DC brushed motors that run @ 1.25amps with no load, and supposedly hit 70 amps during stall.
I plan to drive the motors with the Actobotics Dual Motor Controller in split mode. Channel 1 will operate Steering with a 4" Stroke 25 lb Thrust Heavy Duty Linear Actuator from #ServoCity. Channel 2 would operate the above mentioned set of Fisher Price motors.
I have 2 SLA batteries that are connected in parallel for longer run time while maintaining a 12v output. The positive(+) terminal goes through a 30Amp DC circuit breaker before going to the rest of the system.
I'm a software engineer so programming the Arduino isn't a concern, but I have to admit I'm out of my comfort zone with wiring and electronics. I could use any input on the wiring of this project. If I'm wiring something wrong, or should be using other/different components, suggestions are welcome!
Question 1: Can I run two 12v motors in parallel on channel 2 of the Actobotics Dual Motor Controller from #ServoCity? (can Channel 2 handle that load)
Question 2: Will the parallel 12v SLA batteries (as shown) be sufficient to run both motors, the Linear Actuator and an Arduino without any problem or is there a more appropriate power configuration? I would like to avoid making the worlds slowest Powerwheels car.
Thanks!
|
I have a system of two equations that describes the position of a robot end-effector ($X_C, Y_C, Z_C$) as a function of the prismatic joint positions ($S_A, S_B$):
$S^2_A - \sqrt3(S_A + S_B)X_C = S^2_B + (S_A - S_B)Y_C$
$X^2_C + Y^2_C + Z^2_C = L^2 - S^2_A + S_A(\sqrt3X_C + Y_C)+M(S^2_A+S_BS_A + S^2_B)$
where M and L are constants.
In the paper, the author states that differentiating this system at a given point ($X_C, Y_C, Z_C$) gives the "differential relationship" in the form:
$a_{11}\Delta S_A + a_{12}\Delta S_B = b_{11}\Delta X_C + b_{12}\Delta Y_C + b_{13}\Delta Z_C$
$a_{21}\Delta S_A + a_{22}\Delta S_B = b_{21}\Delta X_C + b_{22}\Delta Y_C + b_{23}\Delta Z_C$
Later on, the author uses those parameters ($a_{11}, a_{12}, b_{11}...$) to construct matrices, and by multiplying them he obtains the Jacobian of the system.
I'm aware of partial differentiation, but I have never done this for a system of equations, nor do I understand how to get those delta parameters.
Can anyone explain what are the proper steps to perform partial differentiation
on this system, and how to calculate delta parameters?
EDIT
Following the advice given by N. Staub, I differentiated the equations w.r.t. time.
First equation:
$S^2_A - \sqrt3(S_A + S_B)X_C = S^2_B + (S_A - S_B)Y_C$
$=>$
$2S_A \frac{\partial S_A}{\partial t} -\sqrt3S_A \frac{\partial X_C}{\partial t} -\sqrt3X_C \frac{\partial S_A}{\partial t} -\sqrt3S_B \frac{\partial X_C}{\partial t} -\sqrt3X_C \frac{\partial S_B}{\partial t} = 2S_B\frac{\partial S_B}{\partial t} + S_A\frac{\partial Y_C}{\partial t} + Y_C\frac{\partial S_A}{\partial t} - S_B\frac{\partial Y_C}{\partial t} - Y_C\frac{\partial S_B}{\partial t}$
Second equation:
$X^2_C + Y^2_C + Z^2_C = L^2 - S^2_A + S_A(\sqrt3X_C + Y_C)+M(S^2_A+S_BS_A + S^2_B)$ $=>$
$2X_C \frac{\partial X_C}{\partial t} + 2Y_C \frac{\partial Y_C}{\partial t} + 2Z_C \frac{\partial Z_C}{\partial t} = -2S_A \frac{\partial S_A}{\partial t} + \sqrt3S_A \frac{\partial X_C}{\partial t} +\sqrt3X_C \frac{\partial S_A}{\partial t} + S_A \frac{\partial Y_C}{\partial t} + Y_C \frac{\partial S_A}{\partial t} + 2MS_A \frac{\partial S_A}{\partial t} + MS_B \frac{\partial S_A}{\partial t} + MS_A \frac{\partial S_B}{\partial t} + 2MS_B \frac{\partial S_B}{\partial t}$
then, I multiplied by $\partial t$, and grouped variables:
First equation:
$(2S_A -\sqrt3X_C - Y_C)\partial S_A +(-2S_B -\sqrt3X_C + Y_C)\partial S_B = (\sqrt3S_A +\sqrt3S_B)\partial X_C + (S_A - S_B)\partial Y_C$
Second equation:
$(-2S_A+\sqrt3X_C+Y_C+2MS_A + MS_B)\partial S_A + (MS_A + 2MS_B)\partial S_B = (2X_C-\sqrt3S_A)\partial X_C + (2Y_C-S_A)\partial Y_C + (2Z_C)\partial Z_C$
therefore I assume that required parameters are:
$a_{11} = 2S_A -\sqrt3X_C - Y_C$
$a_{12} = -2S_B -\sqrt3X_C + Y_C$
$a_{21} = -2S_A + \sqrt3X_C + Y_C + 2MS_A + MS_B$
$a_{22} = MS_A + 2MS_B$
$b_{11} = \sqrt3S_A +\sqrt3S_B$
$b_{12} = S_A - S_B$
$b_{13} = 0$
$b_{21} = 2X_C - \sqrt3S_A$
$b_{22} = 2Y_C - S_A$
$b_{23} = 2Z_C$
Now, according to the paper, the Jacobian of the system can be calculated as:
$J = A^{-1} B$,
where
$A=(a_{ij})$
$B=(b_{ij})$
so if I'm thinking right, it means:
$$
A =
\begin{matrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
\end{matrix}
$$
$$
B =
\begin{matrix}
b_{11} & b_{12} & b_{13} \\
b_{21} & b_{22} & b_{23} \\
\end{matrix}
$$
and the Jacobian is the product of the inverse of the A matrix and the B matrix.
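For reference, here is a small SymPy sketch of the procedure I followed (writing both constraints as f_i = 0, so A collects the coefficients of the dS terms and B, with the sign flipped, those of the dX, dY, dZ terms):

import sympy as sp

SA, SB, XC, YC, ZC, M, L = sp.symbols('S_A S_B X_C Y_C Z_C M L')

f1 = SA**2 - sp.sqrt(3)*(SA + SB)*XC - SB**2 - (SA - SB)*YC
f2 = (XC**2 + YC**2 + ZC**2
      - (L**2 - SA**2 + SA*(sp.sqrt(3)*XC + YC) + M*(SA**2 + SB*SA + SB**2)))

F = sp.Matrix([f1, f2])
A = F.jacobian([SA, SB])             # a_ij: coefficients of dS_A, dS_B
B = -F.jacobian([XC, YC, ZC])        # b_ij: coefficients of dX_C, dY_C, dZ_C

J = sp.simplify(A.inv() * B)         # A * dS = B * dX  =>  dS = J * dX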
Next, author states that Jacobian at given point, where
$X_C = 0$
$S_A=S_B=S$
$Y_C = l_t-\Delta\gamma$
is equal to:
$$
J =
\begin{matrix}
\frac{\sqrt3S}{2S-l_t+\Delta\gamma} & -\frac{\sqrt3S}{2S-l_t+\Delta\gamma}\\
\frac{2(l_t-\Delta\gamma)-S}{6cS-2S+l_t-\Delta\gamma} & \frac{2(l_t-\Delta\gamma)-S}{6cS-2S+l_t-\Delta\gamma} \\
\frac{2Z_C}{6cS-2S+l_t-\Delta\gamma} & \frac{2Z_C}{6cS-2S+l_t-\Delta\gamma} \\
\end{matrix} ^T
$$
Everything seems fine. BUT! After multiplying my A and B matrices I get some monster matrix that I am unable to paste here, because it is so large!
Substituting the variables with the values given by the author does not give me the proper Jacobian (I tried the substitution both before multiplying the matrices (on the parameters) and after the multiplication (on the final matrix)).
So clearly I'm still missing something. Either I've made an error in the differentiation, or I've made an error in the matrix multiplication (I used Maple), or I don't understand how to substitute those values. Can anyone point me in the right direction?
EDIT
Problem solved! The parameters I calculated were correct; I had just messed up the simplification of the equations in the final matrix. Using the snippet from Petch Puttichai I was able to obtain the full Jacobian of the system. Thanks for the help!
|
I am building a robot (2 powered wheels and one ball bearing). The problem is that I can't seem to make it drive straight. I literally find it impossible; I have been trying for weeks.
I have two gyro sensors (one on each wheel) and the current motor rotations.
The end result would be to have the gyros at 0 degrees; I do not really care about the motor rotations.
I tried to search on Google, but most results are GPS-related. Is there a way I can do this?
|
I am planning to make a 4-degree-of-freedom (DOF) robotic arm. I am confused about the mechanical design of the robot. Should I use two 4-bar mechanisms to reduce the number of actuators, or a stepper motor on every linkage? Also, a gripper will be attached. Which option will have the higher payload?
|
I'm working on a wheeled-robot platform. My team is implementing some algorithms on an MCU to:
keep getting sensor readings (sonar array, IR array, motor encoders, IMU)
receive user commands (via a serial port connected to a tablet)
control actuators (motors) to execute user commands
keep sending sensor readings to the tablet for more complicated algorithms.
We currently implement everything inside a global while-loop, while I know that most other use cases do the very same things with a real-time operating system.
Please tell me the benefits of, and reasons for, using a real-time OS instead of a simple while-loop.
Thanks.
|
Apparently, there are active and passive sensors. An infrared proximity sensor is said to be an active sensor, whereas a gyroscope is said to be a passive sensor. Why is that the case? In general, what is the difference between an active and a passive sensor? Multiple examples (with appropriate explanations) may be helpful to clarify the distinction.
|
Given a CAN bus with one master and any number of motor drivers, is there a way to measure the time between sending a command from the master (e.g. a PC running ROS) and the execution of the said command (e.g. the motor starting to move)?
This matters since it affects the maximum update rate of the robot's control loop.
|
I know robotics may refer to many things and it's an extremely large field today. But if I narrow the topic to something like Aldebaran (the company that makes the Nao robots), what knowledge should I have as a founder and manager of such a company?
I would really love to have a company like that. I know some things, from OpenCV to ARM microcontroller programming, and have a little experience with SolidWorks and Altium Designer.
But as you know this is a very big field, and the more I read/learn, the more I'm disappointed! I feel I can't reach a point where I can say that tomorrow I will put together a team and start my first robot project, because I think there are many, many things that I don't know and many skills that I don't have!
So this made me ask this question here: what knowledge or skills are really needed to establish a company like Aldebaran or similar companies?
NOTE: I'm not talking about simple robot projects like line-following robots. I am talking about humanoid intelligent robots with machine vision, listening/speaking abilities and good mechanical ability (the iCub robot is another example).
Who, with what knowledge, can create such a company and lead a great engineering team of computer/electronics/mechanics/other engineers?
|
I installed a command-line tool, which accepts GenICam packets from a device as input, into a Docker image. If I use the option --network host in the run command for the container, the packets are received. If I use -p 3956:3956/udp to expose the GenICam default port 3956 (e.g. stated here: RoboRealm - GenICam - Instructions) in the run command of the container, I don't get packets. What did I do wrong? Could the port be different from the default? How could I find the changed port then?
|
I bought a Roboclaw controller. Since then, I have looked around for others. The software support seems to be in the same (to me, sad) state.
They all seem to provide Arduino code examples; however, I guess it's then up to me to find / guess at the headers?
If I don't want to use Python, and instead I want to use C / C++, how do I best go about doing so on a Raspberry Pi?
It would just seem to me that if I spent maybe several hundred dollars on some hardware, it shouldn't be this hard to get some use out of it.
Am I missing something here?
|
I'm a newbie on ROS and I'm trying to figure out how ROS works, so I'm installing ROS from source.
I've found that most ROS packages contain two kinds of code: C++ and Python. For example, here is the layout of the src directory of the ROS package actionlib:
src/
├── actionlib
│   ├── action_client.py
│   ├── action_server.py
│   ├── exceptions.py
│   ├── goal_id_generator.py
│   ├── handle_tracker_deleter.py
│   ├── __init__.py
│   ├── server_goal_handle.py
│   ├── simple_action_client.py
│   ├── simple_action_server.py
│   └── status_tracker.py
├── connection_monitor.cpp
└── goal_id_generator.cpp
I'm wondering if I can remove all of the Python scripts and only cmake && make the C++ files to use the ROS package actionlib?
|
Not sure if this is the proper community, apologies if not. I'm wanting to start a DIY project at home to automate my blinds through Alexa. I'm comfortable with the tech from the microcontroller up, but it's been quite a while since I've interfaced with any motors.
I'm leaning towards brushless motors, mostly because I'm somewhat familiar with them. I need something that can spin several rotations, something whose position I can keep track of (or better, the power required for rotation, so that if it exceeds some value I know the blinds are fully shut or closed due to that resistance), and ultimately something somewhat energy efficient so I don't have to change the AA batteries every week.
Any suggestions to get me going would be greatly appreciated.
|
Consider a simple example of Bundle Adjustment where I have robot and landmark poses $x = \left[ x_p \text{ } x_m\right]^T$ and measurements given by $z$, such that a simple factor graph can be generated with the nodes containing poses and edges containing the measurements. I'll have to solve for a non-linear least squares problem of the form $C(x) = \frac{1}{2}|| r(x) ||^2$ where $r(x)$ denotes the residuals.
I can implement and use any non-linear least squares optimization algorithm such as Gauss-Newton or use a popular library like ceres-solver.
My question is: Now suppose out of the state variables $x = \left[ a \text{ } b\right]^T$, I need to marginalize some variables $b$, while keeping the rest $a$. How do I apply this in terms of Gauss-Newton Algorithm and ceres-solver?
I understand the Gauss-Newton Algorithm and Schur Complement.
If the original covariance of the system is
\begin{equation} K =
\begin{bmatrix}
A & C^T \\
C & D
\end{bmatrix}
\end{equation}
Original Information
\begin{equation} K^{-1} =
\begin{bmatrix}
\Lambda_{aa} & \Lambda_{ab} \\
\Lambda_{ba} & \Lambda_{bb}
\end{bmatrix}
\end{equation}
Marginalized covariance $K_m = [A]$ and marginalized information $K_m^{-1} = A^{-1}$, where $A^{-1}$ is computed by the Schur complement $A^{-1} = \Lambda_{aa} - \Lambda_{ab}\Lambda_{bb}^{-1}\Lambda_{ba}$.
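As a sanity check, this is how I picture the marginalization step numerically (a minimal sketch on an information matrix; the index sets are placeholders):

import numpy as np

def marginalize_information(Lambda, keep_idx, marg_idx):
    # Schur complement of the information matrix: keep variables a, drop variables b.
    Laa = Lambda[np.ix_(keep_idx, keep_idx)]
    Lab = Lambda[np.ix_(keep_idx, marg_idx)]
    Lba = Lambda[np.ix_(marg_idx, keep_idx)]
    Lbb = Lambda[np.ix_(marg_idx, marg_idx)]
    return Laa - Lab @ np.linalg.solve(Lbb, Lba)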
Now, what do I do when new poses and measurements are added to the system and optimization is done?
|
Is there any cheap way to measure how hard a servo is turning?
I am looking to measure changes of about 0.1 oz-in.
What are the common ways to measure servo torque, how much do they cost, and how precise are they?
Edit for clarification: I am looking to turn the servo at a constant speed, and measure how much the object it is turning is "fighting back."
|
I want to design an EKF to estimate the position of a UAV. If I were doing this with Euler angles then I would have a state vector that would look like
$$\begin{bmatrix}north&east&down&vel_x&vel_y&vel_z&a_x&a_y&a_z&y&p&r& \omega_x&\omega_y&\omega_z\end{bmatrix}$$
With velocities and accelerations being in body frame and exactly like I obtain from the sensors. These would be connected to the position states via the angles of roll, pitch, yaw.
However, now I have quaternions and I don't know how to form the dynamical system. Based on what I have read the state has to be:
$$\begin{bmatrix}north&east&down&vel_n&vel_e&vel_d&a_n&a_e&a_d&q_1&q_2&q_3&q_4& \omega_x&\omega_y&\omega_z\end{bmatrix}$$
The relationship between $ \dot{q} $ and $q$ is
$ \dot{q} = \frac{S(\omega)}{2} q$, where
$S(\omega) = \begin{bmatrix}
0 & -\omega_x & -\omega_y &-\omega_z\\
\omega_x & 0 & \omega_z &-\omega_y\\
\omega_y & -\omega_z & 0 &\omega_x\\
\omega_z & \omega_y &-\omega_x&0\\
\end{bmatrix}$
So the major difference is that I use the world frame for the velocities and accelerations. I presume this would require a preprocessing step that uses the orientation of the vehicle to transform the acceleration measurements from the body frame to the world frame during the Kalman correction step. The Kalman filter is given measurements in the world frame, which is not exactly what I get from the sensors, so I imagine my measurement noise will not be populated using values from a spec sheet; I will rather have to measure those. The nonlinearity in the system comes only from the $S(\omega)$ function. Does this sound like I'm on the right track?
If anyone has some tutorial that shows exactly how to do this I would greatly appreciate it.
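For reference, the quaternion propagation I have in mind looks like this as a minimal sketch (first-order integration with renormalization; the quaternion ordering [q1, q2, q3, q4] = [w, x, y, z] matches the $S(\omega)$ above):

import numpy as np

def omega_matrix(w):
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate_quaternion(q, w, dt):
    # q_dot = 0.5 * S(w) * q, integrated with a simple Euler step
    q_next = (np.eye(4) + 0.5 * dt * omega_matrix(w)) @ q
    return q_next / np.linalg.norm(q_next)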
|
I have a picture in my C# software and the corresponding object in the real world on the 2D plane of a motorized XY-axis table. What I want is that when I select a pixel on my picture, the motorized table moves to the exact same point on the real-world object.
The real world object and its picture may have distortion and rotation differences and the picture on the software is not perfect too.
I use 4 points on each to map all the points in between, example:
Normal picture (x,y): [250, 25] , [250, 287] , [399, 287] , [400, 28] (in pixels)
Table coordinates (x,y): [0, 0] , [2098, 29538] , [19127, 28164] , [17097, -1200] (in microsteps)
I tried using OpenCV's homography:
I used FindHomography() to get the H matrix and I transform the picture point choosed using PerspectiveTransform() which give me the corresponding point in microsteps on the real world to move the motorized XY axis table.
OpenCvSharp.Mat hCv = OpenCvSharp.Cv2.FindHomography(srcPoints, dstPoints);
OpenCvSharp.Point2d[] resultCv = OpenCvSharp.Cv2.PerspectiveTransform(targetCv, hCv);
I also manualy calculated the homography matrix using this anwser : https://math.stackexchange.com/a/2619023
But in both cases I always get an error when transforming one of the four reference points: for [250, 25] the corresponding point should be [0, 0], but instead I get something like [-25, 245].
Question: Is there a different way to link picture coordinates to real-world coordinates accurately?
Edit, more explanations:
To get my 8 points I select four points on the picture, and then in the real world I move my table to the four corresponding points manually. Let's say I took a picture of my smartphone. I take my points on the four edges of the phone. Then if I choose the X, Y pixel position corresponding to the front camera of my phone, my motorized table should move so that a landmark ends up above the front camera of my real phone.
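For completeness, this is the same pipeline as a minimal Python/OpenCV sketch, using the four point pairs listed above (with exactly four pairs the homography should map each reference point back onto its table coordinates, which is why the [-25, 245] result surprises me):

import numpy as np
import cv2

src = np.array([[250, 25], [250, 287], [399, 287], [400, 28]], dtype=np.float32)
dst = np.array([[0, 0], [2098, 29538], [19127, 28164], [17097, -1200]], dtype=np.float32)

H, _ = cv2.findHomography(src, dst)

pts = src.reshape(-1, 1, 2)                    # perspectiveTransform wants shape (N, 1, 2)
mapped = cv2.perspectiveTransform(pts, H)
print(mapped.reshape(-1, 2))                   # expect ~[[0, 0], [2098, 29538], ...]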
|
I have an old powered wheelchair motor from a Hoveround. I have removed the gearbox and I would like to attach a lawnmower blade directly to the shaft.
As you can see from the photo, it has 7 helical splines. Am I correct in assuming that I need a helical splined hub to mesh with it, which would either have a threaded shaft on the other side to hold the mower blade, or a method by which a threaded shaft can be connected?
If that's not the name of the part I need, what would it be? I've looked at many stores (McMaster, Servo City, etc) but can't seem to find something like this, which makes me think I am calling it the wrong thing.
FYI, the motor specs are:
Brand: Shihlin
Type: A9Y1X02872
Part #: M19004442
24V DC
Thanks for any help.
|
If two or more quadcopters are aligned, would the effect of drag due to the lead quadcopter's slipstream lead to measurable energy savings? And if so, how would one determine the optimal distance between the quadcopters?
|
I'm getting data from an IMU and a magnetometer synchronously. My roll and pitch drifts are corrected with the accelerometer inside the IMU, and so I have the DCM matrix during my sensor reading. I also have the true heading from the magnetometer as a simple H degrees from north. What is the most straightforward way of correcting my DCM with this H angle? I don't want to go through Kalman filtering; I just want the formulation to simply compensate my yaw angle with this H value.
Based on the answers and comments I think my question was not clear, so I will try to clarify it.
More detail:
I have an IMU which has an accelerometer and a gyroscope and internally compensates for pitch and roll drift; this IMU not only provides compensated pitch and roll angles as well as an uncompensated yaw angle, but also provides all the raw accelerometer and gyroscope data. By roll, pitch and yaw angle I mean the three Euler angles $\phi$, $\theta$, $\psi$ (which are the result of a 3-2-1 rotation).
Now I have added a magnetometer to my system. I read the raw data from the magnetometer and, using atan2, I convert it to a heading. Now I want to use a simple complementary filter to compensate for yaw drift, and I am asking here for its formulation. My main problem arises from the fact that the $\psi$ (yaw) angle from the IMU is not directly related only to the heading; for instance, I cannot simply replace the yaw angle ($\psi$) with the heading value! So I want to know with which formulation I can simply relate this heading to the yaw angle.
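For clarity, the kind of formulation I am after is something like this minimal complementary-filter sketch (assuming the heading has already been converted to the same convention and frame as $\psi$, which is exactly the part I am unsure about):

import math

def wrap_pi(a):
    return math.atan2(math.sin(a), math.cos(a))     # wrap an angle to (-pi, pi]

def complementary_yaw(yaw, yaw_rate, heading, dt, alpha=0.98):
    # yaw_rate should be the Euler yaw rate derived from the body rates and roll/pitch
    predicted = yaw + yaw_rate * dt                 # integrate the gyro
    innovation = wrap_pi(heading - predicted)       # magnetometer correction
    return wrap_pi(predicted + (1.0 - alpha) * innovation)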
|
I am using a UR3 robot with an OnRobot RG2 gripper. I have successfully connected the robot to my computer via Ethernet and I am able to send move commands and receive data successfully with Python. But the commands for the RG2 gripper provided in the manual are not working. I can send commands to the gripper via the teach pendant (with the RG2 command) but not from my computer via Ethernet. Does anyone have a solution to this problem? Thanks in advance.
|
I have a ROS-based robot that runs on an Odroid with a 3d Camera (Openni-based) and would like to use Kinfu (or something similar) on it. I'm not sure if the Mali-GPU is fast enough, but so far I didn't find an implementation that told specifically that it was used on an Odroid.
Does someone know such an implementation?
|
With the Create 2 iRobot, if it is already moving backwards (its bump sensors are on the front), how would I code it to sense that it is hitting something on its back side, using only the Create's sensors?
|
I have the following set of coordinate frames (translations are not important in this case):
$w$, the reference frame.
$l$, a "left" reference frame. Rotated $-20\deg$ around $Y_l$ (axes $Y$ of frame $l$), rotated $-12\deg$ around $X_l$, and translated along $-X_w$.
$r$, a "right" reference frame, similar to left. Rotated $20\deg$ around $Y_r$, rotated $-12\deg$ around $X_r$, and translated along $X_w$.
I hope the diagram makes it more clear:
The rotation matrices that describe the orientations are ($R_{w,l}$ represents rotation from $w$ to $l$, values are rounded):
$R_{w,l} = \left[ \begin{array}{{c}}
0.94&0&-0.34\\
0.07&0.98&0.2\\
0.33&-0.21&0.92
\end{array}\right]$, $R_{w,r} = \left[ \begin{array}{{c}}
0.94&0&0.34\\
-0.07&0.98&0.2\\
-0.33&-0.21&0.92
\end{array}\right]$
The problem comes when I want to know the relative rotation between $l$ and $r$: $R_{l,r}$. If I am not mistaken, this rotation can be computed from the ones I have:
$R_{l,r} = R_{l,w}R_{w,r} = R^T_{w,l}R_{w,r}$.
When I do this, I get the following result:
$R_{l,r} = \left[ \begin{array}{{c}}
0.77&0&0.64\\
0&1&0\\
-0.64&0&0.77
\end{array}\right]$
Which in Euler angles corresponds to just a rotation of $40\deg$ around $Y$ (I am not sure which $Y$!). However, this does not make sense to me, because the only $Y$ for which this would make sense is $Y_w$.
What am I missing?
Expected result
I tried directly with the Euler angles. To convert from $l$ to $r$, one should do (starting in $l$):
$R_x(12\deg) \rightarrow R_y(20\deg) \rightarrow R_y(20\deg) \rightarrow R_x(-12\deg)$
Which are computed as matrices as follows:
$R_{l,r} = R_x(12\deg)R_y(40\deg)R_x(-12\deg)$
And this provides the following result:
$R_{l,r} = \left[ \begin{array}{{c}}
0.77&-0.13&0.63\\
0.13&0.99&0.05\\
-0.63&0.05&0.77
\end{array}\right]$
Which makes way more sense, and actually represents the rotation from $l$ to $r$.
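For reference, a small NumPy sketch that reproduces both computations from the rounded matrices and angles above:

import numpy as np

def Rx(deg):
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(deg):
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

R_wl = np.array([[0.94, 0.00, -0.34],
                 [0.07, 0.98,  0.20],
                 [0.33, -0.21,  0.92]])
R_wr = np.array([[ 0.94, 0.00, 0.34],
                 [-0.07, 0.98, 0.20],
                 [-0.33, -0.21, 0.92]])

print(np.round(R_wl.T @ R_wr, 2))              # the ~40 deg about Y result
print(np.round(Rx(12) @ Ry(40) @ Rx(-12), 2))  # the "expected" construction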
|
I want to connect an IR range finder (DLEM 20) to a system running the Linux kernel. It looks like they don't provide any kind of SDK or drivers for their products. So how should I get data from such sensors over a UART connection?
|
Is the method to access sensors with RS-232/422, UART and USB interfaces the same in all cases? Or, by converting RS-232 to USB or UART to USB, can they be programmed from Linux in the same way as a USB device?
For example, this link provides a way to access such a port from the Linux command line and from a program. So my question is: considering I have another serial device/sensor with the same type of connectivity (i.e. serial), wouldn't it be accessed in the same way as described in the link above?
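For example, in Python something like this is what I have in mind for any of these devices (the device path, baud rate and line-based output are assumptions that depend on the converter and sensor):

import serial  # pyserial

with serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=1.0) as port:
    line = port.readline()                          # many sensors stream ASCII lines
    print(line.decode(errors='replace').strip())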
|
I am applying an Extended Kalman Filter to a mobile robot with IMU and odometry data. I am currently running simulations. However, I don't have suitable odometry/IMU measurement data to use. Where can I find that information?
|
I'm trying to obtain the dynamic model of a 3D robot (academic problem), I have obtained the D-H table, the transformation matrix for each pair of links and the total transformation matrix. Now I'm struggling with computing the total Kinetic energy (for using it in the Lagrangian approach).
In some texts, only the linear velocity is used in computing the kinetic energy, even if there are revolute links, that is:
$K=\frac{1}{2}m~v^2$
But in some others, both the linear and the angular velocities are considered :
$K=\frac{1}{2}m~v^2 + \frac{1}{2} I~\omega^2$
I'm a little bit confused by this: when should I, and when should I not, use the angular contribution to the kinetic energy?
|
I have a dataset named Robot_odometry which consists of 3 fields: time (s), forward velocity (m/s) and angular velocity (rad/s). I took this dataset from the http://asrl.utias.utoronto.ca/datasets/mrclam/index.html website. I want to calculate the x, y, theta coordinates using this odometry dataset. From the conversion between rectangular and polar coordinates I know the two formulas x = r cos(theta) and y = r sin(theta). I want to know whether that is the right approach to find the robot coordinates (x, y), or whether there is something else that can be used to find the robot coordinates from odometry data.
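For comparison, the alternative I keep seeing is plain dead reckoning from (v, w), roughly like this minimal sketch (simple Euler integration; argument names are placeholders):

import numpy as np

def integrate_odometry(t, v, w, x0=0.0, y0=0.0, theta0=0.0):
    # t: times (s), v: forward velocity (m/s), w: angular velocity (rad/s)
    x, y, theta = [x0], [y0], [theta0]
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        x.append(x[-1] + v[k - 1] * np.cos(theta[-1]) * dt)
        y.append(y[-1] + v[k - 1] * np.sin(theta[-1]) * dt)
        theta.append(theta[-1] + w[k - 1] * dt)
    return np.array(x), np.array(y), np.array(theta)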
|
I am building a robot. I am going to use an ONBO 6S 22000 mAh 25C battery to power all my controller boards and motors. It needs to power the following items:
1 motor driver (24 V, 6 A);
Microcontroller board (5 V, 6 A);
Vacuum motor (12 V, 20 A).
Is there any power distribution board which can output different voltages from one battery source?
|
I am doing a project on SLAM. I have a dataset from a moving robot which gives me the forward velocity (m/s), the angular velocity (rad/s) and the time (s). With this data I can find the x, y, theta values of the robot, which I can plot to get the robot's path.
Now I am working on Graph SLAM and am a little bit confused about one aspect. I know Graph SLAM also gives us a path and a map, but if I can already determine the path using the forward velocity and angular velocity, then what is the need for Graph SLAM?
|
Let's say we have two equations for a pose $\textbf{R},\textbf{t}$ optimization problem.
For ICP constraint
$$\textbf{e}{_{I}}=\textbf{n}^\top(\textbf{p}_d-(\textbf{R}\textbf{p}_s+\textbf{t}))$$
For epipolar constraint
$$\textbf{e}{_{c}}=\textbf{u}_s^\top[\textbf{t}]_{\times}\textbf{R}\textbf{u}_d$$
Then, for the optimization we define an objective function as follows and find $\textbf{R},\textbf{t}$ that minimize it.
$$\mathbf{J}=\alpha\frac{1}{2}\sum_{v=1}^{V}{\textbf{e}{_{I_v}}^\top\boldsymbol{\Sigma}_{I_v}^{-1}\textbf{e}{_{I_v}}}+\beta{\frac{1}{2}}\sum_{j=1}^{J}{\textbf{e}{_c{_j}}^\top\boldsymbol{\Sigma}_{c_j}^{-1}\textbf{e}{_{c_j}}}$$
My question is: what is the common practice for deciding the weights $\alpha,\beta$? The scales of the two constraints are very different; if we do not set the weights properly, the one with the very small scale will be ignored. I have been tuning these values manually, but I believe there are better ways.
|
I bought two servos (HS-5685MH) for a pan-tilt-unit and somehow didn't see that they require 6.0 to 7.4 Volts which I don't have in my current setup. I have a 5V source that could provide the power, but I'm not sure what will happen to the motors if I use them with lower voltage.
Will they just be a bit slower or less powerful, or could I damage them with the lower voltage?
|
I am using FlySky i6 Transmitter and a FS-iA6B receiver.
I have it set up using PPM and BetaFlight recognizes the motors in the Receiver section. All the values adjust according to the toggles and joysticks.
For some reason, though, the motors won't turn. I am not sure if I am missing a setting or a vital step, but this is vital for the quadcopter to fly. :)
Any thoughts? Suggestions?
Side Note: I have set the PPM value on the transmitter to on.
|
What we are specifically interested in is for our quad-copter to be able to detect and maneuver to avoid (maneuverability is based on our custom algorithms) objects flying towards it with great speeds. A paintball gun projectile for example. Regarding the coverage of all possible directions, we are not concerned whether or not it will be achieved with one or many sensors. What we are interested in is if a sensor that can do that exists and is suitable to be mounted on-board a drone.
|
I am new here and with little experience in robotics.
I have assembled this holonomic robot:
https://shop.wickeddevice.com/product/omniwheel-robot-complete-kit/
It has 3 Omni-wheels, 3 gearhead motors. It is controlled by a pre-programmed Arduino 1 with a custom motor shield.
It is set up to be controlled with R/C inputs from standard radio control gear. I would like to replace the R/C input control with Bluetooth. I would like to use the robot for research purposes, and R/C does not give me the flexibility to have pre-coded functions to send from the computer. I would like to use the Bluefruit LE module, which comes with an iOS app.
https://learn.adafruit.com/bluefruit-le-connect-for-ios/controller.
The current set-up comes with a Motor Shield Library + Arduino Code, see links:
https://github.com/WickedDevice/WickedMotorShield
https://github.com/WickedDevice/OmniWheelControl/blob/master/OmniWheelControl.ino
My questions are: would you advise using the current setup with Bluetooth? Would you advise using another motor shield?
Thanks
|
I have seen the usage of the term "omnidirectional" in robotics, but I have not found a (precise) definition.
In the chapter Omni-Directional Robots from the book "Embedded Robotics", it is stated:
In contrast, a holonomic or omnidirectional robot is capable of driving in any direction
which seems to indicate that these words are synonyms.
What's an omnidirectional robot? What is the difference between a holonomic and an omnidirectional robot (if any)? What are examples of omnidirectional and holonomic robots (or other objects)?
|
I have a trajectory of a quadcopter in 3D. Out of all the poses in the trajectory, I want to correct one of the poses. I then want to adapt the rest of the poses based on the newly corrected pose. I understand that this is a common problem in robotics and there would be a number of solutions. I want a tried and tested c++ library that does this as I want to integrate this feature in my system but not spend time on building it myself. What are the possible c++ libraries I can use?
|
I wanted to know the actual mathematics behind the path planners MoveIt! uses for manipulators from OMPL. I tried to look into source codes but couldn't get enough details.
I wish to know:
How is the cost function implemented, i.e., how is the path cost calculated in configuration space? It can't be Euclidean distance, I guess?! So what is it?
How is sampling done? A sampler is called in the src files, but I couldn't get the details. Is it done in configuration space or in the workspace, or can both be done?
What is the exact pipeline? For example, after sampling (depending on the state space), is inverse kinematics applied to get into configuration space, or is forward kinematics applied to get into the workspace, and so on? Which is the better option?
Actually, I wish to implement my own algorithm (like some variation of RRT) without MoveIt!/OMPL, hence it is important for me to know all the details.
I am really confused about this. Any explanations or links where I can find the details and understand them would be really helpful.
|
After having read the introductions of the Wikipedia articles about "sonar" and "ultrasonic sensor", it is still not clear to me what the differences between the two are. In the context of robotics, is there any difference between these two sensors (terms)?
|
One of my four ESCs burnt out on a first test flight of a new quad build. I plugged the battery in, armed the drone, and one of the motors spun for less than a second before stopping. I immediately saw white smoke and smelled burning plastic. This all happened with the throttle set to idle. The other three motors are working fine even under load.
I opened the ESC to take a look, and sure enough, one of the FETs shows signs of burn damage. This same motor was operating correctly and responsive on the test bench, without a prop. I also noticed that this ESC gets very hot to the touch - 80C maybe - even when disarmed (but powered).
What could be the cause? Is a bad ESC likely? I'm wondering if it's a short, but the fact that it was responsive on the bench makes it seem unlikely.
30A Simonk ESC
920KV Motors
4s LiPo
Matek PDB
8" props
|
I am working on mobile robotics and I want to implement SLAM. I have a very specific question about EKF SLAM. To implement SLAM I read the book "Probabilistic Robotics" (the online version). There is a well-defined algorithm for EKF SLAM with known correspondences, but even after reading it again and again I fail to understand it, especially the part where the landmarks are introduced. Range and bearing information is used to find the landmarks' (x, y) coordinates. This part is confusing for me: the measured values are in the robot body frame, because the camera is attached to the robot, so how do I get them in the global frame? I don't see a step in the algorithm which transfers them to the global frame. Any suggestion and discussion about this matter is most welcome.
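The step I mean is the landmark initialization, which (roughly, in the book's notation, with pose estimate $(\mu_x, \mu_y, \mu_\theta)$, range $r$ and bearing $\phi$) reads:

$$\begin{pmatrix}\mu_{j,x}\\ \mu_{j,y}\end{pmatrix} = \begin{pmatrix}\mu_{x}\\ \mu_{y}\end{pmatrix} + r\begin{pmatrix}\cos(\phi + \mu_{\theta})\\ \sin(\phi + \mu_{\theta})\end{pmatrix}$$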
Thank you in advance.
|
If you define a wheel that can only rotate and move in the direction it is pointing to, but which, using a combination of (arbitrarily short) motions, can be moved sideways, does the system still remain holonomic, by definition?
|
I am currently working on the 3d model of a simple robot. However I have started to realize that some of the things/mechanisms might not work as I imagined.
So is there any way that I can import the OBJ file of this model somewhere, set things as rigid bodies, set axles to rotate continuously, and then see if the motion gets transferred to the desired places, and things like that? I am looking for something simple.
I really apologize for this vague question but I have really no knowledge of mechanical engineering and don't understand many terms.
|
I am using a Raspberry Pi 3 B+; the pertinent libraries are WiringPi with WiringSerial. The code is written in C.
EDIT: the serial connection is handled through GPIO 14 (Tx) and 15 (Rx); the 5 V (Tx) line is stepped down through a 165 ohm resistor and a 3.3 V Zener diode.
The following code is what I use to retrieve the raw encoder counts and convert them to 16-bit values.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <wiringSerial.h>

extern int fd;  /* serial port descriptor, opened elsewhere with serialOpen() */

int16_t sensors_left_encoder_init() {
    int16_t left_encoder_start = 0;
    char buffer_l[2];
    int i = 0;

    serialFlush(fd);
    serialPutchar(fd, 142);  // Request sensor packet
    serialPutchar(fd, 43);   // Left encoder request

    do {
        buffer_l[i] = serialGetchar(fd);
        printf(" -> Left Serial Value: %d\n", buffer_l[i]);
        i++;
    } while (serialDataAvail(fd) && i < 2);  // the encoder packet is 2 bytes

    left_encoder_start = (int16_t)(((buffer_l[0]) & 0xFF) << 8 | ((buffer_l[1]) & 0xFF));
    printf(" -> Left Encoder Start: %" PRId16 "\n", left_encoder_start);
    return left_encoder_start;
}

int16_t sensors_right_encoder_init() {
    int16_t right_encoder_start = 0;
    unsigned char buffer_r[2];
    int i = 0;

    serialFlush(fd);
    serialPutchar(fd, 142);  // Request sensor packet
    serialPutchar(fd, 44);   // Right encoder request

    do {
        buffer_r[i] = serialGetchar(fd);
        printf(" -> Right Serial Value: %d\n", buffer_r[i]);
        i++;
    } while (serialDataAvail(fd) && i < 2);  // the encoder packet is 2 bytes

    right_encoder_start = (int16_t)(((buffer_r[0]) & 0xFF) << 8 | ((buffer_r[1]) & 0xFF));
    printf(" -> Right Encoder Start: %" PRId16 "\n", right_encoder_start);
    return right_encoder_start;
}
I pulled the battery, ran the code and the first run produced:
-> Left Serial Value: 0
-> Left Serial Value: 1
-> Left Encoder Start: 1
-> Right Serial Value: 0
-> Right Serial Value: 2
-> Right Encoder Start: 2
However the second run through produced this:
-> Left Serial Value: 0
-> Left Serial Value: 3
-> Left Encoder Start: 3
-> Right Serial Value: 0
-> Right Serial Value: 4
-> Right Encoder Start: 4
I have a feeling it's how serialGetchar() is receiving the serial output from the interface, but I am not 100% sure.
From reading the wheel encoder topics on here, and the OI Specs, I understand that the raw values should not change unless a drive command was issued to the interface.
|
In Udacity's Self-Driving Car Engineer online course, one instructor claims that "[Finite State Machines are] not necessarily the most common [approach for behavior planning] anymore, for reasons we will discuss later." I suppose the later assertions that FSM's are easily abused and become harder to understand and maintain as the state set increases amount for the "reasons", but we're never told what's used in its place.
I have been using FSM's for years, mostly for parsing-related problems in enterprise systems programming, but more recently for behavior planning problems. In my experience they're great for breaking down complex problems into manageable parts, not only when modelling but also in implementation: I usually implement each state as a separate class or function, allowing me to concentrate on the respective behavior and transition rules in relative isolation from the rest of the architecture.
That's why I find it frustrating that the lesson failed to discuss what is currently used instead of FSM's for behavior modelling: if a better approach exists for keeping track of a system's context and adjusting its behavior in response, I'm yet to hear of it, and I'd very much like to.
|
https://github.com/udacity/RoboND-Kinematics-Project
We are using the above-mentioned KUKA arm model to simulate pick and place. We noticed that the KUKA arm would not pick up objects heavier than 2.5 kg in simulation. We are not sure if this is a simulation limitation or a limitation of the gripper (KR210 claw).
Where can we find the specifications of KR210 claw? If we can get KUKA arm to carry heavier payload in simulation, where do we do that parameter tweaking?
Are there any other KUKA arm compatible grippers with DAE/STL files available, which has a higher payload capability?
Does the weight of the object to be picked, influence the trajectory/path planning by OMPL?
|
I'm very new in this field so my question may be very stupid. I apologize for this at first.
I'm trying to use a proper camera to run a SLAM algorithm. As suggested by some people, a stereo camera may be a good choice. When I searched online, I found the terms "stereo camera", "depth camera", "3D camera", and "RGB-D camera", and I feel confused. My understanding is that an "RGB-D camera" is more than a "depth camera" because it has RGB channels, while a simple "depth camera" only produces depth (grayscale) images. However, I have no idea about the difference between a "stereo camera" and a "3D camera". I know of camera options such as the ZED, Bumblebee, Kinect and Intel RealSense; to me they are all "stereo cameras"...
Anyone can give me some instructions? Thanks a lot in advance!
|
I want to publish a particular message format - sensor_msgs/Joy - from the terminal. However, every format I use seems to be wrong, and I cannot understand how it is meant to be written on the terminal. On the internet all I can find is the generic structure and examples of similar messages, but modifying my message based on that does not seem to produce any result. Say I want to publish that the object moves forward along a single axis with no motion on the others; how do I achieve that with the proper syntax?
Also, if you can, let me know a place where I can find examples of the exact way to publish certain messages from the terminal using rostopic pub.
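For reference, one commonly suggested form is (the topic name /joy and the number of axes/buttons here are assumptions that would need to match the actual setup):

rostopic pub -1 /joy sensor_msgs/Joy '{header: auto, axes: [0.0, 1.0, 0.0, 0.0], buttons: [0, 0, 0, 0]}'

Here -1 publishes a single message, auto fills in the header, and forward motion along a single axis would be a non-zero entry in axes with the rest left at zero.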
|
I am studying various filtering techniques for robot pose estimation. I came across two very well-known filtering techniques, the Kalman filter and the extended Kalman filter. To learn about these two techniques I read the Probabilistic Robotics book.
The Kalman filter is a technique for filtering and prediction in linear systems.
The extended Kalman filter is a technique for filtering and prediction in non-linear systems.
These two definitions are not very clear to me, so I googled the terms linear and non-linear system:
Linear System- A linear system is a mathematical model of a system based on the use of a linear operator.
Non-Linear System: In mathematics and physical sciences, a non-linear system is a system in which the change of the output is not proportional to the change of the input.
Now both definitions seem fishy to me. They are well-structured definitions, but not very helpful for a mobile robot.
Can anyone explain linear and non-linear systems from scratch, at a beginner level? I want to understand these terms in the context of mobile robotics and the Kalman filter / extended Kalman filter. Any practical example is most welcome.
|
I read Probabilistic Robotics by Sebastian Thrun (the online version). I also read http://ais.informatik.uni-freiburg.de/teaching/ws12/mapping/pdf/slam04-ekf-slam.pdf
Question1 :
What will be the dimensions of the matrix $F(x,j)$ below? As per the book its dimension is $(3N+3)\times 5$. So if I have 10 landmarks the dimension will be $33\times 5$, meaning 33 rows and 5 columns. But as per picture no. 1, $F(x,j)$ has 6 rows. This is confusing for me.
Picture No1:
Question 2:
Both references describe the same approach, extended Kalman filter SLAM with known correspondences; the only difference between them is the equation used to build the matrix $H_i$ (see below, picture no. 2 and picture no. 3).
Now, after reading both, I am confused about which one is correct.
Picture No2:
Picture No3:
Helpful discussion appreciated.
|
Main Question:
note: The code in this question is pseudo-code; I'm using Python, but my pseudo-code is a mix of Python and C++.
I have a camera on the top of my robot's "head" which has pan and tilt capabilities as follows:
move_head(int pan_degrees, int tilt_degrees);
For example, when I call move_head(30,-20) the robot's head will move 30 degrees to the right and 20 degrees down, from the current position.
Importantly: When a current movement command is underway, the next available command waits for it to complete before executing.
The camera is tracking a moving object (usually a person, but for this demo I'm just tracking a Red Ball for simplicity) and must tell the head which direction to move to keep the point as centered on the screen as possible. I get the object's current central point (centroid) as a pair of integers:
int x; //pan
int y; //tilt
I also have the center point as a pair of constant integers:
const int x_center;
const int y_center;
As you may have noticed, the function above does not have velocity controls in its parameter list, and so the head moves only at pre-defined speeds (not slow, but I'm trying to remove jerky behavior). There is also a "full stop" function, but because of my desire to avoid jerky behavior I'm hoping not to need it.
My approach so far has been to define thresholds across my input image, such as:
int xthreshold = 600; // pixels
When the point crosses a threshold, I would calculate the difference in position:
if(x > xthreshold):
int new_x = x - x_center; //new x
int new_y = y - y_center; //new y
move_head( new_x, new_y); //move to new position
However, this is an extremely primitive algorithm. I imagine I could use a statistical method, or the current velocity of the object at the time of measurement, to predict a slightly more accurate new point than simply using the current difference from the center.
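For instance, a sketch of the velocity-prediction idea (the names are assumptions, not my real API, and the pixel-to-degree conversion from the camera's field of view is left out):

def predicted_offset(x, y, vx, vy, latency_s, x_center, y_center):
    # vx, vy in pixels/s, e.g. from differencing the last two centroids
    x_pred = x + vx * latency_s          # where the object will likely be
    y_pred = y + vy * latency_s          # after the measurement/actuation delay
    return x_pred - x_center, y_pred - y_center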
Addendum:
My code base is large and complex and I'm not personally familiar with the many layers of classes involved in the definition of my move_head function. However, if anyone knows of an amazing algorithm that does what I'm talking about that uses additional parameters, I would really love to hear about it.
My guess is that a definition such as:
move_head(int pan_degrees, int pan_velocity, int tilt_degrees, int tilt_velocity);
or
move_head(int pan_velocity, int tilt_velocity);
or
move_head(int pan_acceleration, int tilt_acceleration);
may have a more intuitive implementation of an effective point tracking algorithm.
Additionally, would it be more useful to have my function update the desired destination mid-movement instead of blocking until movement is complete?
|
Why would iRobot provide the serial port and Open Interface serial protocol in their residential models? I understand making their Create2 product STEM friendly. But why spend the extra cost for the same hardware/software in the residential models? I mean, I am glad they do! I am just curious about the business model.
|
I have been trying to build a robotic arm using a PCA9685 servo controller, a 6-axis robotic arm kit, and a Raspberry Pi Model B+.
I wired them all together accordingly, but I have no idea how to actually control the arm through the Raspberry Pi.
I know Python but don't know the instructions to move the arm. Can anyone help with this?
Thank you.
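For context, the kind of minimal control I was expecting is something like this sketch using the Adafruit ServoKit library (an assumption on my part; the channel numbers and angles are placeholders):

import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)        # PCA9685 on the default I2C address

kit.servo[0].angle = 90            # base joint to mid position
time.sleep(1.0)
kit.servo[1].angle = 45            # shoulder joint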
|
I feel this is the most appropriate exchange for this question I hope someone could bring some insight into an area I know very little about.
I would like to find a way to detect and log the times at which an object enters and leaves a specific area.
The area would need to be defined in a 3-dimensional way, and the object would be made specifically for the purpose of being detected.
Thanks for any help.
|
I am working on an autonomous underwater vehicle (AUV), and I do not know how to localize an acoustic source underwater.
I know that I will use an array of hydrophones, but how can I actually get the position of the acoustic source?
Are there any commercially available chips for acoustic signal processing?
Are there any books discussing this topic technically?
Is there any known strategy for underwater acoustic source localization?
|
I am currently googling on this topic but I would like to hear your opinion.
What is the best self-contained SE(3) library that offers interconversion between quaternions, rotation vectors and transformation matrices?
Those are the functions that I often use for my projects, all currently implemented in Matlab, and I am looking for an alternative so I can avoid re-implementing them all in Python:
Quaternion2RotVector and its inverse
Quaternion2RotMatrix and its inverse
RotMatrix2RotVector and its inverse
SE3 inverse, the inverse function of each rotation representation
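For example, in Python something like scipy.spatial.transform.Rotation seems to cover most of this list (a sketch, possibly not the best option, and the SE(3) part still has to handle the translation by hand):

import numpy as np
from scipy.spatial.transform import Rotation as R

r = R.from_quat([0.0, 0.0, 0.38268343, 0.92387953])   # [x, y, z, w], 45 deg about z

rotvec = r.as_rotvec()            # quaternion -> rotation vector
matrix = r.as_matrix()            # quaternion -> rotation matrix
r_back = R.from_matrix(matrix)    # ... and back
r_inv = r.inv()                   # inverse of the rotation

T = np.eye(4); T[:3, :3] = matrix; T[:3, 3] = [1.0, 2.0, 3.0]
T_inv = np.eye(4); T_inv[:3, :3] = matrix.T; T_inv[:3, 3] = -matrix.T @ T[:3, 3]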
|
I'm building a multi-agent system, and I heard about multi-robot systems.
Can anyone explain the differences between multi-agent and multi-robot systems, and when to use one or the other?
|
I'm planning to create a Raspberry Pi robot to map a room. I intend to use a particle filtering algorithm as one of the central points of the project, that will allow me to work out my position relative to a 'beacon'. This beacon will be placed near the centre of the room and the robot needs to be able to work out the distance from itself to the beacon, accurate to within <1 metre. The direction to the beacon doesn't matter.
I'm struggling to work out what technology I should use for the beacon. It could be powered by a portable battery, and it needs to work fairly well through table/chair legs (I don't expect it to work through a wall though).
I've considered:
Infra-Red
Ultrasound
Ultra-Wide-Band
Wifi networks
Bluetooth
Image analysis to determine range from known object (won't work through table/chairs)
Is there a way to determine the distance from the robot to a fixed point? If so, what is the best way?
This question is not a duplicate of this question, even though they look similar. I want a distance to an object, the other question is about tracking movement.
|
I'm building a quadcopter and I'm finding that certain electronic boards are heating up beyond what I believe the heat should be.
In this case I hooked up the video transmitter and then tested it. The board gets extremely hot. Is this normal?
I also noticed the electronic boards for the motors seem excessively hot as well.
Everything else seems to be either cool or warm to the touch, but nothing like the two afore mentioned electronic boards.
|
I am in the starting phase of building a fairly large octocopter drone. I am currently thinking about the possibility of powering my octocopter design using 4 separate batteries, where each battery powers 2 motors, in order to get around the problem of the large current that would pass through a single central line (causing heat and voltage drop etc.) if I were to connect all batteries in parallel.
For sure there would always be a slight voltage difference between the batteries, and if you keep flying for too long you are going to get one battery that runs out before the others, but I am theorizing that it shouldn't really matter as long as I make sure never to completely exhaust any of the batteries. The flight controller should compensate for the lower voltage provided to each motor. That is my theory, at least. I saw a video of some people doing the HobbyKing beer-lift challenge with a really large drone, and they seemed to be using what looks like 4 separate circuits, one for each motor. Has anybody done something similar to this, or have any experience with whether it's a good idea or not?
|
One of the simple tricks in getting high performance for a global 3D point cloud registration is combining "3D feature matching + ICP". For example FPFH matching + RANSAC for initial pose estimation and ICP for pose refinement.
Does anyone know a good reference paper for this method? I am referring to this method in my paper but don't know who first proposed it.
|
The intended purpose of the arm is to pick and place chess pieces.
I am planning to make a 3-DOF system (only 2 joints mounted on a revolving turntable); the link structure should be composed of rectangular pieces of acrylic sheet. My chessboard is 28 cm x 28 cm.
I don't know how to calculate the lengths of the 2 links so that the robotic arm's end-effector can reach each and every square.
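For reference, this is how I have been trying to frame the reach requirement, as a minimal top-view sketch (the base offset from the board and the candidate link lengths are placeholders):

import numpy as np

base_offset = 0.05                      # m, distance from arm base to near board edge (assumed)
board = 0.28                            # m
idx = np.arange(8)
cx = (idx + 0.5) * board / 8 - board / 2        # square centres, lateral
cy = base_offset + (idx + 0.5) * board / 8      # square centres, away from the base
xx, yy = np.meshgrid(cx, cy)
r = np.hypot(xx, yy)                    # radial distance of every square centre

L1, L2 = 0.20, 0.15                     # candidate link lengths, m
reachable = (np.abs(L1 - L2) <= r) & (r <= L1 + L2)
print(r.min(), r.max(), reachable.all())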
|
I am working on the EKF-MonoSLAM (Davison) method with a webcam as the only input (it works "fine" once calibrated and with reasonable sigmas), and I am trying to add odometry from a dataset (as a rosbag), as in this reference: https://github.com/rrg-polito/mono-slam
The odometry (estimated pose) is added as an additional measurement, so that $V_k$ corrects $r_k$ and likewise $\Omega_k$ (I know I could use velocity instead, but I need this working first). It is supposed to keep the camera on the right track, but what happens is:
the camera rejects many matches (because their projections are wrong), then soon drifts and gets lost.
Why is that, and how can I fix it?
The dataset site says the odometry data is not calibrated, so I suspect it may need a scale parameter to make the image-based pose estimate match the odometry estimate. If so, how do I find that scale?
Any help or reference would be appreciated. Thank you.
Edit:
I am now using the odometry velocity as an input and it is better (there are still errors). Any link to code, or an explanation of adding odometry as a control input (rather than a measurement) so I can compare how it behaves, would be appreciated; this is really a headache in C++...
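On the scale question specifically, one simple, hedged option (my suggestion, not something from the dataset or the referenced code) is a least-squares ratio between the frame-to-frame displacement norms of the two trajectories, assuming they are at least roughly time-aligned:

import numpy as np

def estimate_scale(odom_xy, vision_xy):
    """Scale s minimising sum((s * d_vision - d_odom)^2) over per-step
    displacement norms; apply s to the camera translation estimates."""
    d_o = np.linalg.norm(np.diff(np.asarray(odom_xy), axis=0), axis=1)
    d_v = np.linalg.norm(np.diff(np.asarray(vision_xy), axis=0), axis=1)
    return float(np.dot(d_v, d_o) / np.dot(d_v, d_v))

If the scale drifts over time a single factor will not be enough, since monocular SLAM cannot observe metric scale from images alone; in that case it may be better to treat the odometry as the metric reference.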
|
I've been trying to run robot_state_publisher and joint_state_publisher on a remote TX1 running Ubuntu 16.04 and ROS Kinetic. When I run the nodes from a local launch file (manually SSHing into the remote machine and running the launch file there), everything runs fine. However, when I try to run a launch file from my local machine (which launches nodes on the remote machine using the < machine > tag), I get this error in my terminal:
[192.168.1.237-0]: [robot_state_publisher-28] process has died [pid
26798, exit code 255, cmd
/opt/ros/kinetic/lib/robot_state_publisher/state_publisher
__name:=robot_state_publisher log:=/home/nvidia/.ros/log/038bfbd8-7afb-11e8-b9f2-ac220b57ae89/robot_state_publisher-28.log].
log file:
/home/nvidia/.ros/log/038bfbd8-7afb-11e8-b9f2-ac220b57ae89/robot_state_publisher-28*.log
[192.168.1.237-0]: [joint_state_publisher-27] process has died [pid
26780, exit code 1, cmd
/opt/ros/kinetic/lib/joint_state_publisher/joint_state_publisher
__name:=joint_state_publisher log:=/home/nvidia/.ros/log/038bfbd8-7afb-11e8-b9f2-ac220b57ae89/joint_state_publisher-27.log].
log file:
/home/nvidia/.ros/log/038bfbd8-7afb-11e8-b9f2-ac220b57ae89/joint_state_publisher-27*.log
How can I fix this? Here's my launch file:
<launch>
<!-- ROS parameters -->
<!-- remote machine (wheatley) -->
<group>
<machine name="wheatley" address="192.168.1.237" env-loader="/home/nvidia/catkin_ws/devel/env.sh" user="nvidia" password="nvidia" default="true" />
<include file="freenect.launch"/>
<arg name="depth_registration" value="true" />
<arg name="publish_tf" value="false" />
<include file="mpu_9250.launch" />
<arg name="model" default="wheatley.urdf"/>
<param name="robot_description" command="$(find xacro)/xacro --inorder $(arg model)"/>
<node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" />
<param name="use_gui" value="false" />
<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />
<param name="use_gui" value="false" />
<!-- IMU frame: just over the RGB camera -->
<!--node pkg="tf" type="static_transform_publisher" name="rgb_to_imu_tf" args="0 0.0 0 0.0 0.0 0.0 /sensor_link /imu_link 50" /-->
<arg name="pi/2" value="1.5707963267948966" />
<arg name="optical_rotate" value="0 0 0 0 0 0" />
<node pkg="tf" type="static_transform_publisher" name="optical_rotation" args="$(arg optical_rotate) /sensor_link /camera_link 50" />
<include file="rtabmap.launch">
<arg name="rtabmap_args" value="--delete_db_on_start"/>
</include>
</group>
</launch>
Here's the error traceback on the robot:
Traceback (most recent call last):
File "/home/nvidia/catkin_ws/src/joint_state_publisher/joint_state_publisher/joint_state_publisher", line 474, in <module>
jsp = JointStatePublisher()
File "/home/nvidia/catkin_ws/src/joint_state_publisher/joint_state_publisher/joint_state_publisher", line 149, in __init__
robot = xml.dom.minidom.parseString(description)
File "/usr/lib/python2.7/xml/dom/minidom.py", line 1928, in parseString
return expatbuilder.parseString(string)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 940, in parseString
return builder.parseString(string)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 223, in parseString
parser.Parse(string, True)
TypeError: Parse() argument 1 must be string or read-only buffer, not None
**EDIT:** Here's the error from robot_state_publisher:
[ERROR] [1530379066.805335879]: Could not find parameter robot_description on parameter server
|
I have built an InMoov robot. The tablet I got for his controller is too slow. I am currently using a laptop to control him over a USB cable. I want to get rid of the USB cable.
I was thinking of using the embedded tablet as one end of the link (slave) and the laptop as the other. That should give me the speed of the laptop without the USB cable.
Can this be done? How?
|
I need help with a robotisation project, please.
I am new to robotics and have the Maverick RC car pictured below:
My project is to add Arduino and Raspberry Pi boards to control the speed and direction of the car. So, the hand held control was thrown out.
I started by hooking up my Arduino to the front servo marked 9 in the diagram and I can successfully move the wheels to specific angles by using the examples like servo.write(180);.
My problem is the drive wheels. I have no idea how to power the wheels from the LiPo battery and use the Arduino to control the speed.
I'm open to and would welcome any help, please.
UPDATE:
After further consultation of the RC cars' specifications, the rear drive is controlled by a MM-25 brushed motor.
My problem is how do I interface with the MM-25 using my Arduino as it's wired into the RC car?
So far, I have connected my Arduino to the motor's speed controller via the same lead it uses to connect to the transmitter/receiver. I'm running the same servo code that worked for turning the (front) servo wheels, but I get no response from the rear wheels.
My code is below:
#include <Servo.h>
const int servo_pin = 9;
Servo servo;
void setup()
{
Serial.begin(9600);
servo.attach(servo_pin);
}
void loop()
{
servo.write(180);
delay(1000);
servo.write(-180);
delay(1000);
/*servo.write(-180);
delay(1000);*/
/*servo.write(180);
delay(1000);*/
/*for(int pos=0; pos <= 180; pos+=25){
servo.write(pos);
Serial.print("Moving to: ");
Serial.println(pos);
delay(1000);
}*/
}
|
There are multiple ways to update the pose in an iterative pose-optimization problem. The easiest one, which we often find in robotics papers, is:
SO3 + translation
Update on right
\begin{equation}
\begin{split}
\textbf {R}' &= \textbf{R} \textbf{e}^{[\boldsymbol{\omega}]
_\times}, \textbf {t}' = \textbf{t} + \delta\textbf{t}
\end{split}
\end{equation}
Update on left
\begin{equation}
\begin{split}
\textbf {R}' &= \textbf{e}^{[\boldsymbol{\omega}]
_\times}\textbf{R} , \textbf {t}' = \textbf{t} + \delta\textbf{t}
\end{split}
\end{equation}
where $\delta\textbf{t}\in R^3, \boldsymbol{\omega}\in R^3$ are the estimated updates to the pose, with $\textbf{R}\in SO(3),\textbf{t}\in R^3$.
But there are other ways as well, such as
SE3 Matrix update
Left side update $\textbf {T}'=\delta\textbf{T}\textbf{T}$
\begin{equation}
\begin{split}
\textbf {T}' &=\delta\textbf{T} \textbf{T}=
\begin{bmatrix}
\textbf{e}^{[\boldsymbol{\omega}]
_\times}&\delta\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\begin{bmatrix}
\textbf{R}&\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
=\begin{bmatrix}
\textbf{e}^{[\boldsymbol{\omega}]
_\times}\textbf{R}&\textbf{} \delta\textbf{t}+\textbf{e}^{[\boldsymbol{\omega}]
_\times}\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\end{split}
\label{eq:disp}
\end{equation}
\begin{equation}
\begin{split}
\textbf {T}' &= \textbf{e}^{[\boldsymbol{\xi}]_\times} \textbf{T}=
\begin{bmatrix}
\textbf{e}^{[\boldsymbol{\omega}]
_\times}&\textbf{V} \delta\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\begin{bmatrix}
\textbf{R}&\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
=\begin{bmatrix}
\textbf{e}^{[\boldsymbol{\omega}]
_\times}\textbf{R}&\textbf{V} \delta\textbf{t}+\textbf{e}^{[\boldsymbol{\omega}]
_\times}\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\end{split}
\end{equation}
Right side update $\textbf {T}'=\textbf{T}\delta\textbf{T}$
\begin{equation}
\begin{split}
\textbf {T}' &= \textbf{T}\delta \textbf{T}=\begin{bmatrix}
\textbf{R}&\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\begin{bmatrix}
\textbf{e}^{[\boldsymbol{\omega}]
_\times}&\delta\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
=\begin{bmatrix}
\textbf{R}\textbf{e}^{[\boldsymbol{\omega}]
_\times}&\textbf{R}\delta\textbf{t}+\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\textbf {T}' &= \textbf{T}\textbf{e}^{[\boldsymbol{\xi}]
_\times}=\begin{bmatrix}
\textbf{R}&\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\begin{bmatrix}
\textbf{e}^{[\boldsymbol{\omega}]
_\times}&\textbf{V} \delta\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
=\begin{bmatrix}
\textbf{R}\textbf{e}^{[\boldsymbol{\omega}]
_\times}&\textbf{R}\textbf{V} \delta\textbf{t}+\textbf{t} \\
\textbf{0}&1
\end{bmatrix}
\end{split}
\end{equation}
se3 6x1 pose update ($\boldsymbol{\mathfrak{I}}^{-1}$ can be the left or right Jacobian)
\begin{equation}
\boldsymbol{\xi}'=\boldsymbol{\xi} + \boldsymbol{\mathfrak{I}}^{-1}\delta\boldsymbol{\xi}
\end{equation}
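For concreteness, here is a small numeric sketch of the two $SE(3)$ matrix update forms (my own illustration using a closed-form Rodrigues exponential; the $\textbf{V}\delta\textbf{t}$ variants only change how the translational part of the increment is formed):

import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues formula for exp([w]_x)."""
    w = np.asarray(w, dtype=float)
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + hat(w)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def se3(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

T = se3(exp_so3([0.1, 0.2, 0.3]), np.array([1.0, 2.0, 3.0]))              # current pose
dT = se3(exp_so3([0.01, -0.02, 0.015]), np.array([1e-3, 2e-3, -1e-3]))    # small increment

T_left_update = dT @ T    # increment applied on the left (world/spatial frame)
T_right_update = T @ dT   # increment applied on the right (body frame)

Up to first order the two forms differ by the adjoint of $\textbf{T}$, so for small increments either converges; the practical differences show up mainly in how the Jacobians of the residuals are derived.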
My question is: which one is the most stable or preferred?
If you know of any references on this problem, please let me know.
Thanks!
|
I am trying to design a robotic hand that is able to adaptively grasp unfamiliar objects. I am initially thinking that using OpenCV for pose estimation might make doing so easier.
Assuming that I am able to successfully estimate the pose of an object, how might I go about finding the initial contact points for each finger on the object? (or how does each finger know where to go in order to grasp the object?)
|
I've read this link: https://answers.ros.org/question/217475/cmakeliststxt-vs-packagexml/
But I still don't understand it very clearly.
When I compile a ROS project with the command catkin_make --install, how and when is package.xml used? How and when is CMakeLists.txt used? Must both of them be used?
|
I am looking for the specifications of the UR3: peak torque and RPM.
|
The problem
I want to be able to move the direction a laser is pointing.
This means I need to have motorised rotation of the laser head in the x direction and in the y direction. I am unsure of the best way to do this.
Currently 6 axis motorised stages are available from Thorlabs but cost on the order of Β£7000. This is also overkill for what I need.
Long explanation
I'm a physicist and I'm building an experimental set up. I work on nanoparticles that have implications for optoelectronics & solar cells.
I'm currently making an automated system to test samples which are roughly 15cm by 15cm (and have the nanoparticles on them). I need to scan the laser across the whole sample, taking measurements at specific coordinates on the sample.
The laser is coupled to an optical fibre - we can assume the light is collimated. How best do I raster the laser light around the sample?
Is there some sort of computer-controlled jig which might be able to do this, or are there better solutions?
Hopefully the setup can be entirely robotic - i.e. I place the the sample in, and all the measurements are automated - I'm struggling with automating the laser direction.
Edit:
Some further details
The laser must stay fixed in position, i.e. it is only allowed to pan and tilt. In other words, a simple x-y stage with the same dimensions as the sample would not be appropriate.
Almost any amount of jitter is allowed when the laser moves.
The speed of operation is not important.
The laser does NOT need to remain perpendicular. All that matters is that the spot hits a rough area (say around 2-3 cm by 2-3 cm).
|
I want to write a Matlab function which takes the DH parameters as input and outputs a 4X4 transformation matrix.
The code I have written is :
function [A] = compute_dh_matrix(r, alpha, d, theta)
A = eye(4);
% ROTATION FOR X
A(1,1) = cos((theta));
A(2,1) = sin((theta));
% ROTATION FOR Y
A(1,2) = -(sin(theta))*(cos(alpha))*(1);
A(2,2) = (cos(theta))*(cos(alpha));
A(3,2) = (sin(alpha));
% ROTATION FOR Z
A(1,3) = (sin(theta))*(sin(alpha));
A(2,3) = -(cos(theta))*(sin(alpha))*(1);
A(3,3) = (cos(alpha));
% TRANSLATION VECTOR
A(1,4) = (alpha)*(cos(theta));
A(2,4) = (alpha)*(sin(theta));
A(3,4) = d;
end
But when I submit the code for evaluation on an online platform, it reports that the variable A has an incorrect value.
One of the input data used for evaluation of the code is :
r = 5;
alpha = 0;
d = 3;
theta = pi/2;
The matrix representation I have used to write the code is :
$A =$ \begin{bmatrix}cos(\theta)&&-\sin(\theta)\cos(\alpha)&&\sin(\theta)\sin(\alpha)&&\alpha \cos(\theta)\\\sin(\theta)&&\cos(\theta)\cos(\alpha)&&-\cos(\theta)\sin(\alpha)&&\alpha \sin(\theta)\\0&&\sin(\alpha)&&\cos(\alpha)&&d\\0&&0&&0&&1\end{bmatrix}
A is the transformation matrix generated using DH parameters $\theta$, $\alpha$ and d
$\alpha$ is angle about common normal from old z axis to new z axis
$\theta$ is angle about previous z axis, from old x axis to new x axis
d is offset along previous z axis to the common normal
|
I am trying to implement GraphSLAM. As a tutorial, I read the paper "The GraphSLAM Algorithm with Application to Large-Scale Mapping of Urban Structures" by Sebastian Thrun. A line in the abstract raises a doubt: "It then reduces this graph using variable elimination techniques, arriving at a lower-dimensional problem that is then solved using conventional optimization techniques." What is that variable elimination technique? Which variables do they want to eliminate?
In the same paper there is "Table 3. Algorithm for Reducing the Size of the Information Representation of the Posterior in GraphSLAM". What is the meaning of this algorithm? What kind of data do we calculate here? Paper link: The GraphSLAM Algorithm
I am eagerly looking for the answers to these questions because, even after reading the paper, it is very difficult for me to understand the theory of GraphSLAM.
|
I am trying to use a PID loop to control a motor. This is my first attempt at creating a PID loop (really just a PI loop). There doesn't exist a model for this system, and I don't know how to create one. All that I mention below may be an incorrect approach, so please guide me on the correct one.
My goal is to set a percentage velocity and have the motor run at that velocity.
My only feedback is degrees of movement, which I have correlated to an acceptable input-to-output ratio. Meaning, without any resistance, if I have moved 50 degrees in 250 ms, that is considered 100% velocity; 25 degrees = 50%, etc.
As mentioned in another thread here, I've experimented with setting waypoints for the PID controller. That is, if I were moving at 100% velocity, I would set a waypoint 50 degrees away from the last one every 250 ms. This appears to work fine at non-problematic angles.
The problem is at lower speeds, such as 50% velocity. There are certain angles where the proportional gain is not enough and the motor gets stuck. I've tried experimenting with how I modify the waypoints, and with modifying Ki and Kp. I could enumerate those attempts here, though I don't think it's worth it, as I think I have some fundamental misunderstanding of how to do this.
So, to give an overview of my current structure in pseudocode:
while (1)
{
if firstRun or 250ms elapsed
targetPosition = currentPosition + scaledVelocityValue
if 10mS elapsed
calculatePID()
}
In this case, the currentPosition is not the previous targetPosition, and therefore the speed is always constant. I've tried using the previous targetPosition, but that creates an unstable system because the resulting increase is so large.
I hope this makes some sense. Some guidance please.
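For comparison, here is a minimal sketch of closing the loop on measured velocity directly instead of chasing position waypoints. The gains, the 200 deg/s full-scale figure, and the read_angle_deg/set_motor_pwm stubs are all assumptions standing in for the real hardware, not a definitive implementation:

import time

KP, KI = 0.8, 0.2            # placeholder gains, to be tuned on the real motor
DT = 0.01                    # 10 ms control period, matching the calculatePID() rate
FULL_SCALE = 200.0           # deg/s taken as "100% velocity" (50 deg per 250 ms)

def read_angle_deg():        # stub: replace with the real encoder read
    return 0.0

def set_motor_pwm(duty):     # stub: replace with the real motor command, -100..100 %
    pass

target_vel = 0.5 * FULL_SCALE    # e.g. 50% velocity
integral = 0.0
last_angle = read_angle_deg()

while True:
    angle = read_angle_deg()
    measured_vel = (angle - last_angle) / DT      # deg/s
    last_angle = angle

    error = target_vel - measured_vel
    integral += error * DT                        # consider clamping this to limit wind-up
    duty = max(-100.0, min(100.0, KP * error + KI * integral))
    set_motor_pwm(duty)
    time.sleep(DT)

Because the integrator keeps accumulating while the motor is stalled, it will eventually push the duty cycle up through the sticky angles, which may be exactly the push the waypoint version is missing at low speed.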
|
I'm trying to figure out how to make a remote-control lawn mower, and I'm stuck on one critical part.
I'm in the process of buying two wheelchair motors that accept both 12 V and 24 V (60 and 120 RPM respectively), which will be wired into a motor controller that also powers a receiver for the transmitter.
My question is about the motor controller: does it have to be a dual motor controller, or can I get by with a general motor controller capable of handling 12 V? Also, can a 12 V car battery be run straight to the motor controller, or would it need to be regulated down to a lower voltage? I've seen videos of people using the Sabertooth dual motor controller, and I'm just wondering why they are all limited to one controller.
|
I am trying to implement GraphSLAM from this paper. I have a doubt regarding the algorithm described in Table 2 of this paper: Algorithm GraphSLAM_linearize.
My doubt is about line 7. As far as I know, $G_t$ is a 3x3 matrix. How can I subtract 1 from a 3x3 matrix? By the rules of matrix subtraction, both matrices should have the same dimensions.
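One hedged reading that may resolve this: in the GraphSLAM tables, a bare 1 written next to a Jacobian such as $G_t$ usually denotes an identity matrix of matching size (here 3x3), not the scalar one, so the expression stays dimensionally consistent. A trivial sketch of that reading:

import numpy as np

G_t = np.array([[1.0, 0.0, -0.2],   # example 3x3 Jacobian of the motion model
                [0.0, 1.0,  0.5],
                [0.0, 0.0,  1.0]])

I3 = np.eye(3)          # "1" read as the 3x3 identity
print(G_t - I3)         # well-defined: both operands are 3x3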
|
I am building a robotic insect leg that will be actuated by 2 DC motors for pan and tilt motions. The leg weighs about 1.5 kg and is 1 m long (moment arm ~0.5 m). A magnetic encoder will be used to determine position.
This could be a very basic question, but I have been searching for information on the wiring, control, and suitable motor drivers for controlling these DC motors with an Arduino, to no avail. Any help would be greatly appreciated!
The two motors that I am using are Servocity's DC gearmotors that come with magnetic encoders:
1) https://www.servocity.com/23-rpm-hd-premium-planetary-gear-motor-w-encoder
Will be used for tilt motion
2) https://www.servocity.com/84-rpm-hd-premium-planetary-gear-motor-w-encoder
Will be used for pan motion
The commonly used L298N motor driver supports peak currents of up to 2 A, but both of these motors have a stall current of 20 A. The torque/speed curve is provided, but no torque/current curve is available, so I cannot straightforwardly determine the maximum current required when running at lower torque.
What would be a suitable motor driver / H-bridge for these two motors? I have already read the few motor-driver posts here, but none of them seem to discuss a stall current as high as 20 A.
Are there wiring diagrams available for this type of gearmotor? The most relevant topic I found is this one, but I have trouble fully understanding it: Need super-basic help with motor encoder
Thanks!
|
We are using Ethernet as our current control bus and are considering switching to CAN. Our main computing unit is a PC under Ubuntu which needs to be fitted with a CAN port.
Can anyone recommend a PC-to-CAN hardware interface?
Do you have any insight or experience with using CAN on Ubuntu?
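On the Ubuntu side: most USB-to-CAN adapters with a SocketCAN driver show up as an ordinary network interface, which keeps the user-space code vendor-neutral. A minimal sketch of reading raw frames in Python, assuming a SocketCAN-capable adapter already brought up as can0 (e.g. with ip link set can0 up type can bitrate 500000):

import socket
import struct

# Raw CAN socket bound to the SocketCAN interface "can0"
sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
sock.bind(("can0",))

CAN_FRAME_FMT = "=IB3x8s"                  # can_id, dlc, 3 pad bytes, 8 data bytes
FRAME_SIZE = struct.calcsize(CAN_FRAME_FMT)

while True:
    frame = sock.recv(FRAME_SIZE)
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    print(hex(can_id), data[:dlc].hex())

The same interface also works with candump/cansend from can-utils and with the python-can library, which is handy for quick bus debugging.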
|
We have been using Orocos on the robot we have been developing for 3 years now, and I have the feeling that Orocos is no longer maintained: the mailing lists are not only inactive, it is also no longer possible to subscribe to them.
Do you have more information about the state of Orocos? Is it a dead project?
|
I have read the ROS 2 documentation, and it did not give many clues on the subject.
Does anybody have more insight on the subject?
|
I have a case of a differential drive robot and a control system in a two-dimensional environment:
Now the problem: we would like to make our robot move from point A to point B, and from there to point C (a spline-based path), in optimal time while constraining its acceleration, deceleration, velocity, and angular speed.
How would one do this, assuming that our control system allows us to control either velocity or acceleration?
The most important things here are the names of the mathematical methods behind this task and an explanation of how to apply them.
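Not a full answer, but one common building block for this kind of problem is a trapezoidal velocity profile (time-optimal under symmetric acceleration and velocity limits) applied along the path's arc length; angular-rate limits can then be handled by locally lowering the allowed speed where the curvature is high. A sketch of the timing maths, offered as one possible method rather than the definitive one:

def trapezoidal_profile(distance, v_max, a_max):
    """Time-optimal rest-to-rest 1-D motion with |a| <= a_max and |v| <= v_max."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2.0 * d_acc >= distance:                      # triangular case: v_max never reached
        t_acc = (distance / a_max) ** 0.5
        return {"t_acc": t_acc, "t_cruise": 0.0, "t_total": 2.0 * t_acc}
    t_cruise = (distance - 2.0 * d_acc) / v_max
    return {"t_acc": t_acc, "t_cruise": t_cruise, "t_total": 2.0 * t_acc + t_cruise}

print(trapezoidal_profile(distance=2.0, v_max=0.5, a_max=0.25))

For the more general versions, useful search terms include "time-optimal path parameterization (TOPP)" and, for differential-drive tracking itself, "model predictive control".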
|
I would like to understand how to calculate the forces: what forces act on an object that is being gripped by a suction gripper while it moves? Gravity is one, which will cause the gripped object to fall; the angular velocity will try to move the object away from the end effector; and the suction force will try to retain the picked object. Is there a formula that ties all of these together and gives a threshold beyond which the gripped object will definitely fall?
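As a rough illustration (my own sketch, under simplifying assumptions: rigid object, no shear or friction effects at the cup lip, rotation at radius $r$ about the robot axis), a first-order force balance along the cup axis reads

$$ F_{\text{suction}} \;\ge\; S\, m \left( g + a_{\text{lin}} + \omega^{2} r \right) $$

where $m$ is the object mass, $a_{\text{lin}}$ the linear acceleration of the end effector, $\omega^{2} r$ the centripetal term from the angular velocity, and $S>1$ a safety factor. Loads perpendicular to the cup axis are resisted only by friction, roughly $\mu F_{\text{suction}}$, which is usually the stricter limit during fast lateral moves.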
|
I am soon going to finish my undergraduate degree in mechatronics engineering and wish to take up robotics for my masters. But I absolutely dislike any form of coding/programming. I am, however, interested in the mathematics and physics behind the workings of a robot. I'd also love to study the control systems in a robot. So, is coding a requirement if I want to take up robotics?
PS: I couldn't find the right tag for the question. Sorry! Please direct me to the right one if required.
|
I am building a self balancing bot using
MPU-6050
Arduino Mega 2560
100 rpm 12V DC motors
Currently I have used two 100 RPM DC motors in the bot. The PWM signal is set on the basis of the angle the bot makes with the vertical (i.e. the angle of inclination).
Currently the bot isn't able to recover when tilted to about 40 degrees or when pushed hard.
As far as I understand, the problem lies with the motors used. So, to make the bot more stable and keep it from falling when pushed hard, what motors should be used?
Will stepper motors be better, or will DC motors with a higher RPM rating (say 300 RPM) be better?
Stepper motors would provide certain advantages: their rotation is less affected by variations in the supply voltage, and they would provide more torque to balance the bot when it is tilted to larger angles.
But stepper motors cause mechanical vibrations. Will this destabilise the bot, making high-RPM DC motors the better choice, or will stepper motors still be better (i.e. will the mechanical vibrations from the stepper motors not have a significant effect)?
Please help me select better motors for the bot:
300 RPM DC motors OR stepper motors
|