I am using an STM32-L432KC microcontroller to read data from an I²C device (BNO-055) via DMA.
On a microcontroller without DMA I would try to minimize the I/O wait by only reading the necessary registers of the device, and only reading them as often as necessary (for example, the calibration status only needs to be read about once per second).
But with DMA I am thinking about just reading the first 64 registers (that's where the information is stored) at once and storing them in memory. My library is then reduced to a little bit of HAL code in the interrupt and some defines for the register offsets. Furthermore, reading values from the sensor is nothing more than reading some values from RAM, which reduces I/O wait by a lot.
Assuming I have enough memory, what problems occur with this solution? What are the advantages of the "traditional" approach as opposed to my approach?
|
When I run
roslaunch turtlebot_gazebo turtlebot_world.launch
I am getting this error:
> from defusedxml.xmlrpc import monkey_patch ImportError: No module
> named defusedxml.xmlrpc while processing
> /opt/ros/kinetic/share/turtlebot_gazebo/launch/includes/kobuki.launch.xml:
> Invalid tag: Cannot load command parameter [robot_description]:
> command [/opt/ros/kinetic/share/xacro/xacro.py
> '/opt/ros/kinetic/share/turtlebot_description/robots/kobuki_hexagons_asus_xtion_pro.urdf.xacro']
> returned with code [1].
Param xml is <param command="$(arg urdf_file)" name="robot_description"/>
Param xml is The traceback for the exception was written to the log file
|
I am using a Raspberry Pi to practice low-level development with a differential drive robot; bare metal, without an OS underneath.
I've gotten to the point where I have a very small, single-purpose kernel that drives a couple of DC motors and reads an IR sensor over I2C with an analog-to-digital converter.
I have also started writing code for a PID controller: so far so good.
I use a bootloader over USB-to-UART to update the Pi with new versions of the kernel, and I have also started looking at the Check unit test framework, so I can avoid too much trial and error.
However, much of the online material about robotics uses simulators (which makes great sense).
I am familiar with Gazebo, and the one from GA Tech as well, and this lists a great number of others.
However, I am having trouble understanding how I can use my bare-metal C code with one of these simulators.
|
I have purchased a couple of nifty little planetary gear motors and would like to incorporate them into my next project. Could anyone help identify whether the connector is a commonly used one and whether I could purchase an adapter to make it easy to engage with other custom parts? I have a 3D printer, but it is only an FDM one and not quite precise enough to print the part myself. Thanks.
|
I am a beginner in robotics and I have just come across the concepts of differential and steer drive. I don't understand them very clearly yet.
I found the following write up describing differential drive as follows:
If the angular velocities are identical in terms of both values and
direction, i.e. if both the wheels are driven at the same speed and
same direction (either clockwise or anticlockwise) then the robot
tends to spin around its vertical axis. This complete turn capability
is one of the greatest advantages of a differentially driven robot
(a.k.a zero radius turn).
Being a complete beginner, I concede I could be entirely incorrect in disagreeing with the above.
I just imagine that "both wheels" driving at the same speed and direction (clockwise or anticlockwise) would mean the robot is travelling linearly, i.e. straight ahead.
I don't understand why the writer says the robot is actually spinning around its Z axis.
If I'm wrong, please explain what I'm missing.
And please, let me have links to material that would help me understand differential and steer drive.
Thanks in advance.
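For what it's worth, here is the small sketch I keep coming back to: the standard differential-drive forward kinematics as I currently understand it (the wheel radius, wheel separation, speeds and sign convention below are all my own assumptions, not from the write-up).
# Differential-drive forward kinematics sketch. Convention: a wheel's angular
# velocity is positive when that wheel pushes the robot forward.
r = 0.05   # wheel radius [m] (made-up value)
L = 0.30   # distance between the two wheels [m] (made-up value)

def body_twist(w_left, w_right):
    v = r * (w_right + w_left) / 2.0    # forward speed of the robot body
    omega = r * (w_right - w_left) / L  # yaw rate about the vertical axis
    return v, omega

print(body_twist(10.0, 10.0))    # both wheels pushing forward -> (0.5, 0.0): straight line
print(body_twist(10.0, -10.0))   # wheels pushing opposite ways -> (0.0, -3.33): spin in place
If I plug the quote's "same rotational sense" case into this convention as opposite signs (since the wheels face opposite directions), I get a spin in place, which might be what the writer means, but I am not sure this reading is right.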
|
I have tried Simscape and Gazebo, but I did not find a solution for simulating a swimming pool. Can somebody help with this? Do I have to create the whole swimming pool using simple rectangular shapes and so on? How do I add water to it?
|
I am trying to have a Kalman filter (or extended KF) give me positions for a small remotely controlled vehicle with an Ackermann steering geometry (moving on a plane surface). The control commands I can send to the vehicle are wheel speed and steering angle. To measure the position I have available an inertial measurement unit, a gyroscope (electronic), odometry, and in the future an indoor positioning system. Trying to set up the filter, I am already stuck at the prediction phase. I assume my state vector is something like
$
x=\begin{bmatrix}
x\\
v_x\\
a_x\\
y\\
v_y\\
a_y
\end{bmatrix}
$
and having read an introduction for Kalman filters I think that I should get the new state by doing
$x_k = A x_{k-1} + B u_{k-1}$
where A is the predictor matrix, and $u_{k-1}$ the control input which I assume to be
$
u=\begin{bmatrix}
\omega\\
\alpha
\end{bmatrix}$
, where $\omega$ would be the wheel speed and $\alpha$ the steering angle. I guess A is something like
$
A=\begin{bmatrix}
1 & \Delta t &\Delta \frac{1}{2}t^2 & 0 & 0 & 0\\
0&1&\Delta t &0 &0 &0\\
0&0&1 &0 &0 &0\\
0&0&0 &1 & \Delta t &\Delta \frac{1}{2}t^2\\
0&0&0 &0 &1 &\Delta t\\
0&0&0 &0 &0 &1\\
\end{bmatrix}
$
However, I don't really know how I should deal with the control input. For one, the velocity change due to a steering angle $\alpha$ is not linear; is this a problem? Also, my control input for the wheel speed is basically the velocity already present in $A$. Should I then only use the wheel speed change as the control input?
Furthermore: I get updates from my sensors every 10 ms. I wonder how fast the motor will react to my steering commands. Could it be that it makes sense to neglect the control inputs altogether?
Thank you for your help!
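To show where I am, here is a minimal sketch of just the prediction step I described, using the $A$ above (the time step, covariances and initial state are made-up values, and there is no $B$ yet, which is exactly the part I am unsure about):
import numpy as np

dt = 0.01   # 10 ms between sensor updates

A = np.array([
    [1, dt, 0.5*dt**2, 0,  0,  0        ],
    [0, 1,  dt,        0,  0,  0        ],
    [0, 0,  1,         0,  0,  0        ],
    [0, 0,  0,         1,  dt, 0.5*dt**2],
    [0, 0,  0,         0,  1,  dt       ],
    [0, 0,  0,         0,  0,  1        ],
])

x = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])  # e.g. moving at 1 m/s along x
P = np.eye(6) * 0.1                           # state covariance (guess)
Q = np.eye(6) * 1e-4                          # process noise (tuning value)

# Prediction without a control model: the motion is carried entirely by A, and
# the effect of wheel speed / steering ends up lumped into Q until a proper B
# (or a nonlinear process model for an EKF) is written down.
x_pred = A @ x
P_pred = A @ P @ A.T + Q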
|
I have 2x 350W wheel-hub motors taken from a self-balancing scooter, or hoverboard. My intention is to use the two of them to drive a homemade R2-D2. However, some difficulties have been encountered that make it seem as though I am missing something trivial.
The ESC in question is a Hobbyking Quicrun 16BL30, driven by an HJ Servo Tester. Power supplies used include a 7.2V 1200mAh Ni-Cd battery, a 9.6V 900mAh Ni-Cd battery, and the 12V 18A rail of an ATX power supply. In testing configuration, the wheel-hub motor is clamped to the desk (no physical load). The ESC was programmed to the following values:
Running Mode: Fwd/Rev.
Drag Brake Force: 0%.
Low Voltage Cutoff: Disable.
Start Mode (Punch): Level 5.
Max. Brake Force: Disable.
Max. Reverse Force: 100%.
Initial Brake Force: 0%.
Throttle Range: 9% (Normal).
Timing: 11.25 deg.
Upon calibration and power-up, everything goes fine. However, when driving forward/backward, the motor only runs for 3 seconds before stopping. The ESC light remains indicative of a load in that direction. When returned to neutral, then to run, it will go again. If this is done 3 or 4 times, the ESC light will remain red when returned to neutral (indicative of an incorrect signal, or a faulty connection to the motor). The ESC only needs a reset to allow 3-second drives again. If alternated between forward and neutral at 1-second intervals, it will continue running.
The instructions, manufacturer forums, and Google have failed to reveal another user encountering this problem. In a prior session, before accidentally resetting the ESC to all-default values, the wheel functioned perfectly for a few minutes. I feel as though I must be missing something trivial.
What am I missing?
|
I purchased one of these motor drivers and noticed that I was not getting as much time from my battery as I had expected. I tested the current draw when connecting my motor directly to the battery and then when connected via the controller. The results were disappointing: I got 70 mA with the direct connection and 144 mA using the controller, so more than double the current! I then tried through a MOSFET and that was good: 73 mA.
If I want a more efficient circuit, am I going to have to build an H-bridge myself, or am I just using a rubbish controller? Thanks.
|
I'm trying to find the most simplified (though accurate) calculations for the dynamic model of a standard 6DOF serial robot (PUMA robot).
I found this great paper:
https://www-cs.stanford.edu/group/manips/publications/pdfs/Armstrong_1986.pdf
The paper contains the full equations with 28 constants of the system, and it specifies the calculation for each constant.
However, I have a problem regarding the 6th link. It has the inertia symmetric values - Ixx6 Iyy6 Izz6, but Iyy6 never appears in any constant.
The specific robot in the paper (and actually it should hold for every robot) has a 6th link for which Ixx6 = Iyy6, but the same is true for Ixx5 and Iyy5 and still they both appear in the constants.
Do you think this is a mistake?
I need the exact equations as I want to expand them to the case of a payload - I want to replace m6 and I6 with updated values that include the payload mass and transferred inertia, and then the equality Ixx6 = Iyy6 will not necessarily hold.
Do you know of any other source for the full equations? Or maybe you have a way to tell which occurrences of Ixx6 actually stand for Ixx6 and which for Iyy6?
|
I am trying to design a very rigid tilting mechanism for my router. For this purpose I had a look at table saws, which seem to be supported by 2 mechanisms, one in front, one at the back, called "trunnions".
Such a trunnion consists of part of a circle, with a circular precision-ground track, and a mating part. The arc is such that the center of rotation coincides with the surface of the table.
However, when looking for "trunnion", I find medieval cannons as well as brand/product-specific constructions, which seem rather expensive. Does such a mechanism have other names? The router in question is a 20 kg model, just as an indication of size. My last attempt was "circular track bearing", but I doubt that I am looking in the right direction.
Could anyone provide a pointer?
Thanks!
|
A robotic system is moving and getting its current location using visual odometry (incremental estimation of rotation and translation). To correct its localization, a loop closure technique may be added. However, this technique, based on image comparison, is costly. I want to find a way to use this loop closure technique only when a potential loop closure is present.
Is there a way to use the $x$ and $y$ coordinates of the vehicle provided by visual odometry to say that, based on their values, a loop closure may have occurred?
|
I'm assuming that since torque translates into acceleration, the basic transfer function from torque to position becomes
$1/s^3$
Does this mean that three PID controllers are required to properly control the process, i.e. acceleration, velocity and finally position? Perhaps a two-stage position + velocity controller can decently approximate the solution, but from a mathematical standpoint, how many stages are actually needed for optimal control?
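To make the question concrete, here is a toy sketch of the two-stage (position + velocity) idea on a plain double-integrator plant, where the torque acts through an inertia; the gains, inertia and time step are arbitrary illustration values, not a claim about how many stages are actually needed:
# Cascaded P-position / P-velocity loops on a double integrator.
J, dt = 1.0, 0.001          # inertia and simulation step (made-up)
kp_pos, kp_vel = 20.0, 5.0  # outer and inner loop gains (made-up)

pos, vel = 0.0, 0.0
pos_target = 1.0

for _ in range(5000):
    vel_cmd = kp_pos * (pos_target - pos)   # outer loop: position error -> velocity setpoint
    torque = kp_vel * (vel_cmd - vel)       # inner loop: velocity error -> torque command
    acc = torque / J                        # plant: torque -> acceleration
    vel += acc * dt                         # -> velocity
    pos += vel * dt                         # -> position

print(pos)  # converges towards 1.0 (underdamped with these particular gains)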
|
I've been investigating S-Curve motion profiles for CNC router and 3D printer applications, and haven't come across any definitive source that says an S-Curve profile is necessary in any application. And in fact, it may even slow down a print.
This simulator compares S-Curve and trapezoidal motion profiles, showing that with the same velocity and acceleration, a move will take longer to complete. Interestingly, the maximum acceleration of the S-Curve profile can be increased a small amount while keeping the maximum positional error (and force) beneath that of the trapezoidal profile. This is because the slow ramp in acceleration allows the driven object to accelerate closer to the target speed, thus reducing the overall force.
In theory, an instantaneous change in force will also excite natural vibrations in a system as well (in the same way connecting an inductor/capacitor directly to a 12v source induces ringing). Is this a problem in real-world mechanical systems? And in those systems, would using an S-Curve profile reduce the ringing?
|
I am working on a project in which I have a robot driven by stepper motors. I wish to control this robot with Simulink. How can I convert the input (an analog sine input) into the PWM that can be used as input for the system?
The robot is based on an Arduino 2560.
|
I am trying to compute the relative pose between two cameras using their captured images through the usual way of feature correspondences. I use these feature matches to compute the essential matrix, decomposing which results in the rotation and translation between both the views. I am currently using David Nister's 5 point algorithm to compute the essential matrix and subsequently the relative pose. Once I compute this relative pose:
How can I find the uncertainty of this measurement? Should I try to refine the essential matrix itself (using the epipolar error), which results in the essential matrix's covariance and is it possible to find the pose covariance from this? Or is there another way to find the uncertainty in this pose directly?
There is also another issue in play here: While I am computing the relative pose of camera 2 (call it $P_2$) from camera 1, the pose of camera 1 (say $P_1$) would have its own covariance $\Sigma_1$. How can I consider the effect of this on the covariance of $P_2$ ($\Sigma_2$)?
|
I am currently trying to register a pointcloud in time to find my change in position and heading at each timestep (High speed application). So this is essentially an implementation of SLAM. I am currently using ICP with an SVD rotation solver to try to find rotation and translation. This solution works with simulated pointclouds.
The issue is that reobservation of previous points is non-deterministic for the type of scanner I am using. So this makes neighbor matching between frames difficult.
Is there any preprocessing I can do to get better matches in the neighbor-finding step? Or are there other methods for point cloud registration that are more robust to noise than SVD-based ICP?
|
I’ve developed a basic robot that navigates around my son’s Duplo train track. It works fine on the flat parts, but once it reaches the incline at the start of a bridge, the wheels just spin and the robot stays at the bottom of the incline.
I’ve 3D printed most of the parts using PLA for everything, but printed tyres using TPU. The tyres definitely help, as the wheels were originally spinning even on the flat. Moving the batteries (2xAA) to sit over the drive axle also helped.
What else can I do to increase the traction and give my robot a chance of making it up the hill?
|
So I thought I understood well enough what a Jacobian was (in the context of an $n$-DOF robot) -- a function that takes a vector of n joint positions and returns an $n \times n$ matrix that can be multiplied with a vector of $n$ joint velocities to return a velocity vector for the end effector.
I'm using ROS and MoveIt, so I actually already have a function to calculate the Jacobian for my robot from the URDF.
However, I'm reading lecture notes from the 2005 MIT Intro to Robotics course, and in one (mission-critical, it seems) portion of chapter 7 (between pages 11 and 12), he refers to "$3 \times n$ Jacobian matrices relating the centroid linear velocity and the angular velocity of the $i^\text{th}$ link to joint" as $J^L$ and $J^A$.
He introduces Jacobians in Chapter 5, and indeed I looked through all the rest of the course material, and I don't think he ever explains what these matrices are or how to compute them.
Could someone enlighten me as to what he's talking about?
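For reference, my current best guess (taken from the standard per-link Jacobian construction in other textbooks, not from anything stated in these notes) is that, for link $i$ with revolute joints, column $j$ of these matrices would be
$$
J^{L}_{j} =
\begin{cases}
z_{j-1} \times \left(p_{c_i} - p_{j-1}\right), & j \le i\\
0, & j > i
\end{cases}
\qquad
J^{A}_{j} =
\begin{cases}
z_{j-1}, & j \le i\\
0, & j > i
\end{cases}
$$
where $z_{j-1}$ is the axis of joint $j$, $p_{j-1}$ is its origin, and $p_{c_i}$ is the centroid of link $i$ — but I would like to confirm whether that is what he means.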
|
I'm working on a particle filter implementation in MATLAB and I found a very good one in the Robotics Toolbox (available here https://github.com/petercorke/robotics-toolbox-matlab). My problem is that I really don't know how to modify it in order to use it with a laser scanner. I know that with a known map the laser beam will give the distance and angle (kind of like RangeAndBearing), but I haven't understood well how the likelihood field of the map fits into this.
In the end my question is how can I get the RangeAndBearing Measurement from a Laser scanner in order to use it in the particle filter?
Thanks in advance
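To make the question concrete, the only step I am confident about is the scan geometry itself; here is a Python sketch with made-up scan parameters (the field names mirror a typical laser-scan message, and the landmark coordinates are placeholders):
import numpy as np

angle_min = -np.pi / 2             # bearing of the first beam
angle_increment = np.deg2rad(1.0)  # angular step between beams
ranges = np.random.uniform(0.5, 5.0, 181)  # fake range data

# Per-beam (range, bearing) pairs straight from the scan geometry
bearings = angle_min + np.arange(len(ranges)) * angle_increment
beams = np.column_stack([ranges, bearings])

# If a landmark at (x_l, y_l) in the robot frame were identified in the scan,
# its range/bearing observation would simply be:
x_l, y_l = 2.0, 1.0
z = np.array([np.hypot(x_l, y_l), np.arctan2(y_l, x_l)])
What I don't see is how to go from these raw beams to something the RangeAndBearing-style sensor model (or the likelihood field) can consume.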
|
When I run the roslaunch mavros px4.launch command, the process starts and then shuts down.
> ... logging to /home/yograj/.ros/log/77663396-f411-11e7-b93c-3c77e68de09b/roslaunch-yograj-Inspiron-5537-12518.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
>
> started roslaunch server http://yograj-Inspiron-5537:40223/
>
> SUMMARY
>
> ========
>
> CLEAR PARAMETERS
>
> * /mavros/
> PARAMETERS
>
> * /mavros/cmd/use_comp_id_system_control: False
> * /mavros/conn/heartbeat_rate: 1.0
> * /mavros/conn/system_time_rate: 1.0
> * /mavros/conn/timeout: 10.0
> * /mavros/conn/timesync_rate: 10.0
> * /mavros/distance_sensor/hrlv_ez4_pub/field_of_view: 0.0
> * /mavros/distance_sensor/hrlv_ez4_pub/frame_id: hrlv_ez4_sonar
> * /mavros/distance_sensor/hrlv_ez4_pub/id: 0
> * /mavros/distance_sensor/hrlv_ez4_pub/orientation: ROLL_180
> * /mavros/distance_sensor/hrlv_ez4_pub/send_tf: True
> * /mavros/distance_sensor/hrlv_ez4_pub/sensor_position/x: 0.0
> * /mavros/distance_sensor/hrlv_ez4_pub/sensor_position/y: 0.0
> * /mavros/distance_sensor/hrlv_ez4_pub/sensor_position/z: -0.1
> * /mavros/distance_sensor/laser_1_sub/id: 3
> * /mavros/distance_sensor/laser_1_sub/orientation: ROLL_180
> * /mavros/distance_sensor/laser_1_sub/subscriber: True
> * /mavros/distance_sensor/lidarlite_pub/field_of_view: 0.0
> * /mavros/distance_sensor/lidarlite_pub/frame_id: lidarlite_laser
> * /mavros/distance_sensor/lidarlite_pub/id: 1
> * /mavros/distance_sensor/lidarlite_pub/orientation: ROLL_180
> * /mavros/distance_sensor/lidarlite_pub/send_tf: True
> * /mavros/distance_sensor/lidarlite_pub/sensor_position/x: 0.0
> * /mavros/distance_sensor/lidarlite_pub/sensor_position/y: 0.0
> * /mavros/distance_sensor/lidarlite_pub/sensor_position/z: -0.1
> * /mavros/distance_sensor/sonar_1_sub/id: 2
> * /mavros/distance_sensor/sonar_1_sub/orientation: ROLL_180
> * /mavros/distance_sensor/sonar_1_sub/subscriber: True
> * /mavros/fake_gps/eph: 2.0
> * /mavros/fake_gps/epv: 2.0
> * /mavros/fake_gps/fix_type: 3
> * /mavros/fake_gps/geo_origin/alt: 408.0
> * /mavros/fake_gps/geo_origin/lat: 47.3667
> * /mavros/fake_gps/geo_origin/lon: 8.55
> * /mavros/fake_gps/gps_rate: 5.0
> * /mavros/fake_gps/mocap_transform: True
> * /mavros/fake_gps/satellites_visible: 5
> * /mavros/fake_gps/tf/child_frame_id: fix
> * /mavros/fake_gps/tf/frame_id: map
> * /mavros/fake_gps/tf/listen: False
> * /mavros/fake_gps/tf/rate_limit: 10.0
> * /mavros/fake_gps/tf/send: False
> * /mavros/fake_gps/use_mocap: True
> * /mavros/fake_gps/use_vision: False
> * /mavros/fcu_url: udp://:14540@127....
> * /mavros/gcs_url:
> * /mavros/global_position/child_frame_id: base_link
> * /mavros/global_position/frame_id: map
> * /mavros/global_position/rot_covariance: 99999.0
> * /mavros/global_position/tf/child_frame_id: base_link
> * /mavros/global_position/tf/frame_id: map
> * /mavros/global_position/tf/global_frame_id: earth
> * /mavros/global_position/tf/send: False
> * /mavros/global_position/use_relative_alt: True
> * /mavros/image/frame_id: px4flow
> * /mavros/imu/angular_velocity_stdev: 0.000349065850399
> * /mavros/imu/frame_id: base_link
> * /mavros/imu/linear_acceleration_stdev: 0.0003
> * /mavros/imu/magnetic_stdev: 0.0
> * /mavros/imu/orientation_stdev: 1.0
> * /mavros/local_position/frame_id: map
> * /mavros/local_position/tf/child_frame_id: base_link
> * /mavros/local_position/tf/frame_id: map
> * /mavros/local_position/tf/send: False
> * /mavros/local_position/tf/send_fcu: False
> * /mavros/mission/pull_after_gcs: True
> * /mavros/mocap/use_pose: True
> * /mavros/mocap/use_tf: False
> * /mavros/odometry/estimator_type: 3
> * /mavros/odometry/frame_tf/desired_frame: ned
> * /mavros/plugin_blacklist: ['safety_area', '...
> * /mavros/plugin_whitelist: []
> * /mavros/px4flow/frame_id: px4flow
> * /mavros/px4flow/ranger_fov: 0.118682389136
> * /mavros/px4flow/ranger_max_range: 5.0
> * /mavros/px4flow/ranger_min_range: 0.3
> * /mavros/safety_area/p1/x: 1.0
> * /mavros/safety_area/p1/y: 1.0
> * /mavros/safety_area/p1/z: 1.0
> * /mavros/safety_area/p2/x: -1.0
> * /mavros/safety_area/p2/y: -1.0
> * /mavros/safety_area/p2/z: -1.0
> * /mavros/setpoint_accel/send_force: False
> * /mavros/setpoint_attitude/reverse_thrust: False
> * /mavros/setpoint_attitude/tf/child_frame_id: target_attitude
> * /mavros/setpoint_attitude/tf/frame_id: map
> * /mavros/setpoint_attitude/tf/listen: False
> * /mavros/setpoint_attitude/tf/rate_limit: 50.0
> * /mavros/setpoint_attitude/use_quaternion: False
> * /mavros/setpoint_position/mav_frame: LOCAL_NED
> * /mavros/setpoint_position/tf/child_frame_id: target_position
> * /mavros/setpoint_position/tf/frame_id: map
> * /mavros/setpoint_position/tf/listen: False
> * /mavros/setpoint_position/tf/rate_limit: 50.0
> * /mavros/setpoint_velocity/mav_frame: LOCAL_NED
> * /mavros/startup_px4_usb_quirk: True
> * /mavros/sys/disable_diag: False
> * /mavros/sys/min_voltage: 10.0
> * /mavros/target_component_id: 1
> * /mavros/target_system_id: 1
> * /mavros/tdr_radio/low_rssi: 40
> * /mavros/time/time_ref_source: fcu
> * /mavros/time/timesync_avg_alpha: 0.6
> * /mavros/time/timesync_mode: MAVLINK
> * /mavros/vibration/frame_id: base_link
> * /mavros/vision_pose/tf/child_frame_id: vision_estimate
> * /mavros/vision_pose/tf/frame_id: map
> * /mavros/vision_pose/tf/listen: False
> * /mavros/vision_pose/tf/rate_limit: 10.0
> * /mavros/vision_speed/listen_twist: False
> * /rosdistro: kinetic
> * /rosversion: 1.12.12
>
> NODES
> /
> mavros (mavros/mavros_node)
>
> ROS_MASTER_URI=http://localhost:11311
>
> process[mavros-1]: started with pid [12536]
> [FATAL] [1515375213.809935923]: UAS: GeographicLib exception: File not readable /usr/share/GeographicLib/geoids/egm96-5.pgm | Run install_geographiclib_dataset.sh script in order to install Geoid Model dataset!
>
> ================================================================================
>
> REQUIRED process [mavros-1] has died!
> process has finished cleanly
> log file: /home/yograj/.ros/log/77663396-f411-11e7-b93c-3c77e68de09b/mavros-1*.log
> Initiating shutdown!
>
> ================================================================================
>
> [mavros-1] killing on exit
> shutting down processing monitor...
> ... shutting down processing monitor complete
> done
|
How much weight can the following brushed motors carry together?
4x 12 V, 100 rpm, supplied voltage => (5/17)*12
2x 12 V, 100 rpm, supplied voltage => (7/17)*12
These motors are basically the wheels for my battle bot.
Here is the link to the motor.
I am new to this, so please tell me if I'm wrong somewhere.
|
I run the following command to install GeographicLib datasets for mavros inside the catkin_ws directory (catkin workspace).
./src/mavros/mavros/scripts/install_geographiclib_datasets.sh
This script require root privileges!
but I get that message in return. What does it mean, and how do I get the above mavros dependency (GeographicLib datasets) installed successfully?
Before this command, I followed the source installation process as mentioned here.
|
I've seen this equation for calculating the dynamics of a robotic arm a bunch:
$\boldsymbol{\tau} = \boldsymbol{M}(\boldsymbol{q})\ddot{\boldsymbol{q}} + \boldsymbol{C}(\boldsymbol{q},\dot{\boldsymbol{q}})\dot{\boldsymbol{q}} + \boldsymbol{G}(\boldsymbol{q})$
Now, I believe I have the ${M}$ and ${G}$ terms calculated properly (though not through single matrices, which perhaps is an error in itself) as well as a reasonably good PID controller, so I've been researching how to get ${C}$, which represents both centrifugal and Coriolis effects. My robot is pretty unstable without it, but I cannot figure out how to compute it. I don't have access to MATLAB; I'm using C++ with ROS and MoveIt!, so I can easily get the Jacobians and many other features of my robot.
Can anyone help me out? Everyone seems to just say something along the lines of "Now calculate ${C}$..."
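For context, the closest I have gotten on my own is the Christoffel-symbol construction, which only needs the mass matrix as a function of $q$. A rough numerical sketch (the M_fn argument is a placeholder for whatever computes the $n \times n$ inertia matrix, and the toy example at the bottom is made up):
import numpy as np

def coriolis_matrix(M_fn, q, qdot, eps=1e-6):
    # Build C(q, qdot) from finite-difference partials of the mass matrix.
    n = len(q)
    M0 = M_fn(q)
    dM = np.zeros((n, n, n))          # dM[k][i, j] ~ dM_ij / dq_k
    for k in range(n):
        dq = np.zeros(n)
        dq[k] = eps
        dM[k] = (M_fn(q + dq) - M0) / eps
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Christoffel symbols of the first kind
                c_ijk = 0.5 * (dM[k][i, j] + dM[j][i, k] - dM[i][j, k])
                C[i, j] += c_ijk * qdot[k]
    return C

# Toy 2-DOF inertia matrix, just to exercise the function
M_toy = lambda q: np.array([[2.0 + np.cos(q[1]), 0.1], [0.1, 1.0]])
print(coriolis_matrix(M_toy, np.array([0.0, 0.5]), np.array([0.1, 0.2])))
Is that a sensible way to do it numerically in C++ as well (I could port the same loops), or is there a smarter route given that I already have the Jacobians from MoveIt?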
|
I have seen a lot of academic papers, but no framework that I can download and use. There are even projects, such as OpenCog, which might be usable in a few years.
I would like to know of the existence of any open-source goal/cognitive/planning frameworks now.
There is a lot of software for motion planning, but I would like a way to determine what the robot wants to do before planning how to get there.
|
I have built a simple robot which will move from a source to destination.
I want to record a 2d path which the robot has taken in real time.
Something similar to the image I have taken (similar, not exactly the same).
Is there any tool or software which will allow me to do this?
If not, is it possible to generate a similar map, not in real time, but after the robot has reached its destination?
For example, while traversing, the robot will send out information like straight, straight, left, left, straight, right, straight, and so on.
Can this data be saved and a map generated? The map can be as simple as just the path of the robot.
The robot is a simple one built using a Raspberry Pi and an Arduino. It has components like a camera, an ultrasonic sensor, a servo motor, etc.
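To show what I mean by generating the map after the run, here is a rough dead-reckoning sketch that turns such a command log into a 2D path (the step length, turn angle and command list are made-up values):
import math
import matplotlib.pyplot as plt

commands = ["straight", "straight", "left", "straight", "right", "straight"]
step, turn = 0.1, math.pi / 2   # 10 cm per step, 90-degree turns (made-up)

x, y, heading = 0.0, 0.0, 0.0
xs, ys = [x], [y]
for cmd in commands:
    if cmd == "left":
        heading += turn
    elif cmd == "right":
        heading -= turn
    else:  # "straight"
        x += step * math.cos(heading)
        y += step * math.sin(heading)
    xs.append(x)
    ys.append(y)

plt.plot(xs, ys, "-o")
plt.axis("equal")
plt.savefig("path.png")   # the recorded path as a simple map
Is there an existing tool that does this (ideally in real time), or is rolling something like the above the usual approach?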
|
I have never built a robot before and I am looking to get started. The problem is that I have no idea what parts I will need... Below is a list of things I want the robot to do.
Have a speaker
Be able to sense objects around him so that he doesn't bump into them.
Understand different commands
So pretty much I want to be able to tell my robot to go to the kitchen and he could move over there and tell the person in the kitchen what that person wants. The person would then put it on the robot and he would drive back to the person that made the request. (I know there is not much use to this robot but I think it would still be pretty cool)
I am not worried about learning programming languages so it doesn't matter what languages the board supports. (Have some basic knowledge in Java and Python).
Thanks!
|
Many articles refer to algorithms such as A*, PRM, or RRT-based planners as motion planning algorithms, which seems unreasonable to me, since it is still necessary to parametrize the found path with time. I wonder why? However, that is not the main problem.
The issue is that of generating a motion profile based on a given path. If there were a small number of points, maybe 3 or 4, it would be relatively easy to generate some type of speed profile to follow it. Unfortunately, the output of A* or any of the mentioned algorithms can have a large number of points. Therefore it is not clear how to build a motion profile based on the given points. Could anyone please explain to me how it should be done?
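To make the question concrete, the best I have come up with is to run a trapezoidal speed profile over the cumulative arc length of the point list and assign a timestamp to every point; a rough sketch (the path, v_max and a_max are made-up values):
import numpy as np

path = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.4], [1.5, 0.9], [2.0, 1.5]])
v_max, a_max = 0.5, 0.25

# Cumulative arc length along the path
seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(seg)])
total = s[-1]

def speed_at(si):
    # Trapezoidal profile: limited by acceleration from the start,
    # deceleration towards the end, and the cruise speed.
    v_acc = np.sqrt(2 * a_max * si)
    v_dec = np.sqrt(2 * a_max * (total - si))
    return max(min(v_acc, v_dec, v_max), 1e-6)

# Integrate dt = ds / v along the path to get a time for every waypoint
t = [0.0]
for i in range(1, len(s)):
    ds = s[i] - s[i - 1]
    v_avg = 0.5 * (speed_at(s[i - 1]) + speed_at(s[i]))
    t.append(t[-1] + ds / v_avg)

print(list(zip(t, path.tolist())))
Is this the right general idea for a path with hundreds of points, or is there a standard method I should be using instead?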
|
Motor
I am going to use these 100 rpm, 12 V motors with the supplied power given below.
Surface
The surface will be plain wood. The area of contact between the 6 tyres and the surface would be a maximum of 4 cm² in total.
Use
They are basically the tyres for my battle bot.
Direction
All 6 motors will work in the same direction to move my bot forward, not climbing nor coming down.
Tyre
I'm going to use these tyres, but with a different diameter for each motor (information given below).
Structure & Design
Here's a scaled 3D sketch of my battle bot, in case you need it.
Voltage & Tyre Diameter
4 motors with 7 cm diameter and a power ratio of 5 (each).
2 motors with 5 cm diameter and a power ratio of 7 (each).
What I want is the relation between supplied voltage, supplied current, weight of the bot, and the speed at which the bot can move.
I'm new to all this, so please tell me if you need more information about anything or if I have gone wrong somewhere.
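For what it's worth, the only part I can calculate myself is the purely kinematic piece: how fast the bot would move if the wheels actually turned at the rated no-load speed (no load, no slip). Current draw under load and the effect of the bot's weight depend on the motor's torque curve, which this sketch does not capture:
import math

rpm = 100.0                  # rated no-load speed from above
for d_cm in (7.0, 5.0):      # the two tyre diameters
    v = math.pi * (d_cm / 100.0) * rpm / 60.0   # m/s = pi * D * (rev/s)
    print(f"{d_cm} cm tyre: {v:.3f} m/s")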
|
I have a geared DC motor with hall speed sensors. I want to count signals from the sensor to get position of the motor. Hall sensor has resolution of 12 CPR. Gear ratio is 1:810, which means that I don't really need very precise measurement to get close to the desired position.
But in reality it doesn't work. I run the motor at the same direction trying to get exactly 360 degrees. Sometimes I am close to the position I set, sometimes I am too far (like 300 degrees instead of 360).
The code is simple. I just attached the Hall sensor to an interrupt pin and count rising edges of the signal.
Has anybody tried this? Does anybody know of an obvious problem that would cause me to fail so miserably?
Here is the motor I am using:
|
There are many control methods that sound great and have nice math, etc., but many of them are not realistic in real life.
In robotics, adaptive control is widely applied because adaptive control makes sure that the controller is always up to date and well matched to the robot.
But then there is another control method called robust control. In fact, robust control is only a design method: it's still an LQR/LQG/PID inside, but the parameters are designed so the controller has the best robustness against uncertainties.
Then my question is: is robust control widely applied in robotics, or is robust control just another design method that does not work in reality, only in ideal simulation?
|
I wrote an Arduino program for my quadcopter that sets the power of the 4 ESCs in software. Now I need to incorporate the gyro and add a complementary filter to stabilize it.
I am currently using an MPU-6050 for the job.
I primarily want to know what the best approach is for compensating, using the gyro, for what the receiver commands.
Should I use the angle from my receiver and program my quad to achieve that angle, or should I try to control the angular rate?
e.g.
angular rate:
a receiver roll stick movement produces an angular rate that my PID compares with the angular rate from the gyro, and it alters the values for the ESCs.
e.g.
angle:
a receiver roll stick movement produces an angle that my PID compares with the ANGLE from the gyro + accelerometer, and it then alters my ESC values.
|
I have a basic question. Imagine a plate with a hole drilled in each corner. Imagine 4 shafts in these 4 holes, attached to another plate.
Intuition tells me that it would take considerable effort to make the design such that the plate slides without getting stuck.
Let's now replace one of those shafts with a leadscrew. Designs exist where the leadscrew would allow one to smoothly raise and lower said plate.
My intuition, however, indicates that I could never get such a conceptually simple design to work, even with plain bearings.
What are the conditions to get this right?
|
I want to control a motor using the EtherCAT protocol.
1. I used the IgH EtherCAT master to control the motor in hard real time (1 kHz).
2. Later I want to integrate this into ros_control with hard real-time control. The developers of ros_control state that ros_control is real-time capable.
3. If I want to create other applications using ROS/MoveIt/ROS-Android on top of ros_control and the EtherCAT driver, can we call the system hard real-time (ros_control + IgH (both hard real-time) and ROS (not real-time))?
|
We are currently making a balancing robot as a school project.
The robot has a gyroscope and an encoder to get the angle of the robot and the rpm of the wheels. By using the current angle of the robot and defining a target angle we are able to make the robot stand still and not fall over.
We don't understand how to incorporate the encoder (which measures the angular velocity of the wheels) to make the robot stand still when the motor speed is changing. The robot only knows the angle it is at, so it won't compensate for the speed. Could one perhaps use the encoder in such a way that it dictates the target angle and keeps the robot still, or even better, controls it?
We would also like to know if there is a way to automatically adjust TargetAngle to the needed one, like if the floor angle were to change.
Here is a sample of our code:
error = TargetAngle - CurrentAngle;        // angle error
integral = integral + error*ki;            // accumulate the integral term (pre-scaled by ki)
derivative = (error - lastError)*kd;       // derivative term from the change in error
lastError = error;                         // remember the error for the next derivative
pidsum = derivative+error*kp+integral;     // this is what is used for throttling the engines
|
I don't understand how to calculate the ICC position with the given coordinates. I somehow just have to use basic trigonometry, but I just can't find a way to calculate the ICC position based on the given parameters $R$ and $\theta$.
Edit: Sorry guys, I forgot to include the drawing of the situation. Yes, ICC = Instantaneous Center of Curvature.
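For reference, the only candidate formula I have found so far is for a robot at pose $(x, y, \theta)$ with signed turning radius $R$ measured to its left (I cannot verify the sign convention against the missing drawing, so this is an assumption):
$$
ICC = \left(\, x - R\sin\theta,\;\; y + R\cos\theta \,\right)
$$
Is that the trigonometry the drawing is meant to illustrate?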
|
I am new to robotics and mechanical engineering. I am trying to understand how the Z-axis (and effector) of a SCARA robot can be driven by a stepper motor with its shaft coaxially aligned along the same Z axis. In the photo below, one can clearly see the Z-axis lead screw and effector on the SCARA arm. I am familiar with lead screws in X-Y stages, where a block is pushed along the length of the screw by rotation. In this case, the lead screw itself is being translated.
(1) What mechanism can be used to cause the lead screw itself to translate along the Z-axis?
(2) How can the effector (say a robotic claw) be prevented from rotating while it is in motion?
I have seen so-called "through type" stepper motors (see below) in which the lead screw moves right through the motor. Since this type of motor seems less prevalent on markets such as Amazon, and more expensive, I would like to use a standard stepper motor and actuate the lead screw mechanism using a belt. This stepper motor will have its shaft oriented along the Z axis (and will be "coaxial" with the effector arm).
|
For an accelerometer, the measurement is defined in the following way:
$$a_m = R_w^b(a_{w} - g) + b_a + v_a$$
Where $R$ is a rotation matrix, $g$ is gravity, $v_a$ is noise, and $b_a$ is the bias. With this, we can expose the bias and orientation error by rotating $g = \begin{bmatrix} 0 & 0 & 9.8 \end{bmatrix}^T$ and adding the bias:
$$h(x) = Rg + b_a$$
However, for a gyroscope, we don't have anything like gravity to expose this bias in the same way. How is this usually done?
|
Sometimes I find that a robot controller uses a critically damped reference signal, such as in an Autonomous Surface Vehicle (ASV) heading controller. My question is: how do we implement this reference signal in an actual robot implementation?
See the figure.
|
I'm writing an academic paper describing the calibration of a system. Part of that system is an absolute rotary encoder measuring the angle of a joint. However, the position at which the encoder gives a reading of '0' does not correspond to an angle of zero degrees (see edit). Rather than fussing with how the encoder is mounted, I simply found the value that the encoder returns when the joint is at zero degrees and subtract that value off of future measurements.
Is there a well-established* term to describe this point/value succinctly and, if so, what is it?
I have to imagine that this is an extremely common task in robotic manipulators and I'm having a hard time describing the process without a name for the point/value/position that I'm calibrating for. Zero point? Home position? Origin? Encoder offset?
*By well established, I simply mean something that someone familiar with the field of robotics will understand to mean what I'm trying to describe. My primary field is medical imaging/image guidance so I have no idea what terminology is or is not commonly understood.
Edit: 'zero degrees' meaning 'zero degrees with respect to the arm'. A reading of 0 indicates, by definition, zero degrees with respect to an arbitrary position on the encoder.
|
I am using a PNP algorithm to compute the rotation and translation of a camera given pre-mapped 3D points, and their corresponding projections on the 2D plane. The algorithm I am using is the one described in a CVPR 2017 paper, with implementations in both OpenCV and OpenMVG (I am using the latter).
Strangely, I am noticing a precise 'drift' in the position computed, that changes with the rotation angles. I.e.: If I hold the camera's position constant and just rotate it, according to the OpenCV coordinate convention, changes in pitch cause the position to drift in the Y direction, and changes in yaw cause the position to drift in the X direction (I have not tested roll yet.) When I tried to fit a curve to this dataset of pitch vs Y and yaw vs X values, I noticed a pretty constant variation. After removing scale factors and converting the angles into radians, this is the profile I see:
\begin{eqnarray}
X \approx -4.0 * \psi \\
Y \approx 4.0 * \theta
\end{eqnarray}
My translation units in this equation are meters but not true world units, because I removed the scale factor; angles are in radians. In true world units for the environment I tested in, the factor of 4.0 ended up being a factor of 20.0. Is this some sort of a commonly known transformation I am failing to account for? It's linear, so I am assuming it cannot be dependent on the rotation angle. I have skimmed through the paper and some relevant resources but I am unable to figure out the cause of this relationship.
Curve fitting results from MATLAB:
|
first question in this forum :D
So I started creating a 2-link robotic arm in Simulink and made a function that builds its trajectory (cubic spline) and used it as an input for my joint.
So my input was position and the output was the torque, etc.
Now I would like my input to be torque; how do I go about it?
Should I create a transfer function out of the dynamic equation, OR can I use the trajectory in some way, OR is there a simpler way?
If you have an answer, I would appreciate a detailed explanation of how to do that in Simulink.
|
My motor setup is vibrating too much for my intended project.
Here are my assembled parts :
ClearPath® - Integrated Servo System motor, p/n CPM-MCVC-2310S-RQN
two Actobotics 15 Tooth, Flanged XL timing pulley, p/n 615432
one Actobotics 3/8” wide, 10" long, 1/5 pitch XL timing belt, p/n B375-100XL
The motor is running under the velocity controlled option and is spinning 0-4000 rpms depending on a knob setting.
A few ideas: I was reading that with timing belts a larger pinion diameter could lessen vibrations. Should I search for something larger? The current pinions are around 1" in pitch diameter. Also, maybe the tension on the belt is not right? I'm not sure how to measure this or what tension I'm looking for. The assembly has slots, so I can move things by hand to make the belt tighter or looser.
I'm far from an expert and this is my first project. Any advice would be much appreciated.
|
I'm trying to implement a simple ROS node to perform Moving Least Squares filtering on a sensor_msgs/PointCloud2 topic.
I'm following this PCL tutorial, which uses the pcl/surface/mls.h file.
My code is at this GitHub page, but replicated below;
#include <ros/ros.h>
// PCL specific includes
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/surface/mls.h>
/**
* Simple class to allow applying a Moving Least Squares smoothing filter
*/
class MovingLeastSquares {
private:
double _search_radius;
public:
MovingLeastSquares(double search_radius = 0.03)
: _search_radius(search_radius)
{
// Pass
};
ros::Subscriber sub;
ros::Publisher pub;
void cloudCallback (const sensor_msgs::PointCloud2ConstPtr& cloud_msg);
};
/**
* Callback that applies Moving Least Squares smoothing to the point cloud
*/
void MovingLeastSquares::cloudCallback (const sensor_msgs::PointCloud2ConstPtr& cloud_msg)
{
// Container for original & filtered data
pcl::PCLPointCloud2 cloud;
// Convert to PCL data type
pcl_conversions::toPCL(*cloud_msg, cloud);
// Convert to dumbcloud
pcl::PointCloud<pcl::PointXYZ>::Ptr dumb_cloud (new pcl::PointCloud<pcl::PointXYZ> ());
//pcl::MsgFieldMap field_map;
//pcl::createMapping<pcl::PointXYZ>(cloud_msg->fields, field_map);
//pcl::fromPCLPointCloud2<pcl::PointXYZ>(cloud, *dumb_cloud);
pcl::fromPCLPointCloud2<pcl::PointXYZ>(cloud, *dumb_cloud);
// Create a KD-Tree
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
// Output has the PointNormal type in order to store the normals calculated by MLS
pcl::PointCloud<pcl::PointNormal> mls_points;
// Init object (second point type is for the normals, even if unused)
pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
mls.setComputeNormals (true);
// Set parameters
mls.setInputCloud (dumb_cloud);
mls.setPolynomialFit (true);
mls.setSearchMethod (tree);
mls.setSearchRadius (_search_radius);
// Reconstruct
mls.process (mls_points);
// Convert from dumbcloud to cloud
pcl::PCLPointCloud2 cloud_filtered;
pcl::toPCLPointCloud2(mls_points, cloud_filtered);
// Convert to ROS data type
sensor_msgs::PointCloud2 output;
pcl_conversions::moveFromPCL(cloud_filtered, output);
// Publish the data
pub.publish (output);
}
/**
* Main
*/
int main (int argc, char** argv)
{
// Initialize ROS
ros::init (argc, argv, "pcl_mls");
ros::NodeHandle nh("~");
// Read optional leaf_size argument
double search_radius = 0.03;
if (nh.hasParam("search_radius"))
{
nh.getParam("search_radius", search_radius);
ROS_INFO("Using %0.4f as search radius", search_radius);
}
// Create our filter
MovingLeastSquares MovingLeastSquaresObj(search_radius);
const boost::function< void(const sensor_msgs::PointCloud2ConstPtr &)> boundCloudCallback = boost::bind(&MovingLeastSquares::cloudCallback, &MovingLeastSquaresObj, _1);
// Create a ROS subscriber for the input point cloud
MovingLeastSquaresObj.sub = nh.subscribe<sensor_msgs::PointCloud2> ("/input", 10, boundCloudCallback);
// Create a ROS publisher for the output point cloud
MovingLeastSquaresObj.pub = nh.advertise<sensor_msgs::PointCloud2> ("/output", 10);
// Spin
ros::spin ();
}
This compiles fine for me, but I don't get any points coming through the output topic. Can anyone tell me what I'm doing wrong? I suspect it is something to do with the conversions at line 46 or line 74.
Thank you!
|
I am trying to understand the control of the quadrotor in 2 dimensions from the Penn course on aerial robotics. The attached image describes the linearization of the control efforts assuming that the quadrotor is near the hover position. Everything seems fine to me except Equation 7. The control effort $u_1$ makes sense as $u_1 = mg + m\ddot{z}_c$. The original equation $\ddot{z} = -g + u_1/m$ is solved for $u_1$, and $\ddot{z}_c$ (the actual $z$ acceleration) is used.
However, the equation for $u_2$ goes from $\ddot{\phi} = u_2/I_{xx}$ to $u_2 = I_{xx}\ddot{\phi}_T(t)$, where $\ddot{\phi}_T(t)$ is the desired value of the $\phi$ acceleration instead of the actual value of the $\phi$ acceleration.
Can anyone explain this to me?
EDIT: For further clarity, let me try rephrasing this question.
We start with three simplified, linearized equations:
$$
\ddot{y} = -g * \phi
$$
$$
\ddot{z} = -g + \frac{u_1}{m}
$$
$$
\ddot{\phi} = \frac{u_2}{I_{xx}}
$$
For $u_1$, we solve to get:
$$
u_1 = m\ddot{z} +mg
$$
and since our control effort is defined as:
$$
\ddot{r}_c = \ddot{r}_T(t) + k_{v}(\dot{r}_T-\dot{r}) + k_{p}(r_T-r)
$$
we plug in the Z part of this control effort and get
$$
u_1 = m\ddot{z} +mg
$$
$$
u_1 = m\left[\ddot{z}_T(t) + k_{p,z}(z_T(t)-z) + k_{v,z}(\dot{z}_T(t)-\dot{z})\right] + mg
$$
In this case, when we took the Z part of the PD control effort, we replaced $\ddot{z}$ with the formula for $\ddot{z_c}$ (our control effort)
However, that is not what happens when solving for u2. We start off
$$
u_2 = I_{xx}*\ddot{\phi}
$$
This time, when we replace $\ddot{\phi}$, I would think it would be replaced the same way as above, using $\ddot{\phi_c}$ our control effort. So it would look like this:
$$
u_2 = I_{xx}*\ddot{\phi_c}
$$
$$
u_2 = I_{xx} [\ddot{\phi_T}(t) + k_{v,\phi}(\dot{\phi_T}(t)-\dot{\phi}) + k_{p,\phi}(\phi_T(t)-\phi)]
$$
Instead, we do this:
$$
u_2 = I_{xx}*\ddot{\phi_T}
$$
$$
u_2 = I_{xx} [\ddot{\phi_c}(t) + k_{v,\phi}(\dot{\phi_c}-\dot{\phi}) + k_{p,\phi}(\phi_c-\phi)]
$$
If we go back to our basic PD control effort formula, this contradicts it. With this replacement, we are saying that:
$$
\ddot{\phi_T} = \ddot{\phi_c}(t) + k_{v,\phi}(\dot{\phi_c}-\dot{\phi}) + k_{p,\phi}(\phi_c-\phi)
$$
which is not true because
$$
\ddot{r_c} = \ddot{r_T}(t) + k_{v}(\dot{r_T}-\dot{r}) + k_{p}(r_T-r)
$$
which means that the real solution for $\ddot{\phi_T}$ would be
$$
\ddot{\phi_T} = \ddot{\phi_c}(t) - k_{v,\phi}(\dot{\phi_c}-\dot{\phi}) - k_{p,\phi}(\phi_c-\phi)
$$
That is my confusion!
|
Hi, I'm having a problem with inverse kinematics using the RVC toolbox in MATLAB for an RRR manipulator.
L_1 = 20;
L_2 = 50;
L_3 = 40;
L(1) = Link([0 L_1 0 pi/2]);
L(2) = Link([0 0 L_2 0]);
L(3) = Link([0 0 L_3 0]);
Robot = SerialLink(L);
Robot.name = 'RRR_Robot';
T = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1];
J = Robot.ikine(T, [0,0,0], [1 1 1 0 0 0]) *180/pi;
Error using SerialLink/ikine (line 164) Number of robot DOF must be >= the same number of 1s in the mask matrix
Can someone explain to me why my code is not working and how to fix it?
|
I've used 4 sensors & a standard smart robot chassis kit with geared motors & L298 motor driver.
What is the difference between turning in:
One wheel stopped, one wheel moves.
Both wheels move, but in opposite directions.
What is the best way to go in a curve?
What is the best way to turn 90°?
|
I am going to build a real-time monitoring system for medical care, and ultimately the system should make phone calls according to the severity of the result. I would like to know whether it is better to perform real-time signal processing on a Raspberry Pi or on an STM32.
The processed signal is an ECG and the processing will include denoising, segmentation, feature extraction, and classification. I am using an ECG acquisition sensor (AD8232). While doing my research I found that the STM32 provides a DSP unit (digital signal processor), that the Raspberry Pi is not well suited for real-time activities because it has no timer and so you need to add other components, and also that there are real-time operating systems such as RTAI (https://www.rtai.org/) that should work on the Raspberry Pi's OS.
How can I choose?
Another question: does working on the OS of the Raspberry Pi make this count as an embedded project?
|
So here is my problem. I need to grab an object with my robotic arm, but I don't know the position of the object. The only thing I know is the angle between the object and the gripper. You will maybe understand better from this image.
So my question is: how do I make my robotic arm approach the possible position of the object in a straight line? I tried inverse kinematics but it didn't work, because I don't know the position of the object, although I tried setting the position of the object to the end of the gripper. So I think IK is not the right way.
|
I'm planning to set up ROS and its dependencies in a conda environment (similar to virtualenv). I've read a few posts where they state it's not going to be simple. Did any of you guys do it? If you did, can you please share the steps? Thanks in advance!
|
I want to implement deep reinforcement learning using a UR5 robot. A little research told me nowadays researchers are using openAI gym, Mujoco, rllab as their frameworks. The thing is, I want to train my UR5 robot in virtual environment and transfer the learning to my real robot. The task can be peg insertion.
Previously I worked on a project using ROS with the UR5 robot and a Kinect. In that project, we are using MoveIt for motion planning. Everything has already been set up and I'm quite familiar with ROS and MoveIt.
Do you think I should bother modeling the UR5 robot and adding an RGBD camera in Mujoco (I am not sure if I can do this in Mujoco), then training it using the OpenAI Gym framework, and finally transferring the learning to the real robot using ROS? Or should I stick with ROS and MoveIt while incorporating OpenAI Gym only as my reinforcement learning framework? I just don't see why I need Mujoco if I can use MoveIt and Gazebo for modeling, which is already done. Thanks for your advice on this.
|
To my understanding, appearance-based SLAM uses only information coming from sensors for mapping and localization. It completely discards control information from actuators. Am I correct?
|
I am developing a GPS localizer using accelerometer and gyroscope sensor values. For more accuracy, I want to calculate the sensor biases, so I already implemented the accelerometer calculation in MATLAB using an input vector $a$, and it looks like this:
function [sigma_out, mean_out, bias_out] = aCalibration (a)
sigma_out = std(a, 0, 2);
mean_out = mean(a, 2);
% The direction of the mean acceleraion is assumed to be the direction
% of gravity (thus down). Any other component is considered bias. This
% is by no means an acurate representation of the biases.
bias_out = mean_out - mean_out / norm(mean_out) * 9.81;
For the gyroscope I use an input vector $w$, so I calculate the standard deviation and mean like this:
function [w_sigma, w_mean, w_bias] = wCalibration (w)
% TODO: implement the sigma/mean/bias computation
w_sigma = std(w,0,2);
w_mean = mean(w,2)
w_bias = [0; 0; 0];
How do I calculate the gyroscope bias $b_w$ now? I don't know; I think I still need to consider some angular velocity constants, but I don't know which ones... :(
Thanks for the help!
|
In general, I know that there is no difference between the control kinematics equations of a 2WD and a 4WD mobile base.
But is there really a difference between the inverse kinematics equations of a two-wheel differential mobile base and a tracked/continuous-track differential robot (tank treads)?
I am using this inverse kinematics equation for a differential drive:
where 'r' is the wheel radius, 'd' is the robot width, 'v' is the linear velocity, 'w' is the angular velocity, and thetaR and thetaL are the desired left/right wheel RPMs.
For autonomous driving, I use the DWA planner, which sends v, w to my driver (whose inverse kinematics is as above), and my driver calculates thetaR/thetaL for each posted v, w.
For manual driving, I use several teleop nodes, all of which send v, w too.
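For completeness, here is that inverse mapping sketched in Python with the same symbols (r = wheel radius, d = robot width; the numbers are made-up examples):
import math

r, d = 0.05, 0.40   # wheel radius and robot width [m] (made-up)

def wheel_speeds(v, w):
    w_right = (2.0 * v + w * d) / (2.0 * r)   # right wheel [rad/s]
    w_left  = (2.0 * v - w * d) / (2.0 * r)   # left wheel  [rad/s]
    to_rpm = 60.0 / (2.0 * math.pi)
    return w_right * to_rpm, w_left * to_rpm

print(wheel_speeds(0.3, 0.0))   # straight: both wheels equal
print(wheel_speeds(0.0, 1.0))   # pure rotation: equal magnitude, opposite sign
My question is whether the same formula can be used unchanged for tank treads, or whether the track-to-ground slip means 'd' (or something else) has to be treated differently.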
|
Over the past weeks I have been trying to figure out how to use the information from a Hall sensor to control a DC motor. Basically, this is the motor I'm using: https://www.servocity.com/32-rpm-hd-premium-planetary-gear-motor-w-encoder I chose this motor because the torque is enough for my application. Basically, I would like to use the signal from the Hall sensor to fix several positions (10 positions) and make something similar to a "stepper motor". Briefly, my goal is to create a system that I could start, have it go to a "home position", and then wait for "go to position 1" or "go to position 10"; I would like to fix absolute positions using the Hall sensor that came with the motor (the rotational speed would be slow). Theoretically this seems easy, but right now I'm stuck on how to fix the absolute position. And probably my real question is: using this motor and a limit switch with a roller lever, is it possible to make this kind of application? Thanks a lot for reading this post.
PS: In case you wonder why I omit other technical details like the board, the driver for the motor, and the connections, it is because I am more interested in the odds of making this application work with this motor, or whether it would be better to make an optical encoder with a shaft disk that senses 10 positions. To be honest, I'm looking for the most parsimonious solution.
|
I have the following problem: given two Input Vectors $x = \begin{pmatrix}x\\y\\z\\v_x\\v_y\\v_z\\q_1\\q_2\\q_3\\q_4\end{pmatrix}$, $u = \begin{pmatrix}a_x\\a_y\\a_z\\w_x\\w_y\\w_z\end{pmatrix}$ time $t$ and gravity $g$ calculate $\dot{x}$. These vectors contain values recorded from an IMU.
For $\dot{x}$ I simply considered the equations of motion:
$\dot{x} = f(x,u) = \begin{pmatrix}a_x*t+v_x\\a_y*t+v_y\\a_z*t+v_z\\a_x\\a_y+g\\a_z\\\dot{q_1}\\\dot{q_2}\\\dot{q_3}\\\dot{q_4}\end{pmatrix}$.
Each $\dot{q}_i$ is calculated using a given formula, which is okay so far.
Is my IMU model for $x$ and $\dot{x}$ correct?
I need to calculate the Jacobian $A=\frac{\partial \dot{x}}{\partial x}$.
I don't really manage to calculate $A$; it has to be $A \in \mathbb{R}^{10\times10}$.
But I always fail at calculating it, especially with respect to the quaternions.
My first 6 lines look like this:
$\left[ {\begin{array}{cc}
0 & 0 & 0 & a_x & 0 & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & a_y & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & a_z & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & 0 & ? & ? & ? & ?\\
0 & 0 & 0 & 0 & 0 & 0 & ? & ? & ? & ?\\
\end{array} } \right] $
Thanks for the help!
EDIT:
As I just found out, every derivative $\frac{\partial \dot{q}}{\partial q}=0$. So I guess it's:
$A=\left[ {\begin{array}{cc}
0 & 0 & 0 & a_x & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & a_y & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & a_z & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\end{array} } \right]$ ?
|
I am seeking to collect data for my IB Physics IA (internal assessment). My question seeks to establish the relationship between speed and torque by adjusting gear ratios. (I am aware that gear ratios add mass and thus increase the stall torque and make the data less precise.) I am using a Vex gear shaft and a Vex motor, along with Vex gears and sprockets.
I am aware of the basics of torque: (Power = torque * angular velocity) and (Torque = Force * radius). Also, I am aware that: (Centripetal Force = Velocity squared * mass / radius).
I can easily calculate the tangential or angular velocity of the moving shaft, but need to measure the torque.
I have already researched possible measurement methods that could do this:
1. Torque sensor or gauge - Very expensive, but the most precise of the devices I have researched. Will not buy; above $100.
2. Stress gauge - Two problems: (1) it seems to be a device that will only work on shafts with large diameters - a Vex shaft is square with a side length of 3-4 mm. Would it even work? (2) stress is very loosely related to torque. It is cheap, however, which is desirable if it is accurate.
3. Torque wrench - In theory, you would use it until the shaft no longer moves, and you can find the torque that way. It is harmful to the motor (and thus the motor would no longer be a consistent control). I am unable to find a small torque wrench - can anyone find one that is precise enough? This would, in theory, be cheap. Could I make my own, and how - if this is a good idea?
4. Similar to 3 - a resistance encoder. Not sure if these exist, but it might work well. Any thoughts?
5. Add mass to the end of the shaft until it stops - With the centripetal force equation (or a slightly modified version of it) I can calculate the necessary tangential force required to rotate the shaft (and thus the torque, by multiplying by the radius). (Note: I am not certain that using the centripetal force equation would be correct - is there a better equation?) Cheap, but perhaps less accurate.
Other ideas?
What would be the best method for measuring the torque of the shaft? As I am collecting data to analyze, accuracy is the highest preferred trait, although I do not want to spend a lot of money.
If necessary, I can use an Arduino to collect data from electronic devices.
Any help would be very appreciated. Thank you. Feel free to ask more specific questions.
|
Please help me resolve some questions about this article:
1) "A Robust Method for Computing Vehicle Ego-motion", Gideon P. Stein et al.
According to this paper we can find three components of motion (one translation and two rotational components) using model from the article:
2) "The interpretation of a moving retinal image" Longuet-Higgins (1980).
As far as I can tell, equation (1) from the first paper doesn't correspond to the equation from the second. There are velocities in the second paper, which seems more correct than the first, where the measurement units of the left and right sides do not match.
The second question is: how can we find the motion parameters (the translation and two rotations)? In paper 1) they suggest minimizing a cost function using gradient descent. But how can we do this? The function doesn't depend directly on the motion variables $t_z, w_x, w_y$.
And the last question: if we have two subsequent frames, $I_1$ at time $t_1$ and $I_2$ at time $t_2$, at what point in time $t$ should we consider the estimated motion parameters? I guess at $(t_1 + t_2)/2$, but I'm not sure.
Please help me with these questions.
|
I've got an idea for a simple robot (with an Arduino or something similar) that will play noughts-and-crosses (aka tic-tac-toe) on a chalkboard, so that kids can play against it.
It will use a couple of servos to move an articulated arm holding a piece of chalk which will draw the O's and X's. Then the opponent will draw an O or X and press a button to tell the robot to make the next move.
The tricky bit is: how will the robot know where the opponent made their mark? I could use a camera and some sort of motion detection software, but that sounds complex.
I was wondering if there's a way to detect the touch of the chalk on the board, perhaps using something like an MTCH101 chip -- one per square on the board. The board will have a wooden wipe-clean surface, but the chips could be embedded just under the surface.
Does anyone have any suggestions?
Edit 4 Jan 2018: Perhaps a force sensitive resistor would work -- such as http://www.trossenrobotics.com/store/p/6496-1-5-Inch-Force-Sensing-Resistor-FSR.aspx
|
I am unable to solve this problem. I tried to run the catkin_make command in the catkin_ws directory:
yograj@yograj-Inspiron-5537:~/catkin_ws$ catkin_make
Base path: /home/yograj/catkin_ws
Source space: /home/yograj/catkin_ws/src
Build space: /home/yograj/catkin_ws/build
Devel space: /home/yograj/catkin_ws/devel
Install space: /home/yograj/catkin_ws/install
####
#### Running command: "cmake /home/yograj/catkin_ws/src -DCATKIN_DEVEL_PREFIX=/home/yograj/catkin_ws/devel -DCMAKE_INSTALL_PREFIX=/home/yograj/catkin_ws/install -G Unix Makefiles" in "/home/yograj/catkin_ws/build"
####
-- Using CATKIN_DEVEL_PREFIX: /home/yograj/catkin_ws/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/kinetic
-- This workspace overlays: /opt/ros/kinetic
-- Using PYTHON_EXECUTABLE: /home/yograj/anaconda2/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/yograj/catkin_ws/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
ImportError: "from catkin_pkg.package import parse_package" failed: No module named catkin_pkg.package
Make sure that you have installed "catkin_pkg", it is up to date and on the PYTHONPATH.
CMake Error at /opt/ros/kinetic/share/catkin/cmake/safe_execute_process.cmake:11 (message):
execute_process(/home/yograj/anaconda2/bin/python
"/opt/ros/kinetic/share/catkin/cmake/parse_package_xml.py"
"/opt/ros/kinetic/share/catkin/cmake/../package.xml"
"/home/yograj/catkin_ws/build/catkin/catkin_generated/version/package.cmake")
returned error code 1
Call Stack (most recent call first):
/opt/ros/kinetic/share/catkin/cmake/catkin_package_xml.cmake:74 (safe_execute_process)
/opt/ros/kinetic/share/catkin/cmake/all.cmake:151 (_catkin_package_xml)
/opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:20 (include)
CMakeLists.txt:52 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/yograj/catkin_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/yograj/catkin_ws/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
|
From the figure above, suppose a three-dimensional vector $p(0)$ is rotated by an angle $\theta$ about the fixed rotation axis $\omega$ to $p(\theta)$. Here we assume all quantities are expressed in fixed-frame coordinates and $\Vert \omega \Vert = 1$. This rotation can be achieved by imagining that $p(0)$ is subject to a rotation about $\omega$ at a constant rate of $1$ rad/sec, from time $t = 0$ to $t = \theta$. Let $p(t)$ denote this path. The velocity of $p(t)$, denoted $\dot{p}$, is then given by
$$\dot{p} = \omega \times p$$
Please explain in detail as to how to derive the above equation. Thank you.
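For context, the rough argument I have pieced together so far (and would like to see made rigorous) is this: over a small time interval the tip of $p(t)$ moves along a circle of radius $\Vert p \Vert \sin\phi$ about the axis $\omega$, where $\phi$ is the angle between $\omega$ and $p(t)$, in the direction perpendicular to both $\omega$ and $p(t)$. Since the rotation rate is $1$ rad/sec and $\Vert \omega \Vert = 1$,
$$\Vert \dot{p} \Vert = \Vert \omega \Vert \, \Vert p \Vert \sin\phi = \Vert \omega \times p \Vert,$$
and the direction of $\dot{p}$ is that of $\omega \times p$, which together suggest $\dot{p} = \omega \times p$.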
|
I am planning a project which compares different techniques for 3D mapping of a given setup. I understand that there are many algorithms to accomplish this, but due to a non-existent hardware budget, I have to compare 3D techniques/algorithms that can be evaluated with software and datasets only. I have done a fair share of research that is still ongoing, but I was looking for assistance from people who may already have knowledge of this and can help me narrow down my search.
In summary, I am looking for 3D mapping techniques/algorithms that can be evaluated using only software and datasets, in order to compare them on various performance and application parameters. Since I have to compare performance, all techniques should be applied to the same dataset.
I would appreciate any information, research papers, links, or anything else that could point me in the right direction.
|
Can anyone please explain what a publisher and a subscriber are when we talk about ROS?
Where does each of them run, and what do they do? Any simple example is appreciated.
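To make the "simple example" request concrete, the kind of minimal rospy publisher/subscriber pair I am hoping to have explained looks like this (topic and node names are just placeholders; each script would run as its own node after starting a roscore):
#!/usr/bin/env python
# talker.py - minimal publisher node: sends a string on the "chatter" topic at 1 Hz
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker')
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish("hello world")
        rate.sleep()

if __name__ == '__main__':
    talker()

#!/usr/bin/env python
# listener.py - minimal subscriber node: prints every message received on "chatter"
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("I heard: %s", msg.data)

def listener():
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, callback)
    rospy.spin()  # keep the node alive and process callbacks

if __name__ == '__main__':
    listener()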
|
According to this video from Khan Academy about electronics of a DVD drive the position of the sled (which moves the reading head) is controlled by a geared DC motor by using only timing and one end-stop switch.
I've also taken apart some DVD drives, and where there was a DC motor (instead of a stepper) I did not find any sensor as feedback for closed-loop control.
I could not find any reference for this online. If this is really the case, can anybody point me to a page/site with an explanation? Why is this not used in hobby projects?
Edit
After the feedbacks I would focus on one specific question: is it possible that a DVD drive can operate with open-loop control and a DC motor?
|
I followed the instructions mentioned here. Everything worked fine, but at the last stage when I try to run:
roslaunch px4 multi_uav_mavros_sitl.launch
I get the following:
... logging to /home/yograj/.ros/log/40d4394e-0a3d-11e8-a1f1-3c77e68de09b/roslaunch-yograj-Inspiron-5537-12334.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
Usage: xacro.py [options] <input>
xacro.py: error: expected exactly one input file as argument
while processing /home/yograj/src/Firmware/launch/single_vehcile_spawn.launch:
Invalid <param> tag: Cannot load command parameter [iris_1_sdf]: command [/opt/ros/kinetic/share/xacro/xacro.py /home/yograj/src/Firmware/Tools/sitl_gazebo/models/rotors_description/urdf/iris_base.xacro rotors_description_dir:=/home/yograj/src/Firmware/Tools/sitl_gazebo/models/rotors_description mavlink_udp_port:=14560 > iris_1.urdf ; 'gz sdf -p iris_1.urdf'] returned with code [2].
Param xml is <param command="$(arg cmd)" name="$(arg vehicle)_$(arg ID)_sdf"/>
The traceback for the exception was written to the log file
Launch file
<launch>
<!-- Posix SITL environment launch script -->
<arg name="x" default="0"/>
<arg name="y" default="0"/>
<arg name="z" default="0"/>
<arg name="R" default="0"/>
<arg name="P" default="0"/>
<arg name="Y" default="0"/>
<arg name="est" default="ekf2"/>
<arg name="vehicle" default="iris"/>
<arg name="ID" default="1"/>
<arg name="rcS" default="$(find px4)/posix-configs/SITL/init/$(arg est)/$(arg vehicle)_$(arg ID)"/>
<arg name="mavlink_udp_port" default="14560" />
<arg name="cmd" default="$(find xacro)/xacro.py $(find px4)/Tools/sitl_gazebo/models/rotors_description/urdf/$(arg vehicle)_base.xacro rotors_description_dir:=$(find px4)/Tools/sitl_gazebo/models/rotors_description mavlink_udp_port:=$(arg mavlink_udp_port) > $(arg vehicle)_$(arg ID).urdf ; 'gz sdf -p $(arg vehicle)_$(arg ID).urdf'" />
<param command="$(arg cmd)" name="$(arg vehicle)_$(arg ID)_sdf" />
<node name="sitl_$(arg ID)" pkg="px4" type="px4" output="screen"
args="$(find px4) $(arg rcS)">
</node>
<node name="$(arg vehicle)_$(arg ID)_spawn" output="screen" pkg="gazebo_ros" type="spawn_model"
args="-sdf -param $(arg vehicle)_$(arg ID)_sdf -model $(arg vehicle)_$(arg ID) -x $(arg x) -y $(arg y) -z $(arg z) -R $(arg R) -P $(arg P) -Y $(arg Y)" respawn="false"/>
</launch>
<!-- vim: set et ft=xml fenc=utf-8 ff=unix sts=0 sw=4 ts=4 : -->
|
I'm tracking the state of a robot using an EKF. The state is defined by:
$$(x,y,\theta)$$
where $x$ and $y$ are the coordinates in the ground-plane and $\theta$ the heading angle.
I initialized the covariance matrix $P$ to the following values:
$$P
=
\begin{bmatrix}
\sigma_{xx}^2 & 0 & 0 \\
0 & \sigma_{yy}^2 & 0 \\
0 & 0 & \sigma_{\theta \theta}^2 \\
\end{bmatrix}
$$
where $\sigma_{xx}=1$, $\sigma_{yy}=1$ and $\sigma_{\theta \theta }=0.1$
In the system I'm using, valid measurements are rarely obtained, so the predicted covariance matrix becomes very large.
My question concerns the angle uncertainty. The angle is predicted from angle increments $\Delta_{\theta}$, so at each time step $k$:
$$\theta=\theta+\Delta_{\theta,k}$$
We wrap the value of the obtained angle so that $\theta\in[-\pi,\pi]$.
Does the uncertainty also have to be wrapped to the interval $[-\pi,\pi]$, i.e., should I make $\sigma_{\theta}\in[-\pi,\pi]$?
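For reference, the wrapping I currently apply to the angle itself is just the sketch below; whether anything similar should be done to $\sigma_{\theta}$ (or the corresponding covariance entry) is exactly what I am asking:
# Wrap the predicted heading to [-pi, pi]; the example values are made up.
import math

def wrap_to_pi(angle):
    """Wrap an angle in radians to the interval [-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

theta = 3.0           # previous heading estimate, rad
delta_theta_k = 0.5   # heading increment at time step k, rad
theta = wrap_to_pi(theta + delta_theta_k)
print(theta)          # about -2.78 rad instead of 3.5 rad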
|
I am building a robot that uses an Intel NUC (running Arch Linux) as its main processing unit to run some machine learning and ROS programs. One of the requirements for the robot is that it must fully turn off and on with a single main power switch. Doing so cuts power to the entire robot, including the NUC. I am worried that doing so will damage the NUC. Is there any safe way to shut down the computer without having to turn it off via software?
**One more requirement for the robot is that its electronics are sealed in an airtight box, therefore the NUC cannot be accessed to turn it off via the main power button.
|
I am currently working on a project that requires me to obtain depth information and point cloud map of objects in an indoor environment ( roughly twice the size of a normal room). I would like to know -
Would a set of >=2 stereo cameras (manually calibrated wireless sricam SP019 IP cameras) be a good choice to produce a map for the entire room ? I will be using OpenCV 3.3.1 for the calibration and depth mapping.
If I wanted to increase the range of my stereo pair, would I have to:
a) increase the baseline distance and keep them parallel? There is some discussion here about the baseline and its effect on accuracy. However, there is no mention of the alignment of the two cameras.
b) align them in a tilted fashion to face each other? Would this be bad for image rectification?
What sort of min-max range can be achieved using >=2 wireless IP cameras in a stereo-vision setup?
I would like to evaluate accuracy of depth measurements obtained from the IP cameras. I am planning to use Kinect depth images (for short ranges 0.8-5 metres) as a reference.
What are better commercial solutions for obtaining depth map for longer ranges ? I have seen a few discussions on StereoLabs Zed cameras (see here and here) but none that are convincing enough to make me buy it, as it is quite expensive.
I am aware that similar questions have been asked, but they did not contain any discussion of IP cameras.
I am interested in the personal experience of people who may have done something similar.
Thank you very much for reading.
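For context on the range question above, the relation I am working from is the usual pinhole-stereo one: depth $Z = fB/d$ for disparity $d$ (pixels), focal length $f$ (pixels) and baseline $B$ (metres), with depth error roughly $\Delta Z \approx Z^2/(fB)\,\Delta d$. A small sketch with made-up numbers:
# Rough pinhole-stereo range/resolution estimate; all numbers are assumptions.
f_px = 700.0          # focal length in pixels
baseline = 0.12       # baseline in metres
min_disparity = 1.0   # smallest reliably detectable disparity, pixels

max_range = f_px * baseline / min_disparity   # Z = f*B/d
print(f"max usable range ~ {max_range:.1f} m")

for Z in (2.0, 5.0, 10.0):
    dZ = Z**2 / (f_px * baseline)   # depth error for a 1-pixel disparity error
    print(f"at {Z:.0f} m, 1 px disparity error ~ {dZ:.2f} m depth error")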
|
How can I map a 3D environment using only a 2D lidar? The lidar would be hand held and it would have 6 DoF. So if I move it in arbitrary motion in all 6DoF, I expect my algorithm to generate a 3D map of whatever part of the environment was visible to the lidar.
I've come across ways to use IMU, monocular camera or encoder motor for state estimation that is required for mapping, but I want a solution that doesn't require any other sensor except a 2D lidar like RPLidar.
|
The paper A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles by Alonzo Kelly uses the term actuation space.
Feedforward solves the historically significant clothoid generation problem trivially. C space combinatoric explosion is solved by planning in actuation space.
I have attempted to search for this term via Google. However, I can only find applications of actuation spaces, not what an actuation space actually is.
A definition or source for further investigation would be greatly appreciated.
|
As stated in Probabalistic Robotics, the proof for correctness of a Bayesian Filter relies on the fact that
$$p(x_{t-1}|z_{1:t-1},\ u_{1:t}) = p(x_{t-1}|z_{1:t-1},\ u_{1:t-1})$$
In order to justify this, they say
$u_t$ can be safely omitted ... for randomly chosen controls
Why is that required? Isn't that true because the control input at time step $t$ cannot possibly affect the state at time $t-1$?
|
I am trying to make a quadcopter that receives commands from my android mobile phone. I currently have the following:
Raspberry Pi 3 Model B running on Raspbian.
Pi Camera
Seriously Pro F3 Flight Controller
I am planning on using Bluetooth to send commands to my Raspberry Pi, but I am unable to figure out how to send commands from the Raspberry Pi to the flight controller. I am thus asking: how do I send commands (for movement and throttle) from my Raspberry Pi to the flight controller?
Link to Flight Controller Documentation:
http://seriouslypro.com/files/SPRacingF3-Manual-latest.pdf
This is the project I am trying to make:
http://www.instructables.com/id/The-Drone-Pi/
This project is made with a MultiWii flight controller, while I want to make it with an SPF3 flight controller, and I don't know how to approach accomplishing it.
I'm asking a similar question:
Quadcopter controlled by Raspberry Pi
but with a SPF3 Flight controller rather than CC3D Flight Controller and with a little more explanation of terms.
|
Please help me to solve my problem:
This is the transfer function: $G(s)=10\frac{(s^2+1)(s-10)}{s(s+0.1)(s^2+2s+100)}$
I calculated the whole Bode diagram correctly apart from the phase of the complex zeros $(s^2+1)$. At this point I would, as usual, add +180°, and so does Wolfram's software; in fact my phase result is equal to this one:
The problem is that my professor uses MATLAB, which gives -180°, and therefore that should be the right solution.
I don't know which one is right and why; that's why I'm asking.
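For what it is worth, this is how I was planning to cross-check the phase numerically with SciPy (purely numerical, so any sign-convention difference between the tools should show up around $\omega = 1$):
# Numerical Bode check of G(s) = 10 (s^2+1)(s-10) / [ s (s+0.1)(s^2+2s+100) ]
import numpy as np
from scipy import signal

num = 10 * np.polymul([1, 0, 1], [1, -10])   # 10*(s^2+1)*(s-10)
den = np.polymul([1, 0.1, 0], [1, 2, 100])   # s*(s+0.1)*(s^2+2s+100)
sys = signal.TransferFunction(num, den)

w = np.logspace(-2, 3, 2000)
w, mag, phase = signal.bode(sys, w)          # mag in dB, phase in degrees
i = np.searchsorted(w, 1.0)
print(phase[i - 2], phase[i + 2])            # phase just below and above w = 1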
|
As SE(3) does not have any bi-invariant metric, what are the available pseudo-metrics for comparing two homogeneous matrices representing rigid body motions? In other words, how do we compare two dual quaternions?
|
I'm apart of a group at my University where we would like to build a drone/UAV and constantly add and modify parts on it.
We were thinking of using an Arduino or raspberry pi, but we really don't want to mess with PID loops. However, with these microcontrollers we can easily add multiple sensors and a GPS. We can build the frame, do soldering, and everything else, but we would like some type of pre-made PID loop or flight controller so we don't need to worry about the drone falling from the sky.
Would anybody have any recommendations for us? Maybe flight controllers that we can add anything else on top of, and then change the code. Or a pre-built drone where we can modify its code and add sensors.
Any help is extremely appreciated. Thanks!
|
We have a tunnel, and the distance from the walls to the robot is about 30 cm when the robot is centered; the tunnel curves to the right. The robot has an infrared sensor on its left and one on its right, and has an ultrasonic sensor at its front (so basically its eyes). How can I make the robot turn appropriately while it's moving so that it stays centered in the tunnel? I just need an idea for this algorithm because I can't find any. All the infrared sensors can do is measure the distance to the walls.
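To make the question concrete, the kind of thing I had in mind is a simple proportional rule on the difference between the two side distances, roughly like the sketch below (the gain and sign convention are assumptions); what I do not know is whether this is the right approach, and how to combine it with the front ultrasonic sensor in the curve:
# Sketch of a proportional wall-centering rule; gain and sign convention assumed.
KP = 2.0  # steering gain, to be tuned

def centering_steer(left_dist, right_dist):
    # error > 0 means the robot is closer to the right wall -> steer left
    error = left_dist - right_dist   # zero when the robot is centered
    return KP * error                # steering command

# Example: 0.25 m from the left wall, 0.35 m from the right wall
print(centering_steer(0.25, 0.35))   # negative -> steer right, back toward the centre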
|
I am interested to know the basic principle of the "Delayed KF" when considering an underwater robot aiming to localize itself using an LBL system.
More practically, in order to calculate the distance to each anchor, the robot transmits a PING to the anchor and waits for the RESPONSE. This is time consuming as the sound velocity is very low (1500 m/s). Let's say it will take on the order of 5-10 seconds to complete such an iteration (large distances).
If the robot is static (during this operation, or the anchors are very close), I can recover the two-way travel time (TWTT) and after that the distance (knowing the sound velocity, divide by two, etc.). In that case I can include the range in my EKF navigation filter.
Assuming that the robot moves (1 m/s), this assumption no longer holds, as I cannot simply divide the TWTT by a factor of 2.
Searching online, I found discussions of the delayed EKF in which (in my understanding), given a measurement with two timestamps (the time it refers to and the time I received it), I re-run the filter from the time when the measurement was actually made.
Does this also hold for this case, and how can it be integrated with the robot's navigation filter?
Geometrically, finding the right distance based on the TWTT while accounting for the motion during that time seems hard to model. At least I have not found any paper on it in the literature.
Regards,
|
I have the following setup:
RC car chassis which can turn, go back and forward; both the turn rate and the acceleration can be controlled smoothly
Raspberry Pi 3
Two cameras so that depth map can be calculated on the fly for obstacle avoidance
Several microphones so that direction of the voice can be determined
Inertial measurement unit to keep the heading in the direction of the voice
Several infrared distance sensors
What I want to accomplish is a robot that will drive towards me when called. Voice recognition, structure from motion and handling the IMU are not a problem for me because I've done it before. What I find really problematic is: how do I actively avoid obstacles while trying to follow the voice? I assume that I can call the robot multiple times, i.e. when it gets confused. The robot will stop when a face is recognized by one of the cameras, therefore the distance to the target is initially unknown. In theory I could try to create a 3D point cloud using structure from motion, determine the robot's location within the pre-built map and use it for navigation; however, as far as I know, determining the robot's location within the map is a costly and uncertain computation (especially when its initial position is unknown).
What I would like to learn is: are there algorithms that allow the robot to stick to a given heading while avoiding obstacles, but without a map? I would be very grateful for scientific papers, blog posts or simply algorithm names if there are any available.
I recently read about bug algorithms, especially DistBug. This sounds like the most promising solution for me so far.
|
The figure shows an articulated arm kinematic model having 3 degrees of freedom. **Theta** represents the **joint angle**, **d** represents the **joint distance**, **alpha** represents the **link twist angle**, and **a** represents the **link length**.
|
Which distribution of ROS should I install on Ubuntu 17? I have searched the ROS distributions wiki but couldn't find any specification, and I also found that ROS Kinetic Kame is not suitable for Ubuntu 17.
|
I am a masters student specializing in Robotics. I have a bachelors degree in Mechanical engineering and hence a bit sloppy with programming languages. MATLAB is one thing I am proficient in. But robotics requires extensive programming, especially in C++ and Python.
From a career point of view, software development is one of the major skills to have. I have started learning C++ and OOP concepts mainly from EdX. However, I do not know how much to learn and what exactly to be proficient in terms of coding. I have talked to various professors and peers who suggest me take courses in programming and take online challenges in Hackerrank and Leetcode. I have looked at many of these sources and most of them approach coding from a software engineering point of view. I am not sure if that is the right way to learn to code from a robotics perspective.
I would really appreciate some pointers and directions on how to be proficient in software development for robotics. Working on projects is one thing and I have already begun working on ROS-Indigo. However, I would like a solid foundation in software development rather than half-baked knowledge.
The main thing that confuses me is that there are too many sources and I am not able to pin down which is the best source to learn from and exactly how much to learn. From a roboticist's career point of view, some guidance in this regard would be really helpful. Thanks.
|
Our Robotics Team (Go TuxedoPandas!) ordered a mix of regular and "Increased Rotation" servos from ServoCity.
However, it appears only the outside packaging is marked, and only with a sticker.
Is there any visual indication on the servo that it is Increased Rotation?
|
I want to check for singular configurations of a 7-dof robotic arm (RRRRRRR). I have found the geometric Jacobian and it is a 6x7 matrix. If my theoretical background is solid when the Jacobian loses a rank (in this case <6) the robot is at a singularity.
So one should check when the determinant of the Jacobian equals zero. In this case, since it is not a square matrix, I multiplied the Jacobian by its transpose and calculated that determinant in MATLAB.
My problem is that the elements of the matrix, well, those that are not equal to zero or 1, are terrible to look at. I mean, you can see a lot of cosine and sine terms which make the computation really complicated.
Is there a way to make things easier?
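In case the symbolic setup itself is relevant, this is a toy version (in SymPy rather than MATLAB) of the det(J*J') check I am describing, with a planar 2R arm standing in for my real 6x7 Jacobian:
# Toy version of the singularity check: det(J*J^T) for a planar 2R arm.
import sympy as sp

q1, q2, l1, l2 = sp.symbols('q1 q2 l1 l2', real=True)

# End-effector position of a planar 2R arm
x = l1*sp.cos(q1) + l2*sp.cos(q1 + q2)
y = l1*sp.sin(q1) + l2*sp.sin(q1 + q2)

J = sp.Matrix([x, y]).jacobian([q1, q2])
detJJt = sp.simplify((J * J.T).det())
print(detJJt)   # mathematically (l1*l2*sin(q2))**2, i.e. singular at q2 = 0 or pi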
I appreciate all the help,
Victor
|
I want to learn ros-kinetic from a virtual machine. I found that ros-indigo is available from nootrix; similarly, is there a virtual machine with ros-kinetic version available anywhere?
|
Suppose I've developed a monocular SLAM or odometry algorithm. Then I want to test it on some dataset, for example KITTI, TUM etc. How should I deal with the absolute scale in this way? Thanks.
|
I am new to the field of control systems, PID and robotics and I want to enhance my knowledge in the field. Can you recommend books where I can start learning about control system engineering (with mathematical and practical examples) so I can finally finish my project? (I am working on a quadcopter; I can find all the code online but can't understand it, and some people here told me that I should start reading on these subjects first.)
|
I'm looking to use a 3-phase motor without a gearbox for a robotics application. It will always work near stall, never rotating more than 360°. I'm looking for an arrangement that will produce maximum stall torque while drawing minimum current.
I tried both gimbal (high resistance) and BLDC (low resistance) motors, and yes, I could achieve high torque, but only at the cost of excessive current and overheating.
My question is both practical and theoretical: what motor or mode of operation should I choose to maximize stall torque, and what in principle can be done? Imagine I had my own motor factory: how would I approach building a high-stall-torque motor?
|
Are there specific places/websites where you read about the topics that are being researched in robotics, especially the newest research? I would like to know the places where the hottest topics in robotics are studied, for both theoretical and experimental work. The only place I know is the IEEE community. They are doing great, especially their magazine, but I'm curious whether there are any alternatives for robotics scientists. Please include journals.
|
I know this might sound like a stupid question, but I actually need help with this. So, I'm looking at this motor: https://www.servocity.com/2737-rpm-premium-planetary-gear-motor.
I have a raspberry pi, and was wondering how exactly I was supposed to control this. I know that I need a motor controller to drive this thing at a reasonable speed, but I need someone to point me in the right direction as to what type of motor controller I need. Thanks!
|
I am currently working on collision avoidance using potential fields and I have visualized it in WinForms.
Now I need to test the algorithm on a predefined trajectory, say a spline (or in a lane).
My question is:
Is it possible to do collision avoidance with the potential field method on a predefined trajectory?
Say the model car avoids the obstacle and returns to its trajectory to reach the goal, all within the Windows Forms simulation.
If so, what parameters and constraints should be considered?
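For reference, the potential-field update I am currently using is essentially the standard attractive-plus-repulsive step, roughly like the sketch below (gains, ranges and positions are placeholder values); the open question for me is how to add the "stay on the predefined spline/lane" requirement to it:
# Sketch of a basic attractive + repulsive potential-field step in 2D.
# All gains, distances and positions below are placeholder values.
import numpy as np

K_ATT = 1.0    # attractive gain toward the goal
K_REP = 0.5    # repulsive gain away from obstacles
D0 = 1.0       # obstacle influence distance, m
STEP = 0.05    # step size per update

def force(pos, goal, obstacles):
    f = K_ATT * (goal - pos)                    # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < D0:                       # only nearby obstacles repel
            f += K_REP * (1.0/d - 1.0/D0) / d**2 * (diff / d)
    return f

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([2.5, 0.2])]
for _ in range(200):
    pos = pos + STEP * force(pos, goal, obstacles)
print(pos)   # ends up near the goal after skirting the obstacle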
|
I am attempting to minimize the energy consumption used by an electric-motor-powered car given the speed restrictions:
$$\text{speed}: [v_{\text{min}},v_{\text{max}}]$$
Currently, we are utilizing a bang-bang controller where we tell the motor to run at full voltage until $v_\text{max}$ is reached. After this, the motor is cut. Once the speed approaches $v_\text{min}$ due to energy losses in the system, we will repeat this cycle.
We are striving to achieve the least energy consumption per unit distance. Is it worthwhile to implement some motion-profiling system, or would this just increase energy consumption?
|
I recently built a quadcopter and wrote a program to stabilise it.
The copter takes the height from a distance sensor and the angle from an MPU-6050 gyro and accelerometer with a complementary filter.
I currently face two problems:
Although the MPU-6050 is mounted on a vibration-absorbing platform, the angle is really bad. Some ideas to lessen the vibrations would be good.
The program never worked satisfactorily: every millisecond I check whether the copter is stabilised and at the right height. If not, it spins up the motors to correct. This should start the drone and let it hover at a certain height.
However, the drone starts up well, but shortly before take-off one side tilts up at an angle of approximately 45° and the drone drifts off. The program could never compensate for this.
Do you have any ideas or other thoughts? (This is my first electrical project, so very basic stuff could be wrong.)
Thanks for your help.
|
I'm using an MPU6050 for a custom flight controller and I am facing an issue when reading the values from the IMU. The data shown below represents the angular velocity in the y-axis. When the angular velocity is increasing or decreasing, the overall trend follows it, but there are random drops to 0 degrees/second. The end goal is to read from the accelerometer and gyroscope and calculate the current angular position of the device, but the inconsistent data may produce garbage angles.
The data below is taken when I am rotating the MPU6050 about its x-axis.
Is this behaviour normal for a gyroscope, or am I doing something wrong? If this is normal, is there a filter that I can apply to get more accurate values?
MPU6050 Datasheet: https://www.invensense.com/wp-content/uploads/2015/02/MPU-6000-Datasheet1.pdf
MPU6050 Register Map: https://www.invensense.com/wp-content/uploads/2015/02/MPU-6000-Register-Map1.pdf
Specs:
Clock speed: 16MHz internal
Communication: I2C (100KHz)
MCU: Nucleo 64 STM32f446RE
Data:
Sample of Data
# DPS
223 -6.442748
224 -2.076336
225 0.732824
226 1.664122
227 1.832061
228 -1.740458
229 -0.549618
230 5.679389
231 8.351145
232 -0.442748
233 12.045801
234 15.78626
235 19.725191
236 28.106871
237 -0.473282
238 31.709925
239 31.450382
240 -0.70229
241 -1.526718
242 29.007633
243 -1.862595
244 32.152672
245 -0.580153
246 -1.725191
247 40.229008
248 -0.687023
249 -1.587786
250 48.580154
251 51.61832
252 -0.381679
253 -0.183206
254 63.343513
Code:
while (1){
MPU_config();
MPU_getData();
}
void MPU_config(){
//turns off SLEEP and CYCLE mode (register 0x6B)
i2c_txData = 0x0;
HAL_I2C_Mem_Write(&hi2c1,i2c_devAddr,
(uint16_t)PWR_MGMT_1,1,&i2c_txData,1,HAL_MAX_DELAY);
// Gyro Configuration register (0x1B)
i2c_txData = 0x08; // fullscale +-500dps (0x08)
HAL_I2C_Mem_Write(&hi2c1,i2c_devAddr,
(uint16_t)GYRO_CONFIG,1,&i2c_txData,1,HAL_MAX_DELAY);
// Accelerometer Configuration (0x1C)
i2c_txData = 0x08; // fullscale +-4g
HAL_I2C_Mem_Write(&hi2c1,i2c_devAddr,
(uint16_t)ACCEL_CONFIG,1,&i2c_txData,1,HAL_MAX_DELAY);
}
void MPU_getData(){
uint8_t ugyrox_h=0, ugyrox_l=0;
signed char gyrox_h=0, gyrox_l=0;
//0x0064 (100) random delay value
HAL_I2C_Mem_Read(&hi2c1,i2c_devAddr|0x1,GYRO_XOUT_H,1,&ugyrox_h,1,0x0064);
HAL_I2C_Mem_Read(&hi2c1,i2c_devAddr|0x1,GYRO_XOUT_L,1,&ugyrox_l,1,0x0064);
// gyro data in MPU is in 2s complement
gyrox_h = (signed char) ugyrox_h;
gyrox_l = (signed char) ugyrox_l;
//gyro_500 = 1.0f/65.5f (gyro_sensitivity: 500dps)
gyroData = ((float) ((gyrox_h<<8) | gyrox_l))*gyro_500;
printf("%x \t %x \t %f \n ",gyrox_h,gyrox_l,gyroData);
}
Gyro Configuration
FS_SEL = 1, for 500dps (page 14 of register map PDF)
Sensitivity factor is 65.5, "gyro_500" is 1/65.5 in the code (page 12 of datasheet)
|
There is a lot of information out there on how to tune quadcopter PID parameters. However, it mostly covers only the master loop that takes user input and fused absolute angles. How, and in which order, do I tune my cascaded PID loops? Do I tune the rate loop first? How can I tell if it's tuned correctly? I can make it spin the drone twice per second around its own axis, even at low throttle. It stops quickly when I reset the stick position.
I feel stupid because I cannot tune the controller I built myself, but maybe you can explain it to me.
|
I am going through different texts (Spong Robot Modeling and Control, Murray Mathematical Introduction to Robotic Manipulations) and I am seeing different Jacobians developed for the same RRRP Manipulator (the SCARA).
My misunderstanding is similar to the one presented in this question. Overall, I am trying to develop the forward kinematics and Jacobian for the SCARA robot using exponentials of twists to test my knowledge, but I keep coming to different conclusions. I have found the following statement in Peter Corke's Robotics, Vision and Control:
...compared to the Jacobian of Sect.8.1, these Jacobians give the velocity of the end effector as a velocity twist, not a spatial velocity as defined on page 65. To obtain the Jacobian as described in Sect.8.1, we must apply the transformation...
$J^0 = \begin{bmatrix} I_{3x3} & -[t_{E}^0]_x \\ 0_{3x3} & I_{3x3} \end{bmatrix} J^V_0$
I can get to Murray's:
But I am trying to get to Spong's Jacobian:
Any help/clarification would be greatly appreciated!
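In case my reading of the quoted transform is itself the problem, this is how I am applying it numerically (the translation $t_E^0$ and the Jacobian values are placeholders):
# Applying J^0 = [[I, -skew(t)], [0, I]] @ J^V_0 numerically; t is the
# end-effector position in frame 0 (placeholder values below).
import numpy as np

def skew(t):
    # 3x3 skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

t = np.array([0.3, 0.1, 0.0])           # assumed end-effector position t_E^0
T = np.block([[np.eye(3), -skew(t)],
              [np.zeros((3, 3)), np.eye(3)]])

J_twist = np.random.rand(6, 4)          # stand-in for the SCARA velocity-twist Jacobian
J_spatial = T @ J_twist                 # Jacobian in the other convention
print(J_spatial.shape)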
|
I have a sensor (3 axis gyroscope) which can rotate and measure angular velocity in 3 dimensions (aligned with the sensor).
I know what its current orientation is with respect to the world frame. Call this quaternion qs.
I take a reading from my gyroscope and integrate it to give me a rotation in the sensor frame. Call this quaternion qr.
I now want to apply the rotation qr to the current orientation qs to obtain the new orientation, qs'.
But I cannot use qr directly as it describes a rotation in the sensor body frame.
I need to transform my rotation quaternion into the world frame, and then I could just apply it to the orientation i.e. qs' = qs * qr_world.
But I am really struggling to understand how I can perform this transformation qr -> qr_world.
Does this even make sense? I wonder if I have fundamentally misunderstood some concepts here. If it does make sense, then I am specifically interested in understanding how to do this using quaternion operations (if that is possible) rather than rotation matrices or euler angles.
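To make the question concrete, this is the operation I am unsure about, written out with plain quaternion products (Hamilton convention, q = [w, x, y, z], assumed; whether the conjugation below is the right way to move qr into the world frame is exactly the question):
# Quaternion utilities (Hamilton convention, q = [w, x, y, z]).
import numpy as np

def q_mult(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

qs = np.array([np.sqrt(0.5), 0.0, 0.0, np.sqrt(0.5)])  # example: 90 deg about world z
a = np.deg2rad(10.0)
qr = np.array([np.cos(a/2), np.sin(a/2), 0.0, 0.0])    # example: 10 deg about body x

qr_world = q_mult(q_mult(qs, qr), q_conj(qs))  # body-frame rotation expressed in world frame
qs_new_a = q_mult(qr_world, qs)                # apply world-frame rotation (pre-multiply)
qs_new_b = q_mult(qs, qr)                      # or compose the body-frame increment directly
print(np.allclose(qs_new_a, qs_new_b))         # True: the two routes agree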
|
I am setting up a Gazebo model for use with the navigation stack. I have been reading the Navigation Tuning Guide and am confused about the lidar data in the odom frame. I would think that the tuning guide, when it says:
"The first test checks how reasonable the odometry is for rotation. I open up rviz, set the frame to "odom," display the laser scan the robot provides, set the decay time on that topic high (something like 20 seconds), and perform an in-place rotation. Then, I look at how closely the scans match each other on subsequent rotations. Ideally, the scans will fall right on top of each other, but some rotational drift is expected, so I just make sure that the scans aren't off by more than a degree or two. (Nav Stack Tuning)"
it means that the lidar data is supposed to be in approximately the same place before, during, and after the rotation. From what I have been reading, it seems that the sweeping swirls I see are considered correct? I have been trying to use gmapping in my simulation, and whenever I rotate, the map gets horribly disfigured; I believe the odometry is to blame.
I have recorded what the lidar data looks like in the odom frame. Is this correct, or should it look different?
I followed this tutorial to build the initial model and simulate it. I created a (visually) crude model with two wheels (left and right) that move and two frictionless casters (front and back) using their general framework. I changed the shape of the robot but just followed their procedure and tried to reproduce it. I have tried to flip the x rotation for the left and right wheels from -pi/2 to pi/2 and that just reversed the direction of motion, which I expected, but does not change the issue of streaky lidar from the odom frame. I am puzzled because the straight odometry data keeps the laser scans in the same position (as one would expect) but when I rotate the robot I get the streaks. I don't really know the mechanism behind calculating the odometry data in Gazebo so I am stuck as to fixing this issue.
The urdf file is shown:
<?xml version="1.0"?>
<robot name="bender" xmlns:xacro="http://ros.org/wiki/xacro">
<xacro:property name="pi" value="3.141592653589794" />
<xacro:property name="base_len" value="0.89" />
<xacro:property name="base_wid" value="0.80" />
<xacro:property name="base_height" value="0.20" />
<xacro:property name="caster_length" value="0.10" />
<xacro:property name="caster_radius" value="0.15" />
<xacro:property name="wheel_length" value="0.10" />
<xacro:property name="wheel_radius" value="0.15" />
<xacro:property name="update_rate" value="50"/>
<xacro:property name="hokuyo_size" value="0.05"/>
<xacro:macro name="default_inertial" params="mass">
<inertial>
<mass value="${mass}" />
<inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="1.0" iyz="0.0" izz="1.0" />
</inertial>
</xacro:macro>
<material name="white">
<color rgba="1 1 1 1.5"/>
</material>
<link name="base_link">
<visual>
<geometry>
<box size="${base_len} ${base_wid} ${base_height}"/>
</geometry>
<material name="white"/>
</visual>
<collision>
<geometry>
<box size="${base_len} ${base_wid} ${base_height}"/>
</geometry>
</collision>
<xacro:default_inertial mass="1"/>
</link>
<xacro:macro name="caster" params="position reflect">
<joint name="${position}_wheel_joint" type="fixed">
<parent link="base_link"/>
<child link="${position}_wheel"/>
<axis xyz="0 0 1"/>
<origin xyz="${reflect*(base_wid/2)} 0 ${-1 * base_height}" rpy="${pi/2} 0 0"/>
</joint>
<link name="${position}_wheel">
<visual>
<geometry>
<sphere radius="${caster_radius}"/>
</geometry>
<material name="white"/>
</visual>
<collision>
<geometry>
<sphere radius="${caster_radius}"/>
</geometry>
</collision>
<xacro:default_inertial mass="0.5"/>
</link>
<!-- This block provides the simulator (Gazebo) with information on a few additional
physical properties. See http://gazebosim.org/tutorials/?tut=ros_urdf for more-->
<gazebo reference="${position}_wheel">
<mu1 value = "0.0"/>
<mu2 value = "0.0"/>
<kp value = "10000000.0"/>
<kd value = "1.0"/>
<material>Gazebo/Grey/</material>
</gazebo>
</xacro:macro>
<xacro:macro name="wheel" params="position reflect">
<joint name="${position}_wheel_joint" type="continuous">
<parent link="base_link"/>
<child link="${position}_wheel"/>
<axis xyz="0 0 1"/>
<origin xyz="0 ${reflect*(base_wid/2)} ${-1*base_height}" rpy="${-pi/2} 0 0"/>
</joint>
<link name="${position}_wheel">
<visual>
<geometry>
<cylinder length = "${wheel_length}" radius="${wheel_radius}"/>
</geometry>
<material name="white"/>
</visual>
<collision>
<geometry>
<cylinder length = "${wheel_length}" radius="${wheel_radius}"/>
</geometry>
</collision>
<xacro:default_inertial mass="0.5"/>
</link>
<!-- This block provides the simulator (Gazebo) with information on a few additional
physical properties. See http://gazebosim.org/tutorials/?tut=ros_urdf for more-->
<gazebo reference="${position}_wheel">
<mu1 value = "200.0"/>
<mu2 value = "100.0"/>
<kp value="1000000.0" />
<kd value="1.0" />
<fdir1 value="1 0 0"/>
<material>Gazebo/Grey/</material>
</gazebo>
<!-- This block connects the wheel joint to an actuator (motor), which informs both
simulation and visualization of the robot -->
<transmission name="${position}_wheel_trans">
<type>transmission_interface/SimpleTransmission</type>
<actuator name="${position}_wheel_motor">
<mechanicalReduction>1</mechanicalReduction>
</actuator>
<joint name="${position}_wheel_joint">
<hardwareInterface>hardware_interface/VelocityJointInterface</hardwareInterface>
</joint>
</transmission>
</xacro:macro>
<!-- Creating the actual robot -->
<xacro:caster position="front" reflect="1"/>
<xacro:caster position="back" reflect="-1"/>
<xacro:wheel position="right" reflect="1"/>
<xacro:wheel position="left" reflect="-1"/>
<!-- Gazebo plugin for ROS Control -->
<gazebo>
<plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so">
<robotNamespace>/</robotNamespace>
</plugin>
</gazebo>
<!-- Hokuyo Laser Sensor -->
<link name="hokuyo_link">
<visual>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<box size="${hokuyo_size} ${hokuyo_size} ${1.5*hokuyo_size}"/>
</geometry>
<material name="Blue" />
</visual>
</link>
<joint name="hokuyo_joint" type="fixed">
<origin xyz="${base_wid/2} 0 ${base_height+hokuyo_size/4}" rpy="0 0 0" />
<parent link="base_link"/>
<child link="hokuyo_link" />
</joint>
<gazebo reference="hokuyo_link">
<material>Gazebo/Blue</material>
<turnGravityOff>false</turnGravityOff>
<sensor type="ray" name="head_hokuyo_sensor">
<pose>${hokuyo_size/2} 0 0 0 0 0</pose>
<visualize>false</visualize>
<update_rate>40</update_rate>
<ray>
<scan>
<horizontal>
<samples>720</samples>
<resolution>1</resolution>
<min_angle>-1.570796</min_angle>
<max_angle>1.570796</max_angle>
</horizontal>
</scan>
<range>
<min>0.10</min>
<max>10.0</max>
<resolution>0.001</resolution>
</range>
</ray>
<plugin name="gazebo_ros_head_hokuyo_controller" filename="libgazebo_ros_laser.so">
<topicName>/scan</topicName>
<frameName>hokuyo_link</frameName>
</plugin>
</sensor>
</gazebo>
</robot>
I set up my differential drive controller as shown in the tutorial, with my own values, here:
type: "diff_drive_controller/DiffDriveController"
publish_rate: 50
left_wheel: ['left_wheel_joint']
right_wheel: ['right_wheel_joint']
# Odometry covariances for the encoder output of the robot. These values should
# be tuned to your robot's sample odometry data, but these values are a good place
# to start
pose_covariance_diagonal : [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0]
twist_covariance_diagonal: [0.001, 0.001, 1000000.0, 1000000.0, 1000000.0, 1000.0]
# Wheel separation and radius multipliers
wheel_separation_multiplier: 1.0 # default: 1.0
wheel_radius_multiplier : 1.0 # default: 1.0
# Top level frame (link) of the robot description
base_frame_id: base_link
# Velocity and acceleration limits for the robot
# default min = -max
linear:
x:
has_velocity_limits : true
max_velocity : 5.0 # m/s
has_acceleration_limits: true
max_acceleration : 0.6 # m/s^2
angular:
z:
has_velocity_limits : true
max_velocity : 1.0 # rad/s
has_acceleration_limits: true
max_acceleration : 0.5 # rad/s^2
has_jerk_limits : true
max_jerk : 2.5 # rad/s^3
I leave the wheel_separation and wheel_radius out because I wish for them to be generated automatically. I have put in different values, including what I think they are, and it does not change the streaking in lidar data when viewed from the odom frame.
I have also posed the question here.
Edit:
I have compiled and ran code from the Book ROS Robotics Projects. Chapter 9 code has a simulated robot. I put fixed frame to odom and when I move forward there is 'streaking' in the x direction but not when I rotate. This makes sense to me but I am wondering why my simulated robot does not behave like this.
|
I am using ROS Kinetic on Ubuntu 16.04. My first error is:
Traceback (most recent call last):
File "/opt/ros/kinetic/share/ros/core/rosbuild/bin/check_same_directories.py", line 46, in <module>
raise Exception
Exception
CMake Error at /opt/ros/kinetic/share/ros/core/rosbuild/private.cmake:102 (message):
[rosbuild] rospack found package "controller-1" at "", but the current
directory is "/home/robot/rtt-exercises-2.7.0/controller-1". You should
double-check your ROS_PACKAGE_PATH to ensure that packages are found in the
correct precedence order.
Call Stack (most recent call first):
/opt/ros/kinetic/share/ros/core/rosbuild/public.cmake:177 (_rosbuild_check_package_location)
CMakeLists.txt:24 (rosbuild_init)
-- Configuring incomplete, errors occurred!
See also "/home/robot/rtt-exercises-2.7.0/controller-1/build/CMakeFiles/CMakeOutput.log".
So when I enter the command " export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:~/rtt-example/rtt_examples-rtt-2.0-examples/rtt-exercises/controller-1 " then I get the error below:
Failed to invoke /opt/ros/kinetic/bin/rospack deps-manifests controller-1
[rospack] Error: package 'controller-1' depends on non-existent package 'rtt' and rosdep claims that it is not a system dependency. Check the ROS_PACKAGE_PATH or try calling 'rosdep update'
CMake Error at /opt/ros/kinetic/share/ros/core/rosbuild/public.cmake:129 (message):
Failed to invoke rospack to get compile flags for package 'controller-1'.
Look above for errors from rospack itself. Aborting. Please fix the
broken dependency!
Call Stack (most recent call first):
/opt/ros/kinetic/share/ros/core/rosbuild/public.cmake:207 (rosbuild_invoke_rospack)
CMakeLists.txt:24 (rosbuild_init)
-- Configuring incomplete, errors occurred!
See also "/home/robot/rtt-exercises-2.7.0/controller-1/build/CMakeFiles/CMakeOutput.log"
I have already installed the rtt-ros-integration package and the orocos-toolchain package from source. And when I enter the command " source ~/ws/underlay_isolated/installd_isolated/setup.sh " it repeats the first error again. So how should I solve this error? I was working through the rtt-example package tutorial to learn OROCOS basics. This package is also available [here:]
|
StackExchange community!
I need guidance on what to use for generating sensor data in MATLAB. My purpose is to optimize the exploration of an unknown environment using a multi-robot system. For that, I need to simulate sonar sensor signals. The question is: how do I simulate them in MATLAB?
I found the sensorsig function in the Phased Array System Toolbox and a link about using specialized ROS messages. Do you know of any other solutions?
The requirements for the sensors are a) to identify frontier points and b) to detect obstacles. Noise can be ignored.
|
I am building a quadcopter and I am on the last part of it (normally the exciting part) where I bind the controller to the receiver.
I can get the receiver, which is plugged into the quadcopter, to be recognized by the controller. The problem is that I can't get the controller to rotate the propellers.
When I touch the receiver it is hot. Not just warm, but very hot. I replaced the receiver already since it looked like the heat was too much and melted something on it.
Any thoughts as to why the receiver is so hot? Maybe it's because I hooked something up incorrectly? Perhaps the heat is causing the controller not to turn the motors?
Any thoughts or suggestions would be appreciated.
HARDWARE:
Controller - FlySky FS-i6
Receiver - FS-iA6 (6 channels, 2.4 GHZ)
Power Supply - Tattu R-Line 14.8V 1300mAh
The board it connects to is using 5v.
|