I can't publish a message over WiFi.
Can you help me?
Sorry, I don't speak English well.
Thank you.
Originally posted by turtle on ROS Answers with karma: 17 on 2014-09-09
Post score: 0
Original comments
Comment by BennyRe on 2014-09-09:
If you provide more information we can help you.
Comment by turtle on 2014-09-09:
thank you
I can only publish messages on my own PC;
I can't publish to another PC on the LAN.
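For context, publishing between two PCs usually fails because the ROS networking environment variables are inconsistent: every machine must point ROS_MASTER_URI at the same master, and each must advertise an address the others can actually reach (ROS_IP or ROS_HOSTNAME). A minimal sanity-check sketch; the helper name is invented for illustration:

```python
import os
from urllib.parse import urlparse

def check_ros_network(env):
    """Return a list of likely multi-machine misconfigurations.

    `env` is a dict of environment variables (e.g. os.environ).
    """
    problems = []
    master = env.get("ROS_MASTER_URI", "")
    host = urlparse(master).hostname
    if not master:
        problems.append("ROS_MASTER_URI is not set")
    elif host in (None, "localhost", "127.0.0.1"):
        # Other machines cannot reach a master advertised as localhost.
        problems.append("ROS_MASTER_URI points at localhost; use the master's LAN address")
    if not (env.get("ROS_IP") or env.get("ROS_HOSTNAME")):
        # Without these, nodes may advertise a hostname peers cannot resolve.
        problems.append("set ROS_IP (or ROS_HOSTNAME) to an address reachable from the other PC")
    return problems

# A consistent two-machine setup reports no problems.
good = {"ROS_MASTER_URI": "http://192.168.1.10:11311", "ROS_IP": "192.168.1.20"}
print(check_ros_network(good))  # []
print(check_ros_network({"ROS_MASTER_URI": "http://localhost:11311"}))
```

The same variables must be exported in every shell that runs a node, on both machines.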
|
Hi ROS Users,
The ROS community has grown tremendously over the last 2 years, and it is good to be part of this growing community. Previously I tried to install ROS (Groovy) on the Raspberry Pi using this tutorial and it was successful (Debian installation). I also understand that there is a Hydro installation (from source), which requires a lot of time for compilation. So now my questions are:
how should one proceed to install the latest version of ROS (For eg. Indigo ) on Raspberry Pi from source?
how to create a ROS Indigo debian release, after installing from source (I would like to try this)?
It would also be much appreciated if someone could give pointers to a Debian installation of Hydro on the Raspberry Pi. Thanks again, and thanks to the whole community,
Best Regards,
Murali
Originally posted by MKI on ROS Answers with karma: 246 on 2014-09-09
Post score: 1
Original comments
Comment by Airuno2L on 2014-09-09:
It would be nice if there was a collection of images somewhere for common hardware such as the Raspberry Pi and Beaglebone Black with Ubuntu + ROS preinstalled.
Comment by ccapriotti on 2014-09-09:
This has already been discussed in other topics. It turns out the cost (money and man-hours) to maintain such images AND the binaries + repositories is high.
Not practical in the end. Community goals do not lean that way (which is a nice way to say that those platforms are not that popular).
|
The tutorials for the navigation_2d package are provided here. The first 3 work flawlessly and, as a beginner, I learned a lot from them. However, the next 2 tutorials have very little documentation, namely the one corresponding to the tutorial4.launch file in the package and a tutorial that helps one implement their own exploration strategy. They can be found at these links:
tutorial4.launch
Exploration Strategy
I was wondering if anyone has managed to successfully run the tutorial4.launch file, the one which deals with multi-robot exploration and mapping. I would greatly appreciate it if you could give me a few pointers on how to set it up, or maybe direct me to a link with a more detailed explanation of the working methodology.
Thank You!
Originally posted by Ashwin27 on ROS Answers with karma: 34 on 2014-09-09
Post score: 0
Original comments
Comment by sobot on 2015-05-21:
Hello @Ashwin27, I have some problems getting nav2d exploration to run on my TurtleBot. May I ask what equipment you used for your tasks with nav2d?
|
ROS Indigo
Ubuntu 14.04 Trusty
When I launch:
roslaunch clam_moveit_config move_group.launch
I get:
[ERROR] [1410255339.968987526]: Exception while loading planner 'ompl_interface/OMPLPlanner': According to the loaded plugin descriptions the class ompl_interface/OMPLPlanner with base class type planning_interface::PlannerManager does not exist. Declared types are
Available plugins:
[ WARN] [1410255340.750304310]: MoveGroup running was unable to load ompl_interface/OMPLPlanner
contents of ompl_planning_pipeline.launch :
<launch>
<!-- OMPL Plugin for MoveIt! -->
<arg name="planning_plugin" value="ompl_interface/OMPLPlanner" />
<!-- The request adapters (plugins) used when planning with OMPL.
ORDER MATTERS -->
<arg name="planning_adapters" value="default_planner_request_adapters/AddTimeParameterization
default_planner_request_adapters/FixWorkspaceBounds
default_planner_request_adapters/FixStartStateBounds
default_planner_request_adapters/FixStartStateCollision
default_planner_request_adapters/FixStartStatePathConstraints" />
<arg name="start_state_max_bounds_error" value="0.1" />
<param name="planning_plugin" value="$(arg planning_plugin)" />
<param name="request_adapters" value="$(arg planning_adapters)" />
<param name="start_state_max_bounds_error" value="$(arg start_state_max_bounds_error)" />
<rosparam command="load" file="$(find clam_moveit_config)/config/kinematics.yaml"/>
<rosparam command="load" file="$(find clam_moveit_config)/config/ompl_planning.yaml"/>
</launch>
ompl_planning.yaml :
planner_configs:
SBLkConfigDefault:
type: geometric::SBL
LBKPIECEkConfigDefault:
type: geometric::LBKPIECE
RRTkConfigDefault:
type: geometric::RRT
RRTConnectkConfigDefault:
type: geometric::RRTConnect
LazyRRTkConfigDefault:
type: geometric::LazyRRT
ESTkConfigDefault:
type: geometric::EST
KPIECEkConfigDefault:
type: geometric::KPIECE
RRTStarkConfigDefault:
type: geometric::RRTstar
BKPIECEkConfigDefault:
type: geometric::BKPIECE
arm:
planner_configs:
- SBLkConfigDefault
- LBKPIECEkConfigDefault
- RRTkConfigDefault
- RRTConnectkConfigDefault
- ESTkConfigDefault
- KPIECEkConfigDefault
- BKPIECEkConfigDefault
- RRTStarkConfigDefault
projection_evaluator: joints(shoulder_pan_joint,shoulder_pitch_joint)
longest_valid_segment_fraction: 0.05
gripper_group:
planner_configs:
- SBLkConfigDefault
- LBKPIECEkConfigDefault
- RRTkConfigDefault
- RRTConnectkConfigDefault
- ESTkConfigDefault
- KPIECEkConfigDefault
- BKPIECEkConfigDefault
- RRTStarkConfigDefault
Originally posted by jay75 on ROS Answers with karma: 259 on 2014-09-09
Post score: 4
|
Hi all,
I have the velocity topic /cmd_vel, of type geometry_msgs::Twist, given in the object's reference frame, and I need to transform the velocity to another frame, such as my robot's base frame. How can I do that, if I know the translation and rotation between the two frames?
I think it is different from a position transformation, because a velocity is 6-dimensional while a position is only 3.
Does someone have this knowledge? Any help is appreciated.
Originally posted by Qt_Yeung on ROS Answers with karma: 90 on 2014-09-09
Post score: 4
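A twist does not transform like a point: the angular part simply rotates, while the linear part picks up a lever-arm term from the angular velocity. A small numpy sketch of the usual rigid-body (adjoint) mapping; the signs depend on how the frames are defined, so treat this as an illustration rather than a drop-in solution:

```python
import numpy as np

def transform_twist(v_a, w_a, R_ba, p_ba):
    """Express a twist, given in frame A, in frame B.

    v_a, w_a : linear and angular velocity in frame A (3-vectors)
    R_ba     : rotation matrix taking A coordinates to B coordinates
    p_ba     : origin of frame A expressed in frame B
    Check the signs against your own frame conventions.
    """
    w_b = R_ba @ np.asarray(w_a)
    # The linear velocity rotates and gains a lever-arm contribution.
    v_b = R_ba @ np.asarray(v_a) + np.cross(p_ba, w_b)
    return v_b, w_b

# Pure rotation (90 deg about z): the twist just rotates with the frame.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
v_b, w_b = transform_twist([1.0, 0.0, 0.0], [0.0, 0.0, 0.5], R, [0.0, 0.0, 0.0])
print(v_b, w_b)
```

With tf you would look up R and p between the two frames at the stamp of the twist message and apply the same mapping.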
|
I am running ROS Indigo on a fresh install of Ubuntu 14.04 (Linux turtlebot 3.13.0-35-generic #62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux)
The only ROS package is the 1394-master branch as downloaded from github: https://github.com/ros-drivers/camera1394
When I try to run camera1394_node, rosrun can't find the executable:
turtlebot@turtlebot:~/ros$ rosrun camera1394 camera1394_node
[rosrun] Couldn't find executable named camera1394_node below /home/turtlebot/ros/src/camera1394-master
It's the same with camera1394_nodelet:
turtlebot@turtlebot:~/ros$ rosrun camera1394 camera1394_nodelet
[rosrun] Couldn't find executable named camera1394_nodelet below /home/turtlebot/ros/src/camera1394-master
I don't see any errors when I catkin_make:
$ cd ~/ros/ && catkin_make
Base path: /home/turtlebot/ros
Source space: /home/turtlebot/ros/src
Build space: /home/turtlebot/ros/build
Devel space: /home/turtlebot/ros/devel
Install space: /home/turtlebot/ros/install
####
#### Running command: "cmake /home/turtlebot/ros/src -DCATKIN_DEVEL_PREFIX=/home/turtlebot/ros/devel -DCMAKE_INSTALL_PREFIX=/home/turtlebot/ros/install" in "/home/turtlebot/ros/build"
####
-- The C compiler identification is GNU 4.8.2
-- The CXX compiler identification is GNU 4.8.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Using CATKIN_DEVEL_PREFIX: /home/turtlebot/ros/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/indigo
-- This workspace overlays: /opt/ros/indigo
-- Found PythonInterp: /usr/bin/python (found version "2.7.6")
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/turtlebot/ros/build/test_results
-- Looking for include file pthread.h
-- Looking for include file pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.9
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - camera1394
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'camera1394'
-- ==> add_subdirectory(camera1394-master)
-- Using these message generators: gencpp;genlisp;genpy
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- thread
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.26")
-- checking for module 'libdc1394-2'
-- found libdc1394-2, version 2.2.1
-- camera1394: 0 messages, 2 services
-- Configuring done
-- Generating done
-- Build files have been written to: /home/turtlebot/ros/build
####
#### Running command: "make -j4" in "/home/turtlebot/ros/build"
####
Scanning dependencies of target _camera1394_generate_messages_check_deps_SetCameraRegisters
Scanning dependencies of target std_msgs_generate_messages_cpp
Scanning dependencies of target _camera1394_generate_messages_check_deps_GetCameraRegisters
Scanning dependencies of target camera1394_gencfg
[ 0%] Built target std_msgs_generate_messages_cpp
[ 4%] Generating dynamic reconfigure files from cfg/Camera1394.cfg: /home/turtlebot/ros/devel/include/camera1394/Camera1394Config.h /home/turtlebot/ros/devel/lib/python2.7/dist-packages/camera1394/cfg/Camera1394Config.py
Scanning dependencies of target std_msgs_generate_messages_lisp
[ 4%] Built target std_msgs_generate_messages_lisp
[ 4%] [ 4%] Built target _camera1394_generate_messages_check_deps_GetCameraRegisters
Built target _camera1394_generate_messages_check_deps_SetCameraRegisters
Scanning dependencies of target std_msgs_generate_messages_py
Scanning dependencies of target camera1394_generate_messages_cpp
Scanning dependencies of target camera1394_generate_messages_lisp
[ 4%] Built target std_msgs_generate_messages_py
[ 12%] [ 12%] Generating Lisp code from camera1394/SetCameraRegisters.srv
Generating C++ code from camera1394/SetCameraRegisters.srv
Scanning dependencies of target camera1394_generate_messages_py
Generating reconfiguration files for Camera1394 in camera1394
[ 16%] Wrote header file in /home/turtlebot/ros/devel/include/camera1394/Camera1394Config.h
Generating Python code from SRV camera1394/SetCameraRegisters
[ 20%] [ 20%] Built target camera1394_gencfg
Generating Lisp code from camera1394/GetCameraRegisters.srv
[ 25%] Generating Python code from SRV camera1394/GetCameraRegisters
[ 29%] Generating C++ code from camera1394/GetCameraRegisters.srv
[ 29%] Built target camera1394_generate_messages_lisp
[ 33%] Generating Python srv __init__.py for camera1394
[ 33%] Built target camera1394_generate_messages_py
[ 33%] Built target camera1394_generate_messages_cpp
Scanning dependencies of target camera1394_generate_messages
Scanning dependencies of target camera1394_nodelet
Scanning dependencies of target camera1394_node
[ 33%] Built target camera1394_generate_messages
[ 37%] [ 41%] [ 45%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/dev_camera1394.cpp.o
Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/camera1394_node.cpp.o
[ 50%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/driver1394.cpp.o
Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/nodelet.cpp.o
[ 54%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/features.cpp.o
[ 58%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/format7.cpp.o
[ 62%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/driver1394.cpp.o
[ 66%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/modes.cpp.o
[ 70%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/registers.cpp.o
[ 75%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/trigger.cpp.o
[ 79%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/dev_camera1394.cpp.o
[ 83%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/features.cpp.o
Linking CXX executable /home/turtlebot/ros/devel/lib/camera1394/camera1394_node
[ 83%] Built target camera1394_node
[ 87%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/format7.cpp.o
[ 91%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/modes.cpp.o
[ 95%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/registers.cpp.o
[100%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/trigger.cpp.o
Linking CXX shared library /home/turtlebot/ros/devel/lib/libcamera1394_nodelet.so
[100%] Built target camera1394_nodelet
Originally posted by benabruzzo on ROS Answers with karma: 79 on 2014-09-09
Post score: 0
Original comments
Comment by benabruzzo on 2014-09-10:
Tutorial fail on my part:
source ./devel/setup.bash
Running this allows me to now run camera1394_node; the nodelet is still missing.
|
Just wanted to know the state-of-the-art in path planners that incorporate dynamic obstacle avoidance. And some of the well known libraries for this sort of stuff.
UPDATE: I'm not talking about local planners like the dynamic window approach, which treat dynamic obstacles as a static path planning problem with the additional dimension of time.
I'm looking for approaches like the velocity obstacle and others that model and/or predict the motion of obstacles and plan accordingly.
EDIT: Someone mentioned OMPL and MoveIt! in the comments. But AFAIK, OMPL is only for sampling-based planners? And so these are not as useful for field robotics. Also, shouldn't dynamic obstacle avoidance involve much more than planning in a static space?
Thank you very much!
Originally posted by 2ROS0 on ROS Answers with karma: 1133 on 2014-09-09
Post score: 4
Original comments
Comment by bvbdort on 2014-09-17:
I think robotics.stackexchange is the right forum for this question.
Comment by 2ROS0 on 2014-09-17:
Sounds good. Thanks!
Comment by 2ROS0 on 2014-10-24:
Changed Q to be ROS specific.
Comment by Airuno2L on 2014-10-27:
Not sure why this was closed, it seems like a relevant discussion to have.
Comment by VEGETA on 2016-05-28:
I'd like to know the answer too. If you know anything plz send it here [email protected]
thanks
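For concreteness, in the simplest disc-robot case the velocity obstacle mentioned above reduces to a cone test: a candidate relative velocity leads to collision exactly when it points inside the cone subtended by the inflated obstacle. A toy 2D sketch of that test, not any particular library's API:

```python
import numpy as np

def in_velocity_obstacle(p_rel, v_rel, r_sum):
    """True if the relative velocity leads to collision (disc robots).

    p_rel : obstacle position minus robot position
    v_rel : robot velocity minus obstacle velocity
    r_sum : sum of the two radii
    Collision iff the ray from the robot along v_rel passes within
    r_sum of the obstacle, i.e. v_rel lies inside the collision cone.
    """
    p = np.asarray(p_rel, float)
    v = np.asarray(v_rel, float)
    dist = np.linalg.norm(p)
    if dist <= r_sum:
        return True  # already in collision
    speed = np.linalg.norm(v)
    if speed == 0.0:
        return False
    # Compare the angle between v_rel and the obstacle direction
    # with the cone half-angle.
    cos_to_obstacle = np.dot(v, p) / (speed * dist)
    cone_half_angle = np.arcsin(r_sum / dist)
    return np.arccos(np.clip(cos_to_obstacle, -1.0, 1.0)) <= cone_half_angle

# Heading straight at an obstacle 5 m ahead is inside the cone...
print(in_velocity_obstacle([5.0, 0.0], [1.0, 0.0], 0.5))   # True
# ...while heading perpendicular to it is not.
print(in_velocity_obstacle([5.0, 0.0], [0.0, 1.0], 0.5))   # False
```

Approaches that predict obstacle motion replace the fixed p_rel/v_rel here with a model of where the obstacle will be.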
|
Hello
I'm trying to set up the navigation stack on a robot possessing only odometry sensors (wheel encoder, IMU published to /odom /imu). No camera/laser. The robot is also using robot_state_publisher to publish the transformations between robot coordinates.
I want to use only odometry sources (wheel encoder, IMU) to navigate the robot, so my plan is to use robot_pose_ekf to fuse the odometry data and use fake_localization to localize the robot on the known map (since no laser is involved).
So I was wondering where I can change the default parameters in order to use robot_pose_ekf and fake_localization in my navigation system. Also, I could not find the launch file for robot_pose_ekf in the package directory (/opt/ros/hydro/share/robot_pose_ekf) as mentioned in the tutorial. There is no launch file for fake_localization in /opt/ros/hydro/share/fake_localization either.
Thanks
Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-09-09
Post score: 0
Original comments
Comment by ahendrix on 2014-09-09:
The fake_localization package only contains a node. You'll have to write your own launch file for it.
|
Is there an equivalent package to camera pose calibration in ROS Hydro? Finding the exact transform connecting the frame of a new camera to an existing robot's tf tree is almost impossible ...
Originally posted by atp on ROS Answers with karma: 529 on 2014-09-09
Post score: 4
|
Hi,
I am new to the ROS community. I have a dedicated build server for all ROS packages. Once a package is successfully built, I want to transfer only the package binaries to the target machine.
The target machine has ROS installed. Currently I am following this approach:
catkin_make -install -DCMAKE_INSTALL_PREFIX package_name
compress the install directory
transfer and decompress the ROS package
Update ROS_PACKAGE_PATH on target and
rosrun pacakge_name executable
However, ROS is failing to locate the package.
I am guessing I need to update the setup scripts generated in the build process. However, these scripts have hardcoded values of the build project path, so I cannot directly copy-paste them.
I even tried to replace the catkin workspace directory path with the install path on the target machine. Still rosrun cannot find the package.
Can anyone please guide me in this regard?
I am using ROS 1.11.8 on Ubuntu 14.04
Originally posted by Jarvis on ROS Answers with karma: 67 on 2014-09-09
Post score: 2
Original comments
Comment by pachuc on 2014-09-16:
Were you able to figure something out?
Comment by Jarvis on 2014-10-28:
Yes. I have written a script to provide this functionality. One requirement for my script to work is to provide an install rule for each package and run catkin_make install to build the packages. If you need it, I can upload it to a separate GitHub repo and provide a link here.
|
Whenever I catkin_make I get all this unnecessary output like:
"set ARCHDIR to Linux64
use the location of executables to set EUSDIR
set EUSDIR to /opt/ros/hydro/share/euslisp/jskeus/eus"
after every single object is compiled.
How do I silence this?
Thanks
Arjun
Originally posted by agmenon on ROS Answers with karma: 1 on 2014-09-09
Post score: 0
|
I'm using AMCL and, looking at the particle cloud, it does a good job of estimating my position and accounting for drift. The odometry is periodically updated from AMCL but then reverts back to the drifted solution. Am I supposed to subscribe to amcl_pose and update my odometry node, or something else?
Originally posted by jseal on ROS Answers with karma: 258 on 2014-09-09
Post score: 0
Original comments
Comment by Sebastian Kasperski on 2014-09-10:
amcl_pose should contain the robot's position in the map. So you have to subscribe to this, or to tf, and get the 'map' -> 'robot' transform. What do you mean by updating the odometry node?
Comment by jseal on 2014-09-10:
My odometry node publishes a transform from base_link to odom, which is still being updated from the drifted odometry. It also publishes a static transform from odom to map.
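For reference, amcl does not rewrite the odometry: it publishes the map -> odom correction, and the robot's pose in the map is the composition of map -> odom with odom -> base_link (equivalently, read amcl_pose or the tf chain). A minimal 2D composition sketch, assuming (x, y, theta) poses:

```python
import math

def compose_2d(a, b):
    """Compose 2D poses a then b, each given as (x, y, theta).

    If a is the map -> odom correction and b the odom -> base_link
    pose from wheel odometry, the result is the robot's pose in map.
    """
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

# An identity correction leaves the odometry estimate unchanged...
print(compose_2d((0.0, 0.0, 0.0), (2.0, 1.0, 0.5)))  # (2.0, 1.0, 0.5)
# ...a non-trivial correction shifts/rotates the whole estimate.
print(compose_2d((1.0, 0.0, math.pi / 2), (2.0, 0.0, 0.0)))
```

This is why odom -> map should not be a static transform: amcl's localizer is what keeps that link up to date.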
|
Is there a way to set cmake variables in one package and export them to all depending packages?
For clarity:
If I set a cmake variable in one package's CMakeLists.txt like:
set( My_Var "Hello World" )
it now has that value in this package; I can e.g. echo it like
message( "My_Var = ${My_Var}" )
which will result in the output:
My_Var = Hello World
However, if I put the same message() call in another package (which depends on the previous one), the result is empty:
message( "My_Var = ${My_Var}" )
Resulting output:
My_Var =
Apparently, both packages have separate cmake namespaces. So, is there a way to set a variable in one package, mark it in some way to be exported into depending packages, and access its value in those depending packages?
Originally posted by Wolf on ROS Answers with karma: 7555 on 2014-09-10
Post score: 2
|
Hi all!
I'm working on an Arduino car driven directly by ROS topic commands. I have an Arduino Uno board with an Arduino Sensor Shield v5.0 installed. I'm running the basic publish and subscribe tutorials from rosserial:
http://wiki.ros.org/rosserial_arduino/Tutorials/Hello%20World
http://wiki.ros.org/rosserial_arduino/Tutorials/Blink
When using USB (shown as /dev/ttyACM0), things work well.
Then, I'm trying to connect with HC-05 bluetooth module. First I connect it with command:
sudo rfcomm connect /dev/rfcomm0 00:06:71:00:3E:87 1
Then I launch rosserial as before, with an additional argument:
rosrun rosserial_python serial_node.py _port:=/dev/rfcomm0 _baud:=9600
With the tutorial code on the car:
#include <ros.h>
#include <std_msgs/String.h>
ros::NodeHandle nh;
std_msgs::String str_msg;
ros::Publisher chatter("chatter", &str_msg);
char hello[13] = "hello world!";
void setup()
{
nh.getHardware()->setBaud(9600);
nh.initNode();
nh.advertise(chatter);
}
void loop()
{
str_msg.data = hello;
chatter.publish( &str_msg );
nh.spinOnce();
delay(1000);
}
The terminal becomes a waterfall of warnings:
[INFO] [WallTime: 1410329846.797489] ROS Serial Python Node
[INFO] [WallTime: 1410329846.814548] Connecting to /dev/rfcomm0 at 9600 baud
[WARN] [WallTime: 1410329849.792440] Serial Port read returned short (expected 72 bytes, received 8 instead).
[WARN] [WallTime: 1410329849.793548] Serial Port read failure:
[INFO] [WallTime: 1410329849.794408] Packet Failed : Failed to read msg data
[INFO] [WallTime: 1410329849.795036] msg len is 8
[WARN] [WallTime: 1410329850.814268] Serial Port read returned short (expected 16 bytes, received 13 instead).
[WARN] [WallTime: 1410329850.815325] Serial Port read failure:
[INFO] [WallTime: 1410329850.816327] Packet Failed : Failed to read msg data
[INFO] [WallTime: 1410329850.816984] msg len is 8
Most of the time it is complaining about the expected 72 bytes.
And the topic,
rostopic echo chatter
returns the message (hello world!) quite randomly (it correctly shows at 1 Hz when using USB).
I've done another experiment with the subscribe function. The Arduino car subscribes to std_msgs/Empty and the topic is published by
rostopic pub toggle_led std_msgs/Empty --rate=1
The result is similar: some of the commands do arrive (the sonar servo moves), but quite randomly, and sometimes it moves more than once per second (while publishing at 1 Hz).
I've tried to read the source but still couldn't locate the problem.
Any help or suggestion are very welcome, thanks.
edit:
It turns out to be a baud rate problem with my Bluetooth module! The chip (YFRobot) is a cheap China-made one and not a real HC-06 or any officially supported chip. The common method of setting the baud rate in a console just won't work. There is a single post in some unknown Chinese forum that provides the datasheet (luckily I can read simplified Chinese ^^). After a weird setup process, it's fine now, except that the module just won't work beyond a certain rate (57600, I think).
Originally posted by EwingKang on ROS Answers with karma: 78 on 2014-09-10
Post score: 2
Original comments
Comment by EwingKang on 2014-09-10:
Hello ahendrix.
I've never set the baud rate other than with nh.getHardware()->setBaud() for the Bluetooth module. But if the baud rate were wrong, shouldn't it be completely unable to communicate? Using the Arduino IDE with its serial monitor and functions like Serial.begin/Serial.write gives no problem at all.
Comment by ahendrix on 2014-09-10:
Most bluetooth modules have a UART buad rate that cannot be set through normal software. They usually have some sort of AT command set for modifying the baud rate.
Comment by ahendrix on 2014-09-10:
Since the arduino serial console and Serial.write work over bluetooth, that means that the baud rate setting you're using matches the bluetooth module's setting, and it means that the problem is elsewhere.
Comment by EwingKang on 2014-09-10:
Mmmmm, ahendrix, thank you. I hope this can be fixed. I'll keep trying.
Comment by 130s on 2015-03-23:
@EwingKang I suggest you re-post the solution as an answer and select it as the right-answer so that others can tell this question has an answer.
Comment by EwingKang on 2015-03-23:
@130s OK, I see. I'm not sure whether it is okay to do so here at ROS Answers. Some Q&A sites forbid self-answering a question. Anyway, thanks.
edit: I see the line "you are encouraged to answer your own..." when editing my answer ^_^
|
In this description it is said that ApproximateTime works without any epsilon to set. So I infer that ExactTime needs an epsilon to be set to allow slightly different timestamps to match. But I found no methods to do that in the API reference. How can I set it? I already tried ApproximateTime but it's too slow.
Originally posted by mark_vision on ROS Answers with karma: 275 on 2014-09-10
Post score: 0
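For context, the ExactTime policy has no epsilon at all: it only pairs messages whose stamps are identical, while the Python ApproximateTimeSynchronizer exposes an explicit slop argument instead. A pure-Python toy of slop-based pairing; the real ApproximateTime policy is adaptive and more sophisticated than this:

```python
def pair_by_stamp(stamps_a, stamps_b, slop):
    """Greedily pair timestamps from two streams that differ by <= slop.

    An illustration of slop-based matching only, not the actual
    message_filters algorithm.
    """
    pairs, j = [], 0
    for ta in stamps_a:
        while j < len(stamps_b) and stamps_b[j] < ta - slop:
            j += 1  # too old to ever match this or later stamps
        if j < len(stamps_b) and abs(stamps_b[j] - ta) <= slop:
            pairs.append((ta, stamps_b[j]))
            j += 1
    return pairs

# slop=0 behaves like ExactTime: only identical stamps pair up.
print(pair_by_stamp([0.0, 0.1, 0.2], [0.0, 0.11, 0.2], slop=0.0))
# A small slop also pairs the slightly offset stamps.
print(pair_by_stamp([0.0, 0.1, 0.2], [0.0, 0.11, 0.2], slop=0.02))
```

If ExactTime misses matches, the stamps genuinely differ, so a slop-style policy (or restamping at the driver) is the usual way out.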
|
Ubuntu 14.04 Trusty
ROS Indigo
When I launch:
roslaunch clam_moveit_config moveit_rviz.launch
I get:
[ERROR] [1410332944.683556716]: PluginlibFactory: The plugin for class 'moveit_rviz_plugin/MotionPlanning' failed to load. Error: According to the loaded plugin descriptions the class moveit_rviz_plugin/MotionPlanning with base class type rviz::Display does not exist. Declared types are rviz/Axes rviz/Camera rviz/DepthCloud rviz/Effort rviz/FluidPressure rviz/Grid rviz/GridCells rviz/Illuminance rviz/Image rviz/InteractiveMarkers rviz/LaserScan rviz/Map rviz/Marker rviz/MarkerArray rviz/Odometry rviz/Path rviz/PointCloud rviz/PointCloud2 rviz/PointStamped rviz/Polygon rviz/Pose rviz/PoseArray rviz/Range rviz/RelativeHumidity rviz/RobotModel rviz/TF rviz/Temperature rviz/WrenchStamped rviz_plugin_tutorials/Imu
I guess I need to install something: apt-get install [what?]
Originally posted by jay75 on ROS Answers with karma: 259 on 2014-09-10
Post score: 1
|
I am configuring Eclipse to enable autocompletion for programming with rospy. So, can anyone please provide the paths you use in Eclipse for the following variables:
PATH
PYTHONPATH
ROS_MASTER_URI
ROS_PACKAGE_PATH
ROS_ROOT
I have followed all the steps from the tutorial here: http://wiki.ros.org/IDEs
But when I run the test.py file, it gives an error saying "No module named rospy".
I think I am not setting the environment variables correctly. Hence, could someone please provide the environment variables you use, so that I can correct my mistake, if any?
Also, if any of you have other suggestions, I would be pleased to hear them.
Thanks,
Originally posted by ish45 on ROS Answers with karma: 151 on 2014-09-10
Post score: 0
|
I would like to link roscpp into external software in order to communicate with the ROS system running on my robot.
The goal is to use my Qt5 GUI to control and configure the robot from different systems (Windows, Linux, Android...).
Is there a guide or examples about this process?
Thank you
Walter
Originally posted by Myzhar on ROS Answers with karma: 541 on 2014-09-10
Post score: 1
|
Hello folks,
I am using a Sick LMS291 and a USB-serial converter,
on Ubuntu 12.04 (Intel, 64-bit) with Hydro.
In Fuerte there was no problem using this laser, but we recently changed the ROS version, and this is the output of the sicktoolbox_wrapper for the LMS291:
$ rosrun sicktoolbox_wrapper sicklms
*** Attempting to initialize the Sick LMS...
Attempting to open device @ /dev/ttyUSB0
Device opened!
Attempting to start buffer monitor...
Buffer monitor started!
Attempting to set requested baud rate...
A Timeout Occurred! 2 tries remaining
A Timeout Occurred! 1 tries remaining
A Timeout Occurred - SickLIDAR::_sendMessageAndGetReply: Attempted max number of tries w/o success!
Failed to set requested baud rate...
Attempting to detect LMS baud rate...
Checking 19200bps...
A Timeout Occurred! 2 tries remaining
A Timeout Occurred! 1 tries remaining
A Timeout Occurred - SickLIDAR::_sendMessageAndGetReply: Attempted max number of tries w/o success!
Checking 38400bps...
Detected LMS baud @ 38400bps!
Operating @ 38400bps
Attempting to sync driver...
Driver synchronized!
*** Init. complete: Sick LMS is online and ready!
Sick Type: Sick LMS 291-S05
Scan Angle: 100 (deg)
Scan Resolution: 0.25 (deg)
Measuring Mode: 8m/80m; fields A,B,Dazzle
Measuring Units: Centimeters (cm)
[ INFO] [1410366255.940479716]: Variant setup not requested or identical to actual (100, 0.250000)
[ INFO] [1410366255.940548802]: Measuring units setup not requested or identical to actual ('Centimeters (cm)')
[ WARN] [1410366255.940781220]: You are using an angle smaller than 180 degrees and a scan resolution less than 1 degree per scan. Thus, you are in inteleaved mode and the returns will not arrive sequentially how you read them. So, the time_increment field will be misleading. If you need to know when the measurement was made at a time resolution better than the scan_time, use the whole 180 degree field of view.
Requesting measured value data stream...
Data stream started!
A Timeout Occurred - SickLIDAR::_recvMessage: Timeout occurred!
[ERROR] [1410366261.971579984]: Unknown error.
terminate called after throwing an instance of 'SickToolbox::SickThreadException'
Aborted (core dumped)
The strangest thing is that the node correctly identifies the laser, but when it tries to publish, it crashes with an unknown error.
Tell me if I am doing something wrong or if it is just a package problem. Some related (but not identical) problems are here:
http://answers.ros.org/question/99096/using-sicktoolbox-on-hydro/
Best regards.
************* EDIT ***************
I tried it in Indigo and it works fine. Maybe it is some problem related to the Hydro branch?
Originally posted by pmarinplaza on ROS Answers with karma: 330 on 2014-09-10
Post score: 0
|
Hi, I am new to ROS and I want to do SLAM using the gmapping package with laser scans and odometry data.
As a first step I just want to see, in rviz, the current map built from the laser scan at a fixed location (x = 0, y = 0, theta = 0).
I have a laser scan topic but I don't have odometry data yet, so I want to fake it. For this I think I should publish a TF with all zeros.
I think I should do some thing like this link:
1
2
3
Am I on the right path? Could you please tell me what I should do in detail, or show me some references?
Originally posted by AliAs on ROS Answers with karma: 63 on 2014-09-10
Post score: 0
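The "all zeros" idea amounts to broadcasting a constant identity odom -> base_link transform at a steady rate. A pure-Python sketch of the fields involved; the rospy/tf broadcaster boilerplate is omitted, and the frame names are the conventional ones rather than requirements:

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a pure rotation about z by `yaw`."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def fake_odom_transform(stamp):
    """Fields for a fixed odom -> base_link transform at the origin.

    In a real node these would be copied into a
    geometry_msgs/TransformStamped and sent with a tf broadcaster
    at a steady rate; the dict layout here is only illustrative.
    """
    return {
        "stamp": stamp,
        "frame_id": "odom",
        "child_frame_id": "base_link",
        "translation": (0.0, 0.0, 0.0),
        "rotation": yaw_to_quaternion(0.0),  # identity: (0, 0, 0, 1)
    }

print(fake_odom_transform(0.0)["rotation"])  # (0.0, 0.0, 0.0, 1.0)
```

gmapping then sees a robot that never moves according to odometry, which is exactly the fixed-location first step described above.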
|
Hi,
I have trouble with a nodelet which is responsible for converting a depth image to a laser scan. I have the required topics, namely /camera/depth/camera_info and /camera/depth/image_raw, and here is what my launch file looks like:
<launch>
<arg name="camera" value="camera"/>
<arg name="manager" value="$(arg camera)_nodelet_manager" />
<group ns="$(arg camera)">
<node pkg="nodelet" type="nodelet" name="$(arg manager)" respawn="true" args="manager"/>
<node pkg="nodelet" type="nodelet" name="openni2_camera" args="load openni2_camera/OpenNI2DriverNodelet $(arg manager)" respawn="true">
</node>
<node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan_loader" args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet $(arg manager)">
<param name="scan_height" value="10"/>
<param name="output_frame_id" value="/depth_camera_link"/>
<param name="range_min" value="0.3"/>
<remap from="image" to="/depth/image_raw"/>
<remap from="scan" to="/scan"/>
</node>
</group>
</launch>
My problem is that the /scan topic is empty and is not being published. However, when I do
rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw
the /scan topic is being published as expected. Do you have any suggestions?
Thanks a lot in advance.
Originally posted by zeinab on ROS Answers with karma: 88 on 2014-09-10
Post score: 1
|
I can't overcome the above problem. If I start to remove packages to install pyqt4-10, too many other apps would need to be removed (some of which I really need). Does anyone have a solution for this? Manual installation on another machine didn't really help; the dependency problem strikes me there too.
Cheers
Paul
Originally posted by poseidon on ROS Answers with karma: 1 on 2014-09-10
Post score: 0
|
Hi.
This question is related to this one: http://answers.ros.org/question/173804/generate-deb-from-ros-package/
Basically, what we need to do is to provide our ros packages to project partners and clients.
But we do not want to distribute the source code for some of the nodes (some yes, and some no).
And we don’t want either to appear yet in the official ROS buildfarm/jenkins/repo for project confidentiality required by some clients.
And we will want in the future to add obfuscation of python code and license management for C++ code with LMX for example...
So we’ve been digging around the ROS release process.
But things are not really clear to us yet.
Would you know, or could you advise us, how to generate such installers so that our clients can easily install them, we can track release numbers, and we ship only compiled code?
I think there are 2 ways:
The “easier” way, though I'm not sure it works well: generate a tar.gz of the catkin workspace after having cleaned it of source code. Do you have experience with this? Do we need to remove the devel folder? If we remove the src folder this doesn't work; do we need to leave at least the config and launch files, or more? Is this documented somewhere? Maybe we need to compile with catkin's release flag?
The more complex but cleaner and more long-term: generating a debian and putting it on a server.
I’ve seen that we can use Bloom to generate a Debian package automatically on GitHub, and then a pull request is generated to populate the ROS buildfarm…
But then, where is it compiled? Locally? By GitHub? By ros.org?
How can we specify/include in the debian only compiled code and not source code? (or a mix)
Can we generate only one Debian package for a metapackage, or will we have to generate as many .debs as there are packages?
Do you know how to disable this pull to ros.org and to GitHub (or how to configure the deb to be generated on our own git server)?
If we don’t publish it on ros.org, can we still use the track feature to manage release versions, tags, etc.?
Once we've generated the .deb, what kind of private server would we need so that clients can run apt-get (or any similar method)? Might a PPA be the solution? Or can one point apt-get at a private git server, since the release Debian will be there?
And finally, how/where does the obfuscation pop up in this process?
Thanks in advance for any advice and guidelines.
You will have understood that we know more or less what we want, but have little idea of the available options and how to implement this... what a program :-)
For point 2, for a first package that in particular defines its own services, we started to follow some steps found here http://answers.ros.org/question/173804/generate-deb-from-ros-package/ but without success yet...
We do run bloom-generate rosdebian --os-name ubuntu --os-version precise --ros-distro hydro without error in the prompt.
But when running fakeroot debian/rules binary the compiler doesn’t find the include files that are located in the "devel/include/" directory of the catkin workspace and thus aborts.
Those header files are auto-generated by catkin_make when defining services in my main code... Any idea on how to solve this?
Also, could you elaborate on the 2 alternatives you are proposing: checkinstall and dpkg-buildpackage ? Thanks in advance.
Damien
Originally posted by Damien on ROS Answers with karma: 203 on 2014-09-10
Post score: 8
Original comments
Comment by William on 2014-09-12:
@Damien I haven't forgotten about this, I'm at ROSCon right now, so it might be a few days before I get to this. Sorry!
|
I want to know what the convention is for creating ROS node handles. I have my ROS nodes, but I realized that for every subscriber/publisher in my node I have created an individual node handle.
I am doing this in one node/program, for example:
ros::NodeHandle motor_nh;
ros::NodeHandle velocity_nh;
ros::NodeHandle vel_callback_nh;
ros::NodeHandle imu_nh;
ros::NodeHandle alarm_sound_nh;
I am not sure if that is correct usage, or whether I am supposed to have one single node handle for the entire node/program.
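For illustration, the single-handle alternative I have in mind would look something like this (topic names and message types are placeholders, not my real ones):

```cpp
#include <ros/ros.h>
#include <std_msgs/Int8.h>
#include <sensor_msgs/Imu.h>

void imuCallback(const sensor_msgs::Imu::ConstPtr& msg)
{
  // handle IMU data
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "my_node");
  ros::NodeHandle nh;  // one handle shared by every publisher/subscriber

  ros::Publisher motor_pub = nh.advertise<std_msgs::Int8>("motor", 10);
  ros::Publisher alarm_pub = nh.advertise<std_msgs::Int8>("alarm_sound", 10);
  ros::Subscriber imu_sub  = nh.subscribe("imu", 10, imuCallback);

  ros::spin();
  return 0;
}
```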
Thanks
Originally posted by Pototo on ROS Answers with karma: 803 on 2014-09-10
Post score: 0
|
Related to the answer to this question: http://answers.ros.org/question/192057/grab-single-frame-from-gazebo-camera/.
I'm trying to write a C++ program that will allow me to capture a single frame from a camera in Gazebo, and I will be basing it on gazebo_ros_camera per the advice in that answer. Can someone explain to me what exactly gazebo_ros_camera is supposed to do? The source code documentation didn't really explain it, and the ROS wiki page only mentions it in passing in various places.
Originally posted by K. Zeng on ROS Answers with karma: 23 on 2014-09-10
Post score: 0
|
Hi everyone,
I started using the nav2d package and did the four tutorials with success.
In the 3rd tutorial you can make a robot autonomously explore an unknown map.
In the 4th, you can move two robots via joystick, exploring a map and have one robot localize itself via amcl.
What I'm trying to achieve is a sort of "mix" between the two tutorials, that is having two robots autonomously exploring and building the map while having one of them localizing itself.
I'm struggling with the rqt_graph and launch files, trying to understand how to make things work, but without results so far (I'm pretty new to ROS and don't know the nav2d package elements in detail). Can someone help me figure out how to achieve this goal?
Thank you very much for your kind help,
Regards.
Originally posted by Marco_F on ROS Answers with karma: 23 on 2014-09-11
Post score: 1
|
Hello,
I have a urdf which does not display its material correctly, here is the code:
<link name="chassis" >
<visual>
<geometry>
<box size="${sheet_sx} ${sheet_sy} ${sheet_sz}" />
</geometry>
<origin xyz="0.0 0.0 ${(chassis_sz + sheet_sz) / 2.0}" rpy="0.0 0.0 0.0" />
<material name="myColor">
<color rgba="0.0 0.0 1.0 1.0"/>
</material>
</visual>
<collision>
<geometry>
<box size="${chassis_sx} ${chassis_sy} ${chassis_sz}" />
</geometry>
<origin xyz="0.0 0.0 0.0" rpy="0.0 0.0 0.0" />
</collision>
<xacro:box_inertia sizeX="${chassis_sx}" sizeY="${chassis_sy}" sizeZ="${chassis_sz}" mass="${mb_mass - sheet_mass - 4 * wheel_mass}">
<origin xyz="0.0 0.0 0.0" rpy="0 0 0" />
</xacro:box_inertia>
</link>
The box should be blue, but it is actually white in Gazebo (and red in RViz, for some reason I do not know).
I tried many different ways to define the material (in a separate file, at the beginning of the file...) nothing seems to work.
I am using precise with hydro.
Any idea?
Originally posted by arennuit on ROS Answers with karma: 955 on 2014-09-11
Post score: 0
|
Hi there,
I have the RGBD SLAM v2 package (http://felixendres.github.io/rgbdslam_v2/) running on a Core i7 computer with a Nvidia GTX 780 graphics card. The computer runs Ubuntu 12.04.5 and ROS Hydro.
I want to use the package for robot localization, but it seems the publish rate of the transform I'm getting is too low.
In my launch file I have:
<node pkg="tf" type="static_transform_publisher" name="map_to_local_origin" output="screen" args="0 0 0 0 0 0 /map /local_origin 10" />
<node pkg="tf" type="static_transform_publisher" name="mav_to_camera" output="screen" args="-0.05 0.15 0.8 0 1.570796 0 vision camera_link 10" />
which are transforms that I need to exist for sending the pose estimation for the robot. I also have:
<!-- TF information settings -->
<param name="config/fixed_frame_name" value="/map"/>
<param name="config/ground_truth_frame_name" value=""/><!--empty string if no ground truth-->
<param name="config/base_frame_name" value="vision"/> <!-- /openni_camera for hand-held kinect. For robot, e.g., /base_link -->
<param name="config/fixed_camera" value="false"/> <!--is the kinect fixed with respect to base, or can it be moved (false makes sense only if transform betw. base_frame and openni_camera is sent via tf)-->
where 'vision' is considered the base of the robot. Up to this point everything is OK. The problem comes when I run the package and start processing. Using tf view_frames, I can see the following on the tf tree:
Broadcaster: /rgbdslam
Average rate: 10000.000 Hz
Most recent transform: 1409946186.276
Buffer length: 0.000 sec
which for sure is not right, given that, as said in http://answers.ros.org/question/54240/using-robot_pose_ekf-and-rgbdslam-for-path-planning-with-octomaps/?answer=54312#post-id-54312, the tf transform is supposedly being sent at a 10 Hz rate. I know it can be less, but it can't be as little as the following (issuing 'rosrun tf tf_monitor'):
Node: /rgbdslam 0.202653 Hz, Average Delay: 1.97535 Max Delay: 11.3705
Which is extremely slow! Does someone have a tip about what is happening or why it is so slow?
Thanks in advance!
Originally posted by TSC on ROS Answers with karma: 210 on 2014-09-11
Post score: 0
Original comments
Comment by msandertar on 2015-05-09:
Hi, I have the same problem, using the ROS Indigo distro. Some tf publishers seem to have a 10000 Hz frequency although I publish them at 10 Hz.
Could you solve your problem? How did you solve it?
Thanks.
|
ubuntu 14.04 trusty
ros indigo
As soon as my pick-and-place node reaches the pick step, the move_group launch terminal gives this error:
[ INFO] [1410435960.615041253]: Planning attempt 1 of at most 1
move_group: /usr/include/eigen3/Eigen/src/Core/DenseStorage.h:78: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 16>::plain_array() [with T = double; int Size = 16; int MatrixOrArrayOptions = 0]: Assertion `(reinterpret_cast<size_t>(eigen_unaligned_array_assert_workaround_gcc47(array)) & 0xf) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! ****"' failed.
[move_group-1] process has died [pid 6922, exit code -6, cmd /opt/ros/indigo/lib/moveit_ros_move_group/move_group __name:=move_group __log:=/home/jaysin/.ros/log/d4d874c0-39a8-11e4-a0cf-28e347742258/move_group-1.log].
log file: /home/jaysin/.ros/log/d4d874c0-39a8-11e4-a0cf-28e347742258/move_group-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
I have gcc 4.8 installed and have been to the mentioned website, but I am not too confident about what to do. I would like to know if there is a simple fix for this, and if no simple fix is available, what course of action should I take?
Originally posted by jay75 on ROS Answers with karma: 259 on 2014-09-11
Post score: 0
|
Hi,
I am currently trying to extract the world co-ordinates from a pointcloud reconstructed scene. I am currently using ROS-Hydro.
I have followed these two as my reference.
http://answers.ros.org/question/98011/how-to-convert-pclpointcloud2-to-pointcloudt-in-hydro/
http://wiki.ros.org/pcl_ros
I am subscribing to the point cloud message generated by stereo_image_proc.
My code is something like this.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv/highgui.h>
#include <cv_bridge/cv_bridge.h>
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <boost/foreach.hpp>
#include <pcl/PCLPointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
void cloud_cb (const sensor_msgs::PointCloud2 pc)
{
pcl::PCLPointCloud2 pcl_pc;
pcl_conversions::toPCL(pc, pcl_pc);
pcl::PointCloud<pcl::PointXYZ> cloud;
pcl::fromPCLPointCloud2(pcl_pc, cloud);
BOOST_FOREACH (const pcl::PointXYZ& pt, cloud.points)
{
// pt.x, pt.y, pt.z are the coordinates of this point
}
}
I want to have the depth value for every pixel and then publish it in the form of a sensor_msgs::Image message.
Please guide me.
Edit:
I have made some modifications.
for(int j=0; j<cloud.points.size();j++)
{
float x = cloud.points[j].x;
float y = cloud.points[j].y;
float z = cloud.points[j].z;
}
where x, y, z are the world coordinates as extracted from the cloud. Will this work?
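For an organized cloud (as produced by stereo_image_proc, with cloud.height > 1), I believe the point for image pixel (row r, col c) sits at index r * width + c, so the flat loop above and a per-pixel lookup should agree. A minimal sketch of that index arithmetic:

```cpp
#include <cstddef>

// Index of the point corresponding to pixel (row r, col c)
// in a row-major organized point cloud of the given width.
inline std::size_t cloudIndex(std::size_t width, std::size_t r, std::size_t c)
{
  return r * width + c;
}
```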
Originally posted by m1ckey on ROS Answers with karma: 21 on 2014-09-11
Post score: 2
|
Hi,
I am creating a catkin package that adds some .c and .cpp files to the library. The .c files come from some very old project and it can be built without ROS. When I try to catkinize the project, I dont want to modify these .c files but just add them to catkin library, together with other .cpp files.
However, some of the .c files have #include <iostream> and #include <iomanip>, and when I do catkin_make, it says iostream/iomanip: no such file.
Is there a way of using catkin_make without changing any of the original .c files?
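One direction I found that might avoid editing the files, if it is valid, is forcing CMake to compile them as C++ (the file names below are placeholders):

```cmake
# Treat the legacy .c sources as C++ so their <iostream>/<iomanip>
# includes resolve, without modifying the files themselves.
set_source_files_properties(src/legacy_a.c src/legacy_b.c
                            PROPERTIES LANGUAGE CXX)

add_library(${PROJECT_NAME}
  src/legacy_a.c
  src/legacy_b.c
  src/new_code.cpp
)
```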
Thanks,
Yiming
Originally posted by Yiming.Yang on ROS Answers with karma: 1 on 2014-09-11
Post score: 0
|
I used the source-code method to build the "Quadrotor outdoor flight demo" because the ros-hydro-hector-quadrotor-demo package cannot be located. However, I cannot proceed with the installation, as I always encounter the following error message when running "catkin_make". The OS I am using is Ubuntu 14.04 with ROS Indigo.
viki@c3po:~/hector_quadrotor_tutorial$ catkin_make
........
........
-- Eigen found (include: /usr/include/eigen3)
-- +++ processing catkin package: 'hector_geotiff_plugins'
-- ==> add_subdirectory(hector_slam/hector_geotiff_plugins)
-- Using these message generators: gencpp;genlisp;genpy
-- +++ processing catkin package: 'hector_marker_drawing'
-- ==> add_subdirectory(hector_slam/hector_marker_drawing)
-- Eigen found (include: /usr/include/eigen3)
-- +++ processing catkin package: 'hector_quadrotor_controller'
-- ==> add_subdirectory(hector_quadrotor/hector_quadrotor_controller)
-- Using these message generators: gencpp;genlisp;genpy
CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:75 (find_package):
Could not find a package configuration file provided by
"hardware_interface" with any of the following names:
hardware_interfaceConfig.cmake
hardware_interface-config.cmake
Add the installation prefix of "hardware_interface" to CMAKE_PREFIX_PATH or
set "hardware_interface_DIR" to a directory containing one of the above
files. If "hardware_interface" provides a separate development package or
SDK, be sure it has been installed.
Call Stack (most recent call first):
hector_quadrotor/hector_quadrotor_controller/CMakeLists.txt:7 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/viki/hector_quadrotor_tutorial/build/CMakeFiles/CMakeOutput.log".
See also "/home/viki/hector_quadrotor_tutorial/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
...
Could you help fix this? Thanks.
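Edit: from the error it looks like hardware_interface comes from the ros_control stack; perhaps installing it resolves the find_package failure (an untested guess):

```shell
sudo apt-get install ros-indigo-ros-control ros-indigo-ros-controllers
```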
Originally posted by UAS on ROS Answers with karma: 1 on 2014-09-11
Post score: 0
|
While installing the USARSim package, ROS always gives the error "Unable to locate package ros-indigo-USARSim". I am using the following command to install the USARSim package: "$ sudo apt-get install ros-indigo-USARSim".
Originally posted by Aarif on ROS Answers with karma: 351 on 2014-09-11
Post score: 0
|
I'm brand new to ROS so bear with me:
I was hoping for some advice on how to approach a project I am attempting. I have logic control written in Java (as an Android application) that I want to implement on a Dr. Robot Jaguar 4x4 robot (compatible with ROS Fuerte). My plan was to create a simple Android app that could act as a passthrough, converting Java commands into ROS instructions. However, as far as I can tell, rosjava was designed to use catkin, and I can't figure out how to build the Jaguar drivers with catkin. I really appreciate any suggestions.
Originally posted by navy_robots on ROS Answers with karma: 1 on 2014-09-11
Post score: 0
|
After installing jsk_common, which included the install of jsk_recognition and jsk_libfreenect2, I did a source ~/ros/hydro/devel/setup.bash and compiled by executing catkin_make in the catkin work folder ~/ros/hydro. After considerable script activity, the following error message appeared.
[roseus.cmake] compile installed package sound_play
-- Using these message generators: gencpp;geneus;genlisp;genpy
-- +++ processing catkin package: 'jsk_2014_06_pr2_drcbox'
-- ==> add_subdirectory(jsk-ros-pkg/jsk_demos/jsk_2014_06_pr2_drcbox)
-- +++ processing catkin package: 'jsk_rosjava_messages'
-- ==> add_subdirectory(jsk-ros-pkg/jsk_smart_apps/jsk_rosjava_messages)
-- Configuring incomplete, errors occurred!
Invoking "cmake" failed.
No other error messages. Is there a log file to review for more detail?
openjdk-6-jre and openjdk-7-jre are installed
Originally posted by RobotRoss on ROS Answers with karma: 141 on 2014-09-11
Post score: 0
|
EDIT: I still haven't found a definitive answer (so I am not posting this as an answer), but I got frustrated, re-installed ROS, and took great care to update my ROS_PACKAGE_PATH environment variable as instructed here. I would still like to see a complete explanation of the solution if possible, but now I can let the community know that this question is no longer urgent for me.
Hi. I am trying to complete the actionlib tutorials. This issue is mostly unrelated to actionlib. Can someone please help me resolve this problem?
I am trying to create a new catkin package for the tutorial. My shell commands go like this:
cd
source /opt/ros/hydro/setup.bash
mkdir -p ROS_TUTORIALS/actionlib_ws/src
cd ROS_TUTORIALS/actionlib_ws/src
catkin_init_workspace
cd ..
catkin_make
source devel/setup.bash
cd src
catkin_create_pkg learning_actionlib actionlib message_generation roscpp rospy std_msgs actionlib_msgs
# I add some files and change CMakeLists.txt as specified here:
# http://wiki.ros.org/actionlib_tutorials/Tutorials/SimpleActionServer%28ExecuteCallbackMethod%29
cd ../..
catkin_make
The error I get is this:
CMake Error: File /home/<username>/ROS_TUTORIALS/actionlib_ws/src/package.xml does not exist.
CMake Error at /opt/ros/hydro/share/catkin/cmake/stamp.cmake:10 (configure_file):
configure_file Problem configuring file
Call Stack (most recent call first):
/opt/ros/hydro/share/catkin/cmake/catkin_package_xml.cmake:61 (stamp)
/opt/ros/hydro/share/catkin/cmake/catkin_package_xml.cmake:39 (_catkin_package_xml)
/opt/ros/hydro/share/catkin/cmake/catkin_package.cmake:95 (catkin_package_xml)
CMakeLists.txt:7 (catkin_package)
This is preceded by some success/status messages, and further error messages follow (they are dependent on the above error).
I have reviewed the tutorials, and repeated them several times, and my directory appears to have the correct structure. For clarification:
actionlib_ws/ # the catkin workspace
src/
CMakeLists.txt
learning_actionlib/
package.xml
CMakeLists.txt
...
src/
...
devel/
...
build/
...
Sorry if I wrote too much, I wanted to help whoever tries to answer my question as much as possible. Thanks in advance for any help!
Originally posted by T-R0D on ROS Answers with karma: 13 on 2014-09-11
Post score: 0
Original comments
Comment by BennyRe on 2014-09-12:
Thumbs up for providing that much information. Nowadays many new users simply say they have a problem and that's it.
|
I am using a Kinect in ROS, working with Indigo on Ubuntu 14.04. My question is: how can I calculate the depth of a point in the image? My task is to locate a point in the image using the mouse and have it tell me the depth of that point. I have already installed OpenNI and PCL. I am also confused about which topic I should subscribe to in order to calculate depth.
Please help me step by step.
language c++
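For reference, assuming the driver's depth image (e.g. /camera/depth/image_raw, 16UC1 with millimeter values) is what gets subscribed to, the lookup for a clicked pixel is just an index computation. A minimal sketch of that arithmetic (not verified against the driver):

```cpp
#include <cstdint>

// Depth in meters at pixel (u, v) of a 16UC1 depth image
// (row-major, values in millimeters, 0 = no reading).
inline float depthAtPixel(const uint16_t* data, int width, int u, int v)
{
  return data[v * width + u] / 1000.0f;
}
```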
Originally posted by ASHISH CHHONKAR on ROS Answers with karma: 41 on 2014-09-11
Post score: 2
|
I can't publish an Int8. Can you help me?
#include "ros/ros.h"
#include "std_msgs/Int8.h"
#include <sstream>
int main(int argc, char **argv)
{
ros::init(argc, argv, "talker");
ros::NodeHandle n;
ros::Publisher chatter_pub = n.advertise<std_msgs::Int8>("mytopic", 1000);
ros::Rate loop_rate(10);
int8_t count = 0;
while (ros::ok())
{
std_msgs::Int8 msg;
msg.data = count++;
ROS_INFO("%s", msg.data);
chatter_pub.publish(msg);
ros::spinOnce();
loop_rate.sleep();
}
return 0;
}
display : NULL
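The NULL comes from the format specifier: %s expects a C string, but msg.data is an int8_t, so it should be printed as an integer, e.g. ROS_INFO("%d", msg.data). A minimal sketch of the formatting:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// %s treats the value as a pointer to a string; an int8_t
// must be printed with %d (it promotes to int in varargs).
std::string formatCount(int8_t count)
{
  char buf[8];
  std::snprintf(buf, sizeof(buf), "%d", count);
  return std::string(buf);
}
```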
Originally posted by turtle on ROS Answers with karma: 17 on 2014-09-12
Post score: 0
|
Hi, all. I have a catkin workspace in Eclipse and I am writing a node. I need to import a Python file from another catkin workspace which contains that file, but when I try to import it, I get an error that there is no module with that name.
The exact error is like this -
File "src/Ish_odom_out_and_back.py", line 9, in
from rbx1_nav.transform_utils import quat_to_angle, normalize_angle
ImportError: No module named rbx1_nav.transform_utils
For example,
My Catkin Workspace is like this - catkin_ws_ishan/src/ishan/ishan_nav/src/move.py. ishan_nav is the package name in my workspace.
The file that I want to import is in - catkin_ws/src/rbx1/rbx1_nav/src/rbx1_nav/transform_utils.py. Here, rbx1_nav is the package. The file That I want to import is transform_utils.py
So, how do I import transform_utils.py into my program in Eclipse, which is in another catkin workspace?
Can anybody please help me.
Thanks.
Originally posted by ish45 on ROS Answers with karma: 151 on 2014-09-12
Post score: 0
|
I have the following structure.
my_first_package exposes its headers:
catkin_package(
INCLUDE_DIRS include
# LIBRARIES
# CATKIN_DEPENDS
# DEPENDS systemlib
)
my_second_package depends on my_first_package. In my_second_package's CMakeLists.txt my_first_package is stated as CATKIN_DEPENDS:
catkin_package(
INCLUDE_DIRS include
# LIBRARIES
CATKIN_DEPENDS my_first_package
# DEPENDS systemlib
)
my_third_package depends on my_second_package. It is able to "see" headers and other exports of my_second_package and my_first_package. However, in my_third_package's CMakeLists.txt, my_second_package is NOT stated as CATKIN_DEPENDS:
catkin_package(
INCLUDE_DIRS include
# LIBRARIES
# CATKIN_DEPENDS #NOTE: not set
# DEPENDS systemlib
)
Now: Will my_fourth_package which depends on my_third_package be able to "see" headers and stuff of my_first_package?
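My understanding (unverified) is that the export only propagates reliably when each package lists its catkin dependencies, i.e. my_third_package's export would look like:

```cmake
catkin_package(
  INCLUDE_DIRS include
  CATKIN_DEPENDS my_second_package
)
```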
Originally posted by Wolf on ROS Answers with karma: 7555 on 2014-09-12
Post score: 1
|
Hi all,
Do you know if there is a way to visualize the inertia of the different links of a urdf? Something like RViz or gazebo displaying the inertia ellipsoid centered on the center of mass...
I could not find anything relevant. Any idea?
Thanks,
Antoine
Originally posted by arennuit on ROS Answers with karma: 955 on 2014-09-12
Post score: 1
|
I am trying to run camera calibration on my Odroid U3 running Lubuntu 13.04. I ran the following command :
rosrun uvc_camera uvc_camera_node width:=640 height:=480 frame:=camera device:=/dev/video0
after which I got an error:
terminate called after throwing an instance of 'ros::InvalidNameException'
what(): Character [4] is not valid as the first character in Graph Resource Name [480]. Valid characters are a-z, A-Z, / and in some cases ~.
Aborted (core dumped)
Running the same command without specifying height and width, I get the following error:
/opt/ros/hydro/lib/uvc_camera/uvc_camera_node: symbol lookup error: /opt/ros/hydro/lib/libimage_transport.so: undefined symbol: _ZN3ros7package10getPluginsERKSsS2_RSt6vectorISsSaISsEEb
How do I overcome this problem?
I have followed the earlier tutorials successfully and have sourced the required files.
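Edit: I wonder whether the arguments need the private-parameter underscore prefix — without it, width:=640 is parsed as a name remap, and "480" is not a valid graph resource name. A variant to try:

```shell
rosrun uvc_camera uvc_camera_node _width:=640 _height:=480 _frame:=camera _device:=/dev/video0
```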
Originally posted by rookie on ROS Answers with karma: 15 on 2014-09-12
Post score: 1
|
Hi all, I am using a TurtleBot 2 (i.e. Kobuki) to test my local planner. The global planner being used is the default of the move_base node (i.e. navfn). But some strange path planning happens due to an incorrect costmap (global costmap?).
Please see the following two images. The black and grey areas have high cost in the global costmap.
The red ellipses mark areas that should be white, because there is nothing in them. Earlier, some moving objects such as people were passing through these areas, but when the turtlebot is ordered to go from the start points (blue squares) to the goal points (orange squares), there is nothing there. That is why the planned paths (green lines) go around these areas as if obstacles existed.
Here I give the yaml files that the move_base node is using.
local_costmap_params.yaml:
local_costmap:
global_frame: odom #was /odom
robot_base_frame: base_link #was /base_footprint
update_frequency: 5.0
publish_frequency: 5.0
static_map: false
rolling_window: true
width: 4.0
height: 4.0
resolution: 0.1
transform_tolerance: 0.5
global_costmap_params.yaml:
global_costmap:
global_frame: /map
robot_base_frame: base_link # was /base_footprint
update_frequency: 5.0
publish_frequency: 1.0
static_map: true
transform_tolerance: 0.5
cost_scaling_factor: 10.0
lethal_cost_threshold: 100
costmap_common_params:
max_obstacle_height: 0.60 # assume something like an arm is mounted on top of the robot
obstacle_range: 2.5
raytrace_range: 3.0
robot_radius: 0.18
inflation_radius: 0.50
observation_sources: scan bump
scan: {data_type: LaserScan, topic: /scan, marking: true, clearing: false}
bump: {data_type: PointCloud2, topic: mobile_base/sensors/bumper_pointcloud, marking: true, clearing: false}
How can I change these grey areas into white areas in time?
One more thing: I think the grey areas attached to static objects (the bold black lines from the wall and desk) in my map are thicker than I requested, like the grey area around the desk. How can I make them thinner? Thank you
Edit: one picture is replaced.
Dear @Fernando Herrero, thank you for your attention. So I should reduce the inflation radius to 0.20, i.e. 0.20 m.
However, please note the second picture. The white lines are the laser scan, so the area in the red ellipse is scanned but still grey. Why? Thank you!
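Edit 2: I also notice that in costmap_common_params the scan source has clearing: false, which might explain why scanned free space never clears the grey cells. A variant to try (just a guess):

```yaml
observation_sources: scan bump
scan: {data_type: LaserScan, topic: /scan, marking: true, clearing: true}
bump: {data_type: PointCloud2, topic: mobile_base/sensors/bumper_pointcloud, marking: true, clearing: false}
```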
Originally posted by scopus on ROS Answers with karma: 279 on 2014-09-12
Post score: 0
Original comments
Comment by fherrero on 2014-09-12:
The global costmap is updated with the scan if it has an obstacle layer.
If these areas are result of mobile obstacles, they should be cleared by the scan when you go there again and it's free. The thick areas are result of the obstacle inflation, to prevent the robot to get too close to obstacles
Comment by scopus on 2014-09-12:
However, please note the second picture. The white lines are the laser scan, so the area in red ellipse is scanned but still in grey. Why? Thank you!
|
Hi,
I have a kinect attached to an Intel J1900 based machine and the OpenNI examples run fine including the hand tracking example. Using OpenNI and SensorKinect. (https://github.com/OpenNI/OpenNI.git and https://github.com/avin2/SensorKinect.git both on unstable branch). This is an Ubuntu 14.04.1 installation with ROS indigo.
I have rgbdslam installed via catkin as the web page suggests. After some compile and linking glitches I got a binary. However, trying to run:
$ roslaunch rgbdslam openni+rgbdslam.launch
results in the rgbdslam-23 process segfaulting, and thus the launch as a whole entering the fail state.
process[rgbdslam-23]: started with pid [8189]
================================================================================
REQUIRED process [rgbdslam-23] has died!
process has died [pid 8189, exit code -11, cmd /home/monkeyiq/catkin_ws/devel/lib/rgbdslam/rgbdslam __name:=rgbdslam __log:=/home/monkeyiq/.ros/log/7f7c8758-3a90-11e4-8b1b-d050992ba7ab/rgbdslam-23.log].
log file: /home/monkeyiq/.ros/log/7f7c8758-3a90-11e4-8b1b-d050992ba7ab/rgbdslam-23*.log
Initiating shutdown!
Unfortunately the Logs do not seem to shine light on any specific cause.
I'm not sure if it's a good idea to try to gdb the rgbdslam executable directly, but this is what I get when it segfaults:
~/catkin_ws/src/rgbdslam_v2/launch$ gdb /home/monkeyiq/catkin_ws/devel/lib/rgbdslam/rgbdslam
...
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff7582a8c in boost::math::lanczos::lanczos_initializer<boost::math::lanczos::lanczos17m64, long double>::init::init() ()
from /usr/lib/libpcl_io.so.1.7
(gdb) bt
#0 0x00007ffff7582a8c in boost::math::lanczos::lanczos_initializer<boost::math::lanczos::lanczos17m64, long double>::init::init() ()
from /usr/lib/libpcl_io.so.1.7
#1 0x00007ffff750d186 in ?? () from /usr/lib/libpcl_io.so.1.7
#2 0x00007ffff7dea13a in call_init (l=<optimised out>, argc=argc@entry=1, argv=argv@entry=0x7fffffffe138, env=env@entry=0x7fffffffe148) at dl-init.c:78
#3 0x00007ffff7dea223 in call_init (env=<optimised out>, argv=<optimised out>, argc=<optimised out>, l=<optimised out>) at dl-init.c:36
#4 _dl_init (main_map=0x7ffff7ffe1c8, argc=1, argv=0x7fffffffe138, env=0x7fffffffe148) at dl-init.c:126
#5 0x00007ffff7ddb30a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
#6 0x0000000000000001 in ?? ()
#7 0x00007fffffffe3d5 in ?? ()
#8 0x0000000000000000 in ?? ()
(gdb) q
Any thoughts or recommendations are greatly appreciated.
Originally posted by monkeyiq on ROS Answers with karma: 46 on 2014-09-12
Post score: 0
Original comments
Comment by monkeyiq on 2014-09-14:
This appears to be a clash with boost and your selected c++ standard during various compiles. Removing the -std=c++0x from catkin_ws/src/rgbdslam_v2/CMakeLists.txt no longer crashes in the same way. Shortly I will verify that the DSLAM as a whole is doing something productive now.
Comment by Noahsark on 2014-10-12:
I have a similar problem. May I ask how did you compile without -std=c++0x ? it seems some components are forced to be compiled under C++11.
|
Hi,
I am trying to transform a geometry_msgs::PoseStamped from one frame to another. I am using tf::TransformListener::transformPose; here is my code snippet.
tf::TransformListener listener;
geometry_msgs::PoseStamped pose_map;
try{
listener.transformPose("map",pose_world,pose_map); // pose_world is in world frame
}
catch( tf::TransformException ex)
{
ROS_ERROR("transfrom exception : %s",ex.what());
}
I am getting the exception below.
[ERROR] [1410527449.677789054, 1410185755.142016150]: transfrom exception : "map" passed to lookupTransform argument target_frame does not exist.
But I can see the tf between /world and /map from rosrun tf view_frames and also from rosrun tf tf_echo world map
Edit:
After adding waitForTransform I am getting the exception below. But the /map to /world transform is published at 25 Hz from a launch file. I am using a bag file for the tf data, with --clock and use_sim_time set to true.
[ERROR] [1410539110.856303453, 1410185780.612246601]: transfrompose exception : Lookup would require extrapolation into the past. Requested time 1410185779.600078575 but the earliest data is at time 1410185779.862480216, when looking up transform from frame [world] to frame [map]
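For reference, the wait I added is roughly this (a sketch of my snippet above with the wait in front):

```cpp
try {
  listener.waitForTransform("map", pose_world.header.frame_id,
                            pose_world.header.stamp, ros::Duration(1.0));
  listener.transformPose("map", pose_world, pose_map);
}
catch (tf::TransformException& ex) {
  ROS_ERROR("transform exception : %s", ex.what());
}
```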
Thank you.
Originally posted by bvbdort on ROS Answers with karma: 3034 on 2014-09-12
Post score: 6
Original comments
Comment by Tom Moore on 2014-09-13:
You're running roscore first, then setting use_sim_time, then starting your launch file, then running rosbag play, correct?
Comment by bvbdort on 2014-09-13:
i put use_sim_time param in launch file as first line. First launch file then rosbag play
Comment by Tom Moore on 2014-09-13:
Can you post the relevant parts of your launch file?
Comment by bvbdort on 2014-09-13:
here is launch file
Comment by Tom Moore on 2014-09-13:
And you're certain that the original data (before you bagged it) didn't also have this problem? Try playing back the bag file very slowly (with -r) and watch the message time stamps. If the tf ones seem out of sync, then perhaps your original data (i.e., on the live robot) had this problem as well.
Comment by bvbdort on 2014-09-13:
I didnt try with live data so far, but in rosbag data I can see all the required tf from rostopic echo.
Comment by Tom Moore on 2014-09-13:
But the bag data came from somewhere. The question isn't whether the tranforms exist. It's whether they have the correct time stamps. If there was something wrong when you recorded the data, it won't get better when you replay it.
Comment by tfoote on 2014-09-13:
Does this error happen once or a few times at startup or does it continuously happen?
Comment by bvbdort on 2014-09-14:
@Tom Moore : When i just do roscore + use_sim_time true +rosbag play --clock i can see all the tf
@tfoote : it is happening continuously.
Comment by Tom Moore on 2014-09-14:
Can you share the bag somewhere?
Comment by bvbdort on 2014-09-14:
The world to map tf comes from the launch file. Here is the bag file. zft_world and world in my description are the same.
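To make the error message concrete, here is a toy illustration in plain Python (an illustration only, not the actual tf implementation): tf buffers transforms over a sliding time window and rejects any lookup whose requested stamp falls outside the buffered range, which is exactly the "extrapolation into the past" case in the error above.

```python
# Toy model of tf's extrapolation check: a lookup is only valid if the
# requested stamp lies within the buffered time range.
class ToyTfBuffer:
    def __init__(self):
        self.stamps = []  # stamps of buffered transforms, oldest first

    def insert(self, stamp):
        self.stamps.append(stamp)

    def can_lookup(self, stamp):
        if not self.stamps:
            return False
        return self.stamps[0] <= stamp <= self.stamps[-1]

buf = ToyTfBuffer()
buf.insert(1410185779.862480216)  # earliest data (from the error message)
buf.insert(1410185780.612246601)
# The requested time predates the earliest buffered transform ->
# "Lookup would require extrapolation into the past"
print(buf.can_lookup(1410185779.600078575))  # -> False
print(buf.can_lookup(1410185780.0))          # -> True
```

In real tf the buffer is filled by the /tf topic, so a stamp mismatch between replayed sensor data and freshly stamped transforms produces exactly this failure.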
|
Hello,
I am trying to build a tf tree from a rosbag file containing raw topics like /imu/data and /scan, without a tf topic. At the time we recorded the bag file, we didn't know the transforms yet, and now we want to build the appropriate tf tree.
We are publishing static transforms with static_transform_publisher nodes via a launch file:
<node pkg="tf" type="static_transform_publisher" name="base_link_imu" args="0.5 0 0.1 0 1.57 0 /base_link /imu 10"/>
<node pkg="tf" type="static_transform_publisher" name="base_footprint" args="0 0 0.2 0 0 0 /base_footprint /base_link 10"/>
<node pkg="tf" type="static_transform_publisher" name="base_link_laser" args="0.5 0 0 0 0 0 /base_link /laser 10"/>
In imu msg topic, the header is the follow:
header:
seq: 49958
stamp:
secs: 1410520339
nsecs: 565862711
frame_id: imu
Frame_id : imu.
The same for the laser data:
header:
seq: 1831
stamp:
secs: 1410520215
nsecs: 426117012
frame_id: laser
angle_min: -0.872664630413
angle_max: 0.872664630413
angle_increment: 0.00436332309619
time_increment: 3.70370362361e-05
scan_time: 0.0533333346248
range_min: 0.0
range_max: 81.0
Notice that the published frame_ids are "imu" and "laser".
Now we run the launch file generating the tree but when we want to see in rviz, appear the next msg:
In imu frame:
Transform [sender=unknown_publisher]
Message removed because it is too old (frame=[imu], stamp=[1410520316.775747058])
In laser frame.
Transform [sender=unknown_publisher]
Message removed because it is too old (frame=[laser], stamp=[1410520194.940736519])
The problem is that the sensor messages have different timestamps than the tf publisher.
What is the best strategy to publish new tf with the old timestamps, so the raw data from the rosbag file can be used?
The aim is to watch the robot pose and the map/odometry using just the rosbag sensors. I have read that hector_mapping can do this, but it needs tf in order to produce correct odometry.
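One approach worth noting (a sketch; the bag filename is hypothetical): if the bag is replayed with --clock and use_sim_time is set, the static_transform_publisher nodes stamp their transforms from the bag's clock, so the timestamps line up with the recorded sensor messages:

```xml
<launch>
  <!-- With use_sim_time, nodes stamp messages using the /clock topic
       published by "rosbag play --clock recorded.bag" -->
  <param name="use_sim_time" value="true"/>
  <node pkg="tf" type="static_transform_publisher" name="base_link_imu"
        args="0.5 0 0.1 0 1.57 0 /base_link /imu 10"/>
  <node pkg="tf" type="static_transform_publisher" name="base_footprint"
        args="0 0 0.2 0 0 0 /base_footprint /base_link 10"/>
  <node pkg="tf" type="static_transform_publisher" name="base_link_laser"
        args="0.5 0 0 0 0 0 /base_link /laser 10"/>
</launch>
```

Start roscore, launch this file, then run `rosbag play --clock recorded.bag`; static_transform_publisher re-stamps its transforms continuously, so they should fall inside the tf buffer for the replayed data.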
Originally posted by pmarinplaza on ROS Answers with karma: 330 on 2014-09-12
Post score: 2
|
I have an Intel Atom Tunnel Creek processor (E640) for my project and an Xtion Pro Live sensor, and I am planning to implement SLAM. Will the Intel Atom handle the extensive processing?
Originally posted by Shubham Garg on ROS Answers with karma: 13 on 2014-09-12
Post score: 1
|
I am following this tutorial:
http://wiki.ros.org/lse_roomba_toolbox/Tutorials/Simulating%20a%20Roomba%20on%20Stage
When I enter this command,
rosrun stage stageros roomba_isr_floor0.world
it gives this error
[rosrun] Couldn't find executable named stageros below /opt/ros/hydro/share/stage
Originally posted by tonyParker on ROS Answers with karma: 377 on 2014-09-12
Post score: 2
Original comments
Comment by Akali on 2015-01-12:
I had the same problem. Did you solve that error?
Comment by marcobecerrap on 2015-07-29:
Does anyone know how to solve this? I'm having the same error here... I'm trying to run Stage with a simple command:
$ rosrun stage_ros stageros ./map01_Room.world
But I get this error:
[rosrun] Couldn't find executable named stageros below /opt/ros/indigo/share/stage_ros
|
Hi,
I have some problems using the command "rosrun uwsim uwsim" to run UWSim; the following errors show up:
Jack@Jack:~$ rosrun uwsim uwsim
Starting UWSim...
. Setting localized world: 6.1e-05s
Loading URDF robot...
· robot/GIRONA500/g500_March11.osg: 2.67438s
· robot/ARM5E/ARM5E_part0.osg: 0.082435s
· robot/ARM5E/ARM5E_part1.osg: 0.119529s
· robot/ARM5E/ARM5E_part2.osg: 0.127205s
· robot/ARM5E/ARM5E_part3.osg: 0.439133s
· robot/ARM5E/ARM5E_part4_base.osg: 0.420119s
· robot/ARM5E/ARM5E_part4_jaw1.osg: 0.065359s
· robot/ARM5E/ARM5E_part4_jaw2.osg: 0.058875s
· Linking links.../opt/ros/groovy/lib/uwsim/uwsim: line 20: 25729 Segmentation fault (core dumped) rosrun uwsim uwsim_binary --dataPath ~/.uwsim/data $@
Thank you for your help!
Originally posted by Cong Wang on ROS Answers with karma: 1 on 2014-09-12
Post score: 0
|
Hello,
I was wondering if there is a database of different environment maps for testing robot navigation in ROS. In particular, I'm looking for an open space environment map without narrow corridors and many obstacles.
Thanks.
Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-09-12
Post score: 1
Original comments
Comment by dornhege on 2014-09-13:
What format are you looking for? Just a map, i.e. yaml/ppm?
Comment by ROSCMBOT on 2014-09-13:
Yes yaml/pgm files would be fine
|
Hi All! I am just starting to learn ROS.
So I was following the tutorial on understanding TOPICS (http://wiki.ros.org/ROS/Tutorials/UnderstandingTopics), where I was supposed to run $ rosrun rqt_plot rqt_plot and enter /turtle1/pose/x in GUI, but I keep getting this message in the terminal:
Traceback (most recent call last):
  File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_plot/plot_widget.py", line 204, in on_topic_edit_textChanged
    plottable, message = is_plottable(topic_name)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_plot/plot_widget.py", line 104, in is_plottable
    fields, message = get_plot_fields(topic_name)
  File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_plot/plot_widget.py", line 75, in get_plot_fields
    field_class = topic_helpers.get_type_class(slot_type)
AttributeError: 'module' object has no attribute 'get_type_class'
I ran roswtf and got this :
Loaded plugin tf.tfwtf
No package or stack in context
================================================================================
Static checks summary:
Found 1 error(s).
ERROR Not all paths in PYTHONPATH [/home/omar/catkin_ws/devel/lib/python2.7/dist-packages:/opt/ros/hydro/lib/python2.7/dist-packages] point to a directory:
* /home/omar/catkin_ws/devel/lib/python2.7/dist-packages
================================================================================
Beginning tests of your ROS graph. These may take awhile...
analyzing graph...
... done analyzing graph
running graph rules...
... done running graph rules
Online checks summary:
No errors or warnings
I can't get any plots described in tutorials, so I was hoping you could help.
Originally posted by kost9 on ROS Answers with karma: 97 on 2014-09-13
Post score: 0
|
I'm having a hard time debugging my Turtlebot because I don't know what a functioning Turtlebot should look like. Now that I've read the source code for turtlebot_calibration I have some better understanding of what to expect, but I'm still not sure if my bot is mechanically broken.
Is it normal for Tbot to stop for long periods of time during the calibration routine?
Is it normal for calibration to take 7-10 minutes, or more?
What are some examples of reasonable values for gyro_scale_correction and odom_angular_scale_correction (for a gyro_measurement_range of 150)? Is a gyro value of 0.3 completely unreasonable? What about 3.4? Should it be much closer to the default?
What is a normal error range for odom and IMU? Even on a good run I might see an odom error of >10% and IMU error >40%.
Is it normal for the bot to drift out of place during calibration? (i.e. it doesn't turn in place perfectly, but drifts a few centimeters over the course of the routine; one wheel must be spinning faster than the other, I guess)
Also, what kind of error is acceptable overall (from the EKF's odom_combined) to be able to run gmapping? If I rotate 360° while watching the output of rosrun tf tf_echo odom base_footprint, should the yaw delta be within 1-2°? Or can I accept errors of more like 10-20° and still be able to run SLAM?
Here's a video showing my bot doing calibration (I know it over-turns right now so the numbers aren't correct, I just want to show you how it stops for long periods of time making weird noises, e.g. t=0:50): https://www.youtube.com/watch?v=8X8SmUVgJd0
I appreciate any clues you can give, even if you can't answer all of the questions above. We can collaboratively put together an answer here.
Edit:
Seriously? No one is going to even give me a hint of whether they think my video looks normal or not??
Originally posted by Neil Traft on ROS Answers with karma: 205 on 2014-09-14
Post score: 2
Original comments
Comment by Neil Traft on 2014-09-14:
If you are following this question then you might also want to upvote it.
Comment by charkoteow on 2014-09-18:
Not the answer you're looking for but you can always calibrate the Turtlebot manually. Oh and it took me around 5 minutes to auto calibrate mine and the results are bad.
|
I want to simulate a robot in Gazebo and Stage to understand both simulators, but most of the tutorials/packages I found are for Groovy, Fuerte, Electric or Diamondback. Can I use these?
Or should I install multiple ROS distributions to test them?
Originally posted by tonyParker on ROS Answers with karma: 377 on 2014-09-14
Post score: 0
|
Hi,
I have a point cloud which was obtained from laser scanning. I need to extract depth images from this point cloud. I want to assume random camera positions, set up some camera parameters, and get images corresponding to these positions. I found some documentation, but it isn't what I need: http://wiki.ros.org/pcl_ros/Tutorials/CloudToImage.
There was a discussion which looked similar to mine here: http://www.pcl-users.org/get-a-2d-depth-image-from-the-pointcloud-td2795083.html but the links posted there are no longer valid.
Can anybody help me by telling me how I should get started with this, and whether it is possible?
Thanks
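One possible starting point, independent of any specific library (a sketch; the function name and intrinsic values are made up for illustration): transform the cloud into an assumed camera frame, project it through a pinhole model, and keep the nearest depth per pixel.

```python
import numpy as np

def depth_image_from_cloud(points, fx, fy, cx, cy, width, height):
    """Project Nx3 points (already in the camera frame, z forward)
    into a depth image using a pinhole model; keep nearest depth per pixel."""
    depth = np.full((height, width), np.inf)
    pts = points[points[:, 2] > 0]            # keep points in front of the camera
    u = np.round(fx * pts[:, 0] / pts[:, 2] + cx).astype(int)
    v = np.round(fy * pts[:, 1] / pts[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[ok], v[ok], pts[ok][:, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)  # z-buffer: nearest point wins
    return depth

# A single point 2 m straight ahead lands at the principal point
cloud = np.array([[0.0, 0.0, 2.0]])
img = depth_image_from_cloud(cloud, fx=525, fy=525, cx=320, cy=240, width=640, height=480)
print(img[240, 320])  # -> 2.0
```

For a laser-scanned cloud you would first apply the assumed camera pose as a rigid transform before projecting; pixels no point hits stay at infinity (or can be set to 0/NaN to match a depth-image convention).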
Originally posted by Banu Muthukumar on ROS Answers with karma: 1 on 2014-09-14
Post score: 0
|
Hello,
I've set up the navigation stack on my robot. The issue is that move_base publishes velocity commands on cmd_vel, whose message is of type Twist, but my robot subscribes to a topic called base_velocity, whose message is of type TwistStamped. So, as I understand it, remapping alone won't do the job. What can I do to remap topics of different message types?
Thanks
Answer: So I wrote the node below that converts cmd_vel messages to base_velocity,
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist, TwistStamped

# Create the publisher once at startup; creating it inside the callback
# would re-register it on every message and drop the first messages.
baseVelocityPub = None

def callback(cmdVelocity):
    baseVelocity = TwistStamped()
    baseVelocity.twist = cmdVelocity
    baseVelocity.header.stamp = rospy.get_rostime()
    baseVelocityPub.publish(baseVelocity)

def cmd_vel_listener():
    rospy.Subscriber("cmd_vel", Twist, callback)
    rospy.spin()

if __name__ == '__main__':
    rospy.init_node('cmd_vel_listener', anonymous=True)
    baseVelocityPub = rospy.Publisher('base_velocity', TwistStamped, queue_size=10)
    cmd_vel_listener()
Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-09-14
Post score: 0
|
Dear ROS users,
trying to use
rosrun tf view_frames
I get
Traceback (most recent call last):
File "/opt/ros/fuerte/stacks/geometry/tf/scripts/view_frames", line 43, in <module>
import roslib; roslib.load_manifest(PKG)
File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/__init__.py", line 50, in <module>
from roslib.launcher import load_manifest
File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/launcher.py", line 42, in <module>
import rospkg
ImportError: No module named rospkg
...but I cannot find any errors in my environment variables.. :(
Originally posted by Rahndall on ROS Answers with karma: 133 on 2014-09-15
Post score: 0
|
Hey,
I have a node that publishes sensor_msgs/PointCloud data and I want to subscribe to it.
My Subscriber looks like this:
distance_sub = nh.subscribe<pcl::PointCloud<pcl::PointXYZ>>("/distance_sensors", 1, &SafetyBelt::distanceCallback, this);
I included the point_cloud.h and the point_types.h headers, but I am not able to compile this code and I have no idea what the error means:
/opt/ros/indigo/include/ros/subscribe_options.h:111:54: required from ‘void ros::SubscribeOptions::init(const string&, uint32_t, const boost::function<void(const boost::shared_ptr<const M>&)>&, const boost::function<boost::shared_ptr<X>()>&) [with M = pcl::PointCloud<pcl::PointXYZ>; std::string = std::basic_string<char>; uint32_t = unsigned int]’
/opt/ros/indigo/include/ros/node_handle.h:443:5: required from ‘ros::Subscriber ros::NodeHandle::subscribe(const string&, uint32_t, void (T::*)(const boost::shared_ptr<const M>&), T*, const ros::TransportHints&) [with M = pcl::PointCloud<pcl::PointXYZ>; T = SafetyBelt; std::string = std::basic_string<char>; uint32_t = unsigned int]’
/home/daniel/Desktop/robotino/trunk2/trunk/robotino/robotino_teleop/src/SafetyBelt.cpp:16:121: required from here
/opt/ros/indigo/include/ros/message_traits.h:138:31: error: ‘__s_getDataType’ is not a member of ‘pcl::PointCloud<pcl::PointXYZ>’
return M::__s_getDataType().c_str();
^
In file included from /opt/ros/indigo/include/ros/publisher.h:34:0,
from /opt/ros/indigo/include/ros/node_handle.h:32,
from /opt/ros/indigo/include/ros/ros.h:45,
from /home/daniel/Desktop/robotino/trunk2/trunk/robotino/robotino_teleop/include/SafetyBelt.h:1,
from /home/daniel/Desktop/robotino/trunk2/trunk/robotino/robotino_teleop/src/SafetyBelt.cpp:1:
/opt/ros/indigo/include/ros/serialization.h: In instantiation of ‘static void ros::serialization::Serializer<T>::read(Stream&, typename boost::call_traits<T>::reference) [with Stream = ros::serialization::IStream; T = pcl::PointCloud<pcl::PointXYZ>; typename boost::call_traits<T>::reference = pcl::PointCloud<pcl::PointXYZ>&]’:
/opt/ros/indigo/include/ros/serialization.h:163:32: required from ‘void ros::serialization::deserialize(Stream&, T&) [with T = pcl::PointCloud<pcl::PointXYZ>; Stream = ros::serialization::IStream]’
/opt/ros/indigo/include/ros/subscription_callback_helper.h:136:34: required from ‘ros::VoidConstPtr ros::SubscriptionCallbackHelperT<P, Enabled>::deserialize(const ros::SubscriptionCallbackHelperDeserializeParams&) [with P = const boost::shared_ptr<const pcl::PointCloud<pcl::PointXYZ> >&; Enabled = void; ros::VoidConstPtr = boost::shared_ptr<const void>]’
/home/daniel/Desktop/robotino/trunk2/trunk/robotino/robotino_teleop/src/SafetyBelt.cpp:81:1: required from here
/opt/ros/indigo/include/ros/serialization.h:136:5: error: ‘class pcl::PointCloud<pcl::PointXYZ>’ has no member named ‘deserialize’
t.deserialize(stream.getData());
^
In file included from /opt/ros/indigo/include/ros/serialization.h:37:0,
from /opt/ros/indigo/include/ros/publisher.h:34,
from /opt/ros/indigo/include/ros/node_handle.h:32,
from /opt/ros/indigo/include/ros/ros.h:45,
from /home/daniel/Desktop/robotino/trunk2/trunk/robotino/robotino_teleop/include/SafetyBelt.h:1,
from /home/daniel/Desktop/robotino/trunk2/trunk/robotino/robotino_teleop/src/SafetyBelt.cpp:1:
/opt/ros/indigo/include/ros/message_traits.h: In static member function ‘static const char* ros::message_traits::MD5Sum<M>::value() [with M = pcl::PointCloud<pcl::PointXYZ>]’:
/opt/ros/indigo/include/ros/message_traits.h:122:3: warning: control reaches end of non-void function [-Wreturn-type]
}
^
/opt/ros/indigo/include/ros/message_traits.h: In static member function ‘static const char* ros::message_traits::DataType<M>::value() [with M = pcl::PointCloud<pcl::PointXYZ>]’:
/opt/ros/indigo/include/ros/message_traits.h:139:3: warning: control reaches end of non-void function [-Wreturn-type]
}
^
Can anyone tell me how to correctly subscribe to this topic?
Best regards
Daniel
Originally posted by Missing on ROS Answers with karma: 41 on 2014-09-15
Post score: 4
|
hi everyone
Is this line enough in order to use the tf package to set a fixed relation between the base_link and base_laser frames in my launch file?
<node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link base_laser 100" />
And do I need other nodes or scripts to set up the /map frame or others, especially when we want to use packages like gmapping or amcl??
thanks:))))
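For reference, a minimal sketch of how that line typically fits together with a mapping node (parameter values here are assumptions): gmapping publishes the map→odom transform itself, so only the odom→base_link transform must come from your odometry source.

```xml
<launch>
  <!-- Fixed base_link -> base_laser transform, as in the question -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster"
        args="0 0 0 0 0 0 base_link base_laser 100" />
  <!-- gmapping broadcasts map -> odom itself; odom -> base_link must be
       published by the robot driver / odometry node -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
    <param name="map_frame"  value="map"/>
  </node>
</launch>
```

amcl works the same way: it also publishes map→odom, but it additionally needs a map server providing the /map topic.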
Originally posted by mohammad on ROS Answers with karma: 75 on 2014-09-15
Post score: 1
|
Hi everybody,
I have a problem here. I am trying to control the turtlebot speed based on the IR sensor readings.
Unfortunately, the turtlebot only slows down when the object is at its far right side.
However, the IR sensors show all readings without any problem.
Here is my code:
#include <ros/ros.h>
#include <roomba_500_series/RoombaIR.h>
#include <geometry_msgs/Twist.h>
#include <ros/rate.h>
#include <iostream>
using namespace std;
ros::Publisher pub_vel;
geometry_msgs::Twist cmdvel;
int reading, condition;
double speed;
//define the irsensorCallback function
void irsensorCallback(const roomba_500_series::RoombaIR::ConstPtr& msg)
{
reading = msg->signal;
condition = msg->state;
cout<<"IR Signal = "<<reading<<endl;
cout<<"IR State = "<<condition<<endl;
if(condition!=0 || reading>=100)
{
cmdvel.linear.x=0.08;
cmdvel.angular.z=0.0;
}
else
{
cmdvel.linear.x=0.2;
cmdvel.angular.z=0.0;
}
pub_vel.publish(cmdvel);
}
//ROS node entry point
int main(int argc, char **argv)
{
ros::init(argc, argv, "turtlebot_test");
ros::NodeHandle n;
ros::Subscriber irsensorSubscriber = n.subscribe("/ir_bumper", 1000, irsensorCallback);
pub_vel = n.advertise<geometry_msgs::Twist>("cmd_vel", 1000);
cmdvel.linear.x = 0.0;
cmdvel.angular.z = 0.0;
ros::spin();
return 0;
}
Hopefully somebody could advise me on this.
Thank you.
Originally posted by nadal11 on ROS Answers with karma: 23 on 2014-09-15
Post score: 0
Original comments
Comment by ahendrix on 2014-10-14:
Your code looks pretty reasonable. Are you sure you're interpreting the IR readings from the Roomba correctly?
Comment by nadal11 on 2014-10-15:
Yes, I am sure, because the IR output displays readings from all sensors. However, the turtlebot only slows down if the IR reading from the far right side is less than 100. Any idea?
Comment by ahendrix on 2014-10-15:
Is each sensor reading published as a separate message? Is it possible that you're publishing a command for each sensor reading, and only the most recent command is being used?
Comment by nadal11 on 2014-10-17:
Yes, each sensor reading is published as a separate message. And yes, the reading from the far right is the last one published and only that reading is being used. That's why I wonder how I could make sure all the other sensor readings are used as well.
|
package.xml
<package>
<description> knex_ros </description>
<name> knex_ros </name>
<author>jfstepha</author>
<license>BSD</license>
<url>http://ros.org/wiki/kinex_ros</url>
<version> 0.0.0</version>
<maintainer email="[email protected]">andre</maintainer>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>tf</build_depend>
<build_depend>rospy</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>robot_state_publisher</build_depend>
<build_depend>differential_drive</build_depend>
<run_depend>tf</run_depend>
<run_depend>rospy</run_depend>
<run_depend>std_msgs</run_depend>
<run_depend>robot_state_publisher</run_depend>
<run_depend>differential_drive</run_depend>
</package>
CMakeLists.txt
cmake_minimum_required(VERSION 2.8.3)
project(knex_ros)
find_package(catkin REQUIRED COMPONENTS
rospy
std_msgs
tf
roscpp
robot_state_publisher
differential_drive
)
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES motor_control
# CATKIN_DEPENDS ros_control roscpp rospy std_msgs
# DEPENDS system_lib
)
include_directories(
${catkin_INCLUDE_DIRS}
)
catkin_install_python(PROGRAMS
scripts/knex_arduino_connector.py
scripts/knex_scratch_connector.py
scripts/range_filter.py
scripts/range_to_laser.py
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
After I catkin_make under [ws], I did . [ws]/devel/setup.bash
When I run rospack find, I can find knex_ros, but when I rosrun knex_ros whatever.py, I get
~/catkin_ws/src/knex_ros$ rosrun knex_ros knex_arduino_connector.py
[rosrun] Couldn't find executable named knex_arduino_connector.py below /home/andre/catkin_ws/src/knex_ros
[rosrun] Found the following, but they're either not files,
[rosrun] or not executable:
[rosrun] /home/andre/catkin_ws/src/knex_ros/scripts/knex_arduino_connector.py
I tried tab completion to list the available executables from the knex_ros package; it doesn't show anything. Other packages in the same workspace work fine.
One weird thing I noticed along with not being able to find the executable: I have to source devel/setup.bash every time I open a new terminal, and I didn't have to do this before.
update:
I ran catkin_make install and . install/setup.bash, and rosrun can find the executables.
Not sure why, with catkin_make alone, rosrun works for other packages containing Python scripts but not for knex_ros.
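A possible explanation, based on the "not executable" part of the rosrun message (a guess, not a confirmed diagnosis): the scripts lack the execute bit in the source tree, and the install step sets it, which would match the observed behavior. A self-contained demonstration using a temporary path (the filename mirrors the question; the path is hypothetical):

```shell
# rosrun ignores scripts without the execute bit; chmod +x makes them visible.
# Demonstrated on a temporary copy rather than the real workspace:
mkdir -p /tmp/knex_demo/scripts
printf '#!/usr/bin/env python\nprint("ok")\n' > /tmp/knex_demo/scripts/knex_arduino_connector.py
chmod +x /tmp/knex_demo/scripts/knex_arduino_connector.py
test -x /tmp/knex_demo/scripts/knex_arduino_connector.py && echo executable
```

The in-place equivalent would be `chmod +x ~/catkin_ws/src/knex_ros/scripts/*.py`, after which rosrun should list the scripts without needing the install space.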
Originally posted by mugenzebra on ROS Answers with karma: 51 on 2014-09-15
Post score: 3
|
On the Jenkins build farm, my downloads from bitbucket.org always yield the same hash of d41d8cd98f00b204e9800998ecf8427e (the MD5 of an empty file), causing a build/configure failure. I don't think HTTP will work any better because it redirects to an HTTPS Amazon cloud server.
http://jenkins.ros.org/job/ros-indigo-ueye_binarydeb_trusty_amd64/1/consoleText
http://jenkins.ros.org/job/ros-indigo-ueye_binarydeb_trusty_i386/1/consoleText
From my CMakeLists.txt:
file(DOWNLOAD
https://bitbucket.org/kmhallen/ueye/downloads/uEye_SDK_4_40_amd64.tar.gz
${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}/3rdparty/uEye_SDK_amd64.tar.gz
SHOW_PROGRESS
INACTIVITY_TIMEOUT 60
EXPECTED_MD5 5290609fb3906a3355a6350dd36b2c76
TLS_VERIFY on)
file(DOWNLOAD
https://bitbucket.org/kmhallen/ueye/downloads/uEye_SDK_4_40_i386.tar.gz
${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}/3rdparty/uEye_SDK_i386.tar.gz
SHOW_PROGRESS
INACTIVITY_TIMEOUT 60
EXPECTED_MD5 d9803f2db1604f5a0993c4b62d395a31
TLS_VERIFY on)
From the CMakeLists.txt of velodyne_driver:
catkin_download_test_data(
${PROJECT_NAME}_tests_class.pcap
http://download.ros.org/data/velodyne/class.pcap
DESTINATION ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}/tests
MD5 65808d25772101358a3719b451b3d015)
One solution is to host files on download.ros.org, like the velodyne_driver and costmap_2d packages do. How can I upload to this hosting service?
Update: Prerelease downloads and builds fine.
http://jenkins.ros.org/job/prerelease-indigo-ueye/1/ARCH_PARAM=amd64,UBUNTU_PARAM=trusty,label=prerelease/console
Originally posted by kmhallen on ROS Answers with karma: 1416 on 2014-09-15
Post score: 1
|
So I'm trying to migrate from Hydro to Indigo, and along with that I am trying to move over my custom simulator packages. I'm getting a lot of fatal errors when trying to run my model with Gazebo 2.2 on Indigo. But then I realized that the stock Clearpath Husky model that comes from the model database doesn't work either..
Here is the error when I click on the Clearpath model (after gazebo is already open):
[FATAL] [1410814234.648936211]: You must call ros::init() before creating the first NodeHandle
[FATAL] [1410814234.648982364]: BREAKPOINT HIT
file = /tmp/buildd/ros-indigo-roscpp-1.11.9-0trusty-20140904-2000/src/libros/node_handle.cpp
line=151
Not sure what to do, and I don't really want to resort to installing Hydro from source since I already updated to Ubuntu 14.04.. Anybody have any ideas on this?
Originally posted by l0g1x on ROS Answers with karma: 1526 on 2014-09-15
Post score: 0
|
So I have a urdf for my robot, with the tree root as base_link.
However, I am using robot_pose_ekf, which has a hard-coded transform for base_footprint. Assuming a simple square base_footprint, should I make it the root:
base_footprint
base_link ... rest of model
EDIT:
After adding a base_footprint link and joint to base_link:
robot name is: Thumperbot_Simplistic
---------- Successfully Parsed XML ---------------
root Link: base_footprint has 1 child(ren)
child(1): base_link
I still get the following error:
Node: /robot_pose_ekf
Time: 12:35:35.441279569 (2014-09-16)
Severity: Debug
Published Topics: /robot_pose_ekf/odom_combined, /rosout, /tf
Could not transform imu message from base_link to base_footprint. Imu will not be activated yet.
Location:
/tmp/buildd/ros-indigo-robot-pose-ekf-1.11.11-0trusty-20140805-0105/src/odom_estimation_node.cpp:OdomEstimationNode::imuCallback:234
----------------------------------------------------------------------------------------------------
Perhaps it is not receiving the URDF that I have in the launch file? Here is the launch file:
<launch>
<param name="robot_description" textfile="$(find thumperbot_description)/urdf/thumperbot_description.urdf" />
<node name="gps_conv" pkg="gps_common" type="utm_odometry_node">
<remap from="odom" to="vo" />
<remap from="fix" to="/gps/fix" />
<param name="rot_covariance" value="99999" />
<param name="frame_id" value="base_link" />
</node>
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_ekf">
<rosparam>
odom_used: false
imu_used: true
vo_used: true
debug: true
self_diagnose: true
</rosparam>
</node>
<include file="$(find piksi_driver)/launch/piksi_driver.launch" />
<include file="$(find razor_imu)/launch/razor-pub.launch"/>
</launch>
Here razor_imu and piksi_driver are drivers for the razor imu sensor and the piksi gps sensor. Both sensors output their data correctly.
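One thing worth checking in a setup like this (a sketch, not a confirmed diagnosis): loading robot_description only sets a parameter; nothing broadcasts the URDF's transforms to /tf unless a publisher node is running, so robot_pose_ekf would never see the base_link→base_footprint transform. Adding something like the following to the launch file makes the URDF's joints visible on /tf:

```xml
<!-- Publishes the URDF's joint transforms (base_footprint -> base_link, etc.) to /tf -->
<node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher"/>
<!-- Only needed if the URDF contains non-fixed joints -->
<node pkg="joint_state_publisher" type="joint_state_publisher" name="joint_state_publisher"/>
```

With fixed joints only, robot_state_publisher alone is enough, since fixed transforms require no joint states.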
Originally posted by jackcviers on ROS Answers with karma: 207 on 2014-09-15
Post score: 1
|
Hi all,
Unfortunately I am really struggling with implementing MoveIt! on my custom hardware.
Basically I am stuck in connecting to an Action Client.
I have been successful in using a driver package to control my motor drivers (RoboClaw).
link from @bcharrow
Unfortunately in MoveIt am always greeted with:
[INFO] [1410916361.912676781]: MoveitSimpleControllerManager: Waiting for /full_ctrl/joint_trajectory_action to come up
[ERROR] [1410916366.912904732]: MoveitSimpleControllerManager: Action client not connected: /full_ctrl/joint_trajectory_action
[ INFO] [1410916371.938914542]: MoveitSimpleControllerManager: Waiting for /gripper_ctrl/joint_trajectory_action to come up
[ INFO] [1410916376.939103684]: MoveitSimpleControllerManager: Waiting for /gripper_ctrl/joint_trajectory_action to come up
[ERROR] [1410916381.939338320]: MoveitSimpleControllerManager: Action client not connected: /gripper_ctrl/joint_trajectory_action
[ INFO] [1410916381.957750506]: Returned 0 controllers in list
[ INFO] [1410916381.963234975]: Trajectory execution is managing controllers
My Action Client is based on this:
link
Can anyone offer more of a step-by-step guide to connecting my robot to MoveIt? I haven't found a tutorial that covers this, e.g.:
link1
link2
Cheers,
Chris.
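For context, the action names in the log come from the MoveIt controllers.yaml, roughly of this shape (a sketch; the joint names are hypothetical). The "Action client not connected" errors mean no action server was running under exactly these namespaces and types:

```yaml
controller_list:
  - name: full_ctrl
    action_ns: joint_trajectory_action
    type: FollowJointTrajectory
    default: true
    joints: [joint_1, joint_2, joint_3]   # hypothetical joint names
  - name: gripper_ctrl
    action_ns: joint_trajectory_action
    type: FollowJointTrajectory
    joints: [gripper_joint]               # hypothetical
```

MoveIt then expects an action server at /full_ctrl/joint_trajectory_action and /gripper_ctrl/joint_trajectory_action; the driver node has to offer exactly those names and the matching action type.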
Originally posted by anonymous8676 on ROS Answers with karma: 327 on 2014-09-15
Post score: 3
|
Hello, I have been using a Velodyne 64E_S2 for a year.
I used two commands to get 3d point-clouds from the Lidar throughout ROS.
The two commands are (1) rosrun velodyne_driver velodyne_node _model:=64E_S2 _rpm:=0600 and (2) rosrun nodelet nodelet standalone velodyne_pointcloud/CloudNodelet transform_node _calibration:=db.yaml.
The file, db.yaml, is generated by using "rosrun velodyne_pointcloud gen_calibration.py db.xml" where db.xml file is provided with the Lidar. Actually, it worked well under ubuntu 12.04 with ROS-hydro.
The problem is here.
I upgraded Ubuntu to 14.04 and ROS to Indigo, and installed ros-indigo-velodyne. However, when I tried the same commands with the same db.yaml file, I got error messages like those below.
YAML Exception: yaml-cpp: error at line 0, column 0: bad conversion
and
[ERROR] [1410851740.164624564]: Unable to open calibration file: db.yaml
I thought that I had to generate db.yaml again, but that still does not work. In addition, I wanted to check whether the problem lies with the generated db.yaml file or with the installed velodyne driver, so I tried roslaunch velodyne_pointcloud 32e_points.launch, because it uses a yaml file that ships with ROS. Interestingly, it works well, loading its calibration file without error.
Therefore, I guess that I have to generate the yaml file differently, not following the usual way.
However, one strange thing is that the generated file and the provided file follow the same format.
I think it is better to ask here about this problem.
Originally posted by teawonHan on ROS Answers with karma: 1 on 2014-09-16
Post score: 0
|
Hi All,
I am having a very frustrating problem. I have a package pac_industrial_robot_driver that uses messages declared in another package ros_opto22. However, I cannot get the dependency recognised by the pac_industrial_robot_driver CMakeLists.txt file.
The error message:
[ 31%] In file included from /home/controller/catkin_ws/src/pac_industrial_robot_driver/lib/PacIndustrialDriver.cpp:8:0:
/home/controller/catkin_ws/src/pac_industrial_robot_driver/include/PacIndustrialDriver.hpp:32:38: fatal error: ros_opto22/valve_command.h: No such file or directory
Clearly it is not finding the include file.
# Create executables and add dependencies.
foreach(p ${ALL_EXECS})
add_executable(${p} ${${p}_SRC})
add_dependencies(${p} ${PROJECT_NAME}_generate_messages_cpp ${catkin_EXPORTED_TARGETS} ${${PROJECT_NAME}_EXPORTED_TARGETS} ros_opto22_EXPORTED_TARGETS ros_opto22_gencpp ros_opto22_generate_messages_cpp)
target_link_libraries(${p} ${ALL_LIBS} ${catkin_LIBRARIES} industrial_robot_client simple_message industrial_utils)
endforeach(p)
Note the inclusion of ros_opto22_gencpp and ros_opto22_generate_messages_cpp.
I also have the following earlier on in the CMakeLists.txt file:
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES pac_industrial_robot_driver
CATKIN_DEPENDS ros_opto22
# DEPENDS system_lib
)
and
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(include ${catkin_INCLUDE_DIRS})
as well as having ros_opto22 listed under the find_package call.
I am very frustrated with this, not least because there does not seem to be a single definitive guide to solving this problem. What is the best way to go about it? I can run catkin_make twice, but that only masks the problem. I want catkin_make to run properly the first time, every time, even after deleting everything in the build and devel directories under my catkin workspace.
Kind Regards
Bart
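For reference, a commonly used pattern for depending on another catkin package's messages looks like the sketch below (target and file names are hypothetical). Note that in the add_dependencies call quoted above, ros_opto22_EXPORTED_TARGETS appears without ${...}, so it is passed as a literal string rather than expanded; ${catkin_EXPORTED_TARGETS} already covers the message-generation targets of every package listed in find_package:

```cmake
find_package(catkin REQUIRED COMPONENTS roscpp ros_opto22)

include_directories(include ${catkin_INCLUDE_DIRS})

# Hypothetical executable name and source file
add_executable(pac_driver_node src/pac_driver_node.cpp)
# Ensures ros_opto22's messages (e.g. ros_opto22/valve_command.h) are
# generated before this target compiles
add_dependencies(pac_driver_node ${catkin_EXPORTED_TARGETS})
target_link_libraries(pac_driver_node ${catkin_LIBRARIES})
```

With this in place a clean build order is guaranteed, so catkin_make should succeed on the first run even after wiping build/ and devel/.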
Originally posted by bjem85 on ROS Answers with karma: 163 on 2014-09-16
Post score: 1
Original comments
Comment by fherrero on 2014-09-16:
I think what you need is generate_messages(DEPENDENCIES ros_opto22)
Comment by paulbovbel on 2014-09-16:
I believe generate_messages is not for using message files from another package.
Comment by fherrero on 2014-09-17:
True, my mistake. I meant add_dependencies(${PROJECT_NAME} <msg_package_name>_cpp). We use this in order to ensure the necessary messages are compiled before the current package.
|
Hello,
during the past months I have been working on getting results from a stereo-cam system
using the ROS package viso2_ros in a moving vehicle. So far so good; the egomotion estimation from visual
odometry really works well, as can be seen here:
https://docs.google.com/document/d/1lsqHfEHf7g4K1UC0KOJIlh22V57SYQLA5EkSOBJr0fg/edit
I now have wheel_odometry, visual_odometry, IMU and GPS-data in ASCII-files ready to be processed and filtered (also have a reference trajectory giving the "true" position). Here is the data:
https://drive.google.com/file/d/0B1b1mlDmiL4wQm9DdVAzX0lWYmc/edit?usp=sharing
I can filter all the data with the package "robot_pose_ekf".
The disadvantage is that the package uses the orientation from the IMU rather than
the angular rates and accelerations. This orientation drifts and is not accurate.
I was wondering if you could give me tips on how to build a filter which fuses all
the described data. If possible, the design of the filter should not be too complicated,
since I do not have much knowledge about EKFs. Any good literature recommendations are also appreciated.
Hector
Originally posted by mister_kay on ROS Answers with karma: 238 on 2014-09-16
Post score: 0
Original comments
Comment by karry3775 on 2020-04-06:
Hi, I am also using viso2_ros inside Gazebo to get stereo visual odometry. I am facing issues with the translation estimates: even when my robot is standing still, the package reports some forward motion. Did you ever face these issues? Thanks
|
Hi All,
I am following through the Moveit! Industrial Robot Tutorial and have got stuck at Part 3.1.2. I have copied the template launch file and minimally modified it for my own ends, but I have some questions:
There is a part of the <robot>_moveit_config moveit_planning_execution.launch that deals with the 'real' robot interface. Is this a node I need to write myself? I already have a means of publishing the joint angles of the robot to a sensor_msgs/JointState topic. My robot uses hydraulic rams so the joint angles are converted to ram lengths which are sent to a PID controller that takes in a sensor_msgs/JointState error and converts it to an effort that is then relayed to the robot valves. I'm a little lost about what sort of node I need to write to accept commands from the move_group node. As a starting point, I would like to have a planned trajectory as an input.
Here is the real robot-specific part of the launch file:
The other problem I have is that the inverse kinematics don't seem to work properly. While I can drag the arm around, I cannot plan a path from one point to another. RViz says there is no planning library loaded and on the console I get the following message:
[ INFO] [1410858518.526968209]: Constructing new MoveGroup connection for group 'xyz_control'
[ERROR] [1410858548.853118450]: Unable to connect to move_group action server within allotted time (2)
Here is my RViz screen as I see it. Note that the tutorial implies I should be able to drag the robot around and plan paths with it; however, planning does not work.
And here is my ROS graph from rqt_graph:
The lack of a planning library could be an issue with changing from the 'fake' controller manager to the 'simple' controller manager. Is this an issue? Here's a diff of controllers.yaml between when RViz was able to find a planning library and when it wasn't.
@@ -1,9 +1,9 @@
<launch>
<!-- Set the param that trajectory_execution_manager needs to find the controller plugin -->
- <param name="moveit_controller_manager" value="moveit_fake_controller_manager/MoveItFakeControllerManager"/>
+ <param name="moveit_controller_manager" value="moveit_simple_controller_manager/MoveitSimpleControllerManager"/>
<!-- The rest of the params are specific to this plugin -->
- <rosparam file="$(find hyd_sys_complete_sldasm_moveit_config)/config/fake_controllers.yaml"/>
+ <rosparam file="$(find hyd_sys_complete_sldasm_moveit_config)/config/controllers.yaml"/>
</launch>
Note there is also an IKFast inverse kinematics plugin I have generated that has to fit into the mix somewhere. What is the best way to proceed? I want to be able to plan paths of the robot using Section 3 of the tutorial. There are a few pieces to this puzzle and I haven't quite worked out how they all go together at this stage.
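For what it's worth, move_group typically commands hardware through a FollowJointTrajectory action server whose execute callback walks goal.trajectory.points and hands each joint target to the low-level controller. The joint-angle-to-ram-length conversion described above often reduces to the law of cosines; a hedged sketch (the anchor distances a and b are made-up placeholders, and the real ram geometry will differ):

```python
import math

def ram_length(theta, a=0.5, b=0.4):
    """Law-of-cosines conversion from joint angle to hydraulic ram length.

    a, b are the distances from the joint pivot to the two ram anchor
    points (placeholder values); theta is the angle between them [rad].
    A trajectory-execution callback could apply this to every joint
    target in goal.trajectory.points before handing set-points to the
    PID loop that drives the valves.
    """
    return math.sqrt(a * a + b * b - 2.0 * a * b * math.cos(theta))
```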
Kind Regards
Bart
Originally posted by bjem85 on ROS Answers with karma: 163 on 2014-09-16
Post score: 0
|
Hi all,
I want to use transparent_object to estimate the pose of my object. And the sample program needs camera.yml as input.
%YAML:1.0
camera:
  K: !!opencv-matrix
    rows: 3
    cols: 3
    dt: d
    data: [ 525., 0., 3.2050000000000000e+02, 0., 525.,
        2.4050000000000000e+02, 0., 0., 1. ]
  D: !!opencv-matrix
    rows: 5
    cols: 1
    dt: f
    data: [ 0., 0., 0., 0., 0. ]
  width: 640
  height: 480
pose:
  rvec: !!opencv-matrix
    rows: 3
    cols: 1
    dt: d
    data: [ 0., 0., 0. ]
  tvec: !!opencv-matrix
    rows: 3
    cols: 1
    dt: d
    data: [ 0., 0., 0. ]
I've found that this file is for camera calibration,
http://docs.opencv.org/trunk/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
http://maztories.blogspot.tw/2013/07/camera-calibration-with-opencv.html
I don't know the relationship between this file and the PR2. If I want to use the PR2's Kinect to get images, do I need to adjust the content of this file? How do I adjust it?
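For reference, K holds the pinhole intrinsics (fx, fy, cx, cy); the 525 / 320.5 / 240.5 values in the sample file match the commonly used default calibration of a Kinect-class RGB-D camera at 640x480, and ROS camera drivers publish the device's actual values on its camera_info topic, which is where the numbers for a particular robot's camera would come from. A small sketch of what K does when projecting a camera-frame 3D point to pixels:

```python
def project(K, X, Y, Z):
    """Project a 3D camera-frame point to pixel coordinates with the
    pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    fx, cx = K[0][0], K[0][2]
    fy, cy = K[1][1], K[1][2]
    return fx * X / Z + cx, fy * Y / Z + cy

# Intrinsics copied from the camera.yml above
K = [[525.0, 0.0, 320.5],
     [0.0, 525.0, 240.5],
     [0.0, 0.0, 1.0]]
```

A point on the optical axis lands at the principal point (cx, cy), which is why those values sit near the image center (320, 240) for a 640x480 sensor.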
Thanks in advance.
Originally posted by Po-Jen Lai on ROS Answers with karma: 1371 on 2014-09-16
Post score: 0
|
After running
$ rosrun rqt_graph rqt_graph
i get the following error msg:
PluginHandlerDirect._restore_settings() plugin "rqt_graph/RosGraph#0" raised an exception:
Traceback (most recent call last):
File "/opt/ros/indigo/lib/python2.7/dist-packages/qt_gui/plugin_handler_direct.py", line 116, in _restore_settings
self._plugin.restore_settings(plugin_settings_plugin, instance_settings_plugin)
File "/opt/ros/indigo/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 202, in restore_settings
self._refresh_rosgraph()
File "/opt/ros/indigo/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 226, in _refresh_rosgraph
self._update_graph_view(self._generate_dotcode())
File "/opt/ros/indigo/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 259, in _update_graph_view
self._redraw_graph_view()
File "/opt/ros/indigo/lib/python2.7/dist-packages/rqt_graph/ros_graph.py", line 292, in _redraw_graph_view
same_label_siblings=True)
File "/opt/ros/indigo/lib/python2.7/dist-packages/qt_dotgraph/dot_to_qt.py", line 254, in dotcode_to_qt_items
subgraph_nodeitem = self.getNodeItemForSubgraph(subgraph, highlight_level)
File "/opt/ros/indigo/lib/python2.7/dist-packages/qt_dotgraph/dot_to_qt.py", line 83, in getNodeItemForSubgraph
bb = subgraph.attr['bb'].strip('"').split(',')
KeyError: 'bb'
Originally posted by Tomaszz on ROS Answers with karma: 33 on 2014-09-16
Post score: 0
Original comments
Comment by Dirk Thomas on 2014-09-18:
What version of these two packages do you use: rqt_graph and qt_dotgraph?
Also please provide more information since you are likely doing more than what you described. E.g. which packages did you try to visualize in the graph?
Comment by Tomaszz on 2014-09-19:
Well, I'm not sure what you mean by the version of rqt_graph. I use ROS Indigo. Problems with rqt_graph started when using VREP together with ROS. The first time it crashed was when I played the simulation in VREP. Officially VREP doesn't support ROS Indigo. Now every rqt_graph run fails.
Comment by Dirk Thomas on 2014-09-19:
With "version" I mean the version of the Debian package you have installed. Please run "dpkg -l | grep rqt-graph" and "dpkg -l | grep qt-dotgraph" and post the result. My second question was: which packages did you try to visualize in the graph?
Comment by Tomaszz on 2014-09-20:
1/ ros-indigo-rqt-graph 0.3.9-0trusty-20140905-0110-+0000 amd64
ros-indigo-qt-dotgraph 0.2.26-0trusty-20140819-0222-+0000 amd64
2/ i tried to visualize vrep tutorials packages "ros_bubble_rob", "vrep_joy" etc
actually now it works fine :P I have restarted my laptop
Comment by Robopija on 2014-09-26:
I get exactly the same error message. I have ros-indigo-rqt-graph 0.3.9-0trusty-20140905-0309-+0000 i386 and ros-indigo-qt-dotgraph 0.2.26-0trusty-20140819-0326-+0000 i386, i don't use VREP. This happens even if no ros nodes are running, just a roscore.
Comment by VinceDietrich on 2015-02-19:
I have the same problem. Restarting Ubuntu helps at first, but the error came back. Installing ros-indigo-qt-gui-app and ros-indigo-qt-gui-core did not help for me so far.
ros-indigo-rqt-graph 0.3.10-0trusty-20141230-0525-+0000 amd64
ros-indigo-qt-dotgraph 0.2.26-0trusty-20141230-0049-+0000 amd64
Comment by Dirk Thomas on 2015-02-19:
If anybody could come up with a minimal example to reproduce the issue it might be feasible to fix the problem.
Comment by Will Chamberlain on 2015-02-26:
Hello: I get the same problem. My versions are
ii ros-indigo-rqt-graph 0.3.10-0trusty-20141230-0525-+0000 amd64
ii ros-indigo-qt-dotgraph 0.2.26-0trusty-20141230-0049-+0000 amd64
and I am trying to use rqt_graph as on A Gentle Introduction to ROS (J.M. O'Kane) p24
Comment by Will Chamberlain on 2015-02-26:
(cont): rosnode list
lists
/rosout
/teleop_turtle
/turtlesim
Workaround/clue: I just found that unticking 'Hide Debug' clears the error and displays the graph. Ticking 'Hide Debug' gives the error message again.
Comment by gocarlos on 2015-03-11:
Happens here too...
Nothing open and still crashing; only two very small packages in the workspace.
Comment by silverArgon on 2015-04-15:
Hi, I'm having the same problem. I tried installing both ros-indigo-qt-gui-app and ros-indigo-qt-gui-core and restarting, but with no result. Any guess as to what it could be?
Comment by Dirk Thomas on 2015-04-23:
As before: if anybody could come up with a minimal example (using ROS packages) to reproduce the issue it might be feasible to find the source of the problem.
|
Hello,
So, I have been trying for a while to compile a catkin package written in C++ that uses the sound_play package API, but it never finds the header <sound_play/sound_play.h>, even though the sound_play package is installed and works normally through rosrun. I have tried running rosmake in the sound_play package, and can see that the header is indeed in the right place, but catkin still can't find it. Maybe I am forgetting to add an include somewhere?
This is ROS groovy I am using.
Thanks for any suggestions.
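For a catkin-ized dependency, headers from another package normally become visible by declaring it as a catkin component; a hedged sketch of the usual CMakeLists.txt entries (my_node is a placeholder target name, and note that in Groovy sound_play may still be a rosbuild package, which would explain why catkin cannot resolve it this way):

```cmake
# Sketch only: declare sound_play as a component so its include dirs
# and libraries land in ${catkin_INCLUDE_DIRS} / ${catkin_LIBRARIES}.
find_package(catkin REQUIRED COMPONENTS roscpp sound_play)
include_directories(${catkin_INCLUDE_DIRS})
add_executable(my_node src/my_node.cpp)
target_link_libraries(my_node ${catkin_LIBRARIES})
```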
Originally posted by hiro64 on ROS Answers with karma: 58 on 2014-09-16
Post score: 0
Original comments
Comment by hiro64 on 2014-09-18:
If anyone gets into this situation, I have solved it by taking the headers (sound_play.h and SoundRequest.h) directly from the sound_play package and using them as part of the src of my package. This way, one is able to use their functions and the SoundClient class.
Comment by joq on 2014-09-18:
That is probably not a solution we should recommend.
|
I'm trying to follow this tutorial - http://wiki.ros.org/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages
to stream images to a node.
After adding the node see_image.cpp, I added an executable named image_viewer.
To be clearer, my CMakeLists.txt looks like this -
cmake_minimum_required(VERSION 2.8.3)
project(visual_odometry)
find_package(catkin REQUIRED COMPONENTS
ardrone_autonomy
roscpp
rospy
sensor_msgs
cv_bridge
std_msgs
image_transport
)
include_directories(
${catkin_INCLUDE_DIRS}
)
add_executable(subscriber_test src/subscriber.cpp)
target_link_libraries(subscriber_test
${catkin_LIBRARIES}
)
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(image_viewer src/see_image.cpp)
target_link_libraries(image_viewer ${OpenCV_LIBRARIES})
And my package.xml looks like this:
<package>
<name>visual_odometry</name>
<version>0.0.0</version>
<description>The visual_odometry package</description>
<maintainer email="[email protected]">voladoddi</maintainer>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>ardrone_autonomy</build_depend>
<build_depend>roscpp</build_depend>
<build_depend>rospy</build_depend>
<!--adding build_depend and run_depend according to page http://wiki.ros.org/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages Example 1.4 - ROS node-->
<build_depend>sensor_msgs</build_depend>
<build_depend>cv_bridge</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>image_transport</build_depend>
<run_depend>ardrone_autonomy</run_depend>
<run_depend>roscpp</run_depend>
<run_depend>rospy</run_depend>
<!--run dependencies for the 4 build dependencies added above-->
<run_depend>sensor_msgs</run_depend>
<run_depend>cv_bridge</run_depend>
<run_depend>std_msgs</run_depend>
<run_depend>image_transport</run_depend>
<!-- The export tag contains other, unspecified, tags -->
<export>
<!-- You can specify that this package is a metapackage here: -->
<!-- <metapackage/> -->
<!-- Other tools can request additional information be placed here -->
</export>
</package>
After all this, I try to build using catkin_make and I'm getting the following errors:
Linking CXX executable /home/voladoddi/catkin_ws/devel/lib/visual_odometry/image_viewer
[ 5%] Performing install step for 'ardronelib'
make[3]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
[ 6%] Completed 'ardronelib'
[ 8%] Built target ardronelib
[ 9%] Built target subscriber_test
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `main':
see_image.cpp:(.text+0x55): undefined reference to `ros::init(int&, char**, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int)'
see_image.cpp:(.text+0x81): undefined reference to `ros::spin()'
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `image_transport::TransportHints::TransportHints(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ros::TransportHints const&, ros::NodeHandle const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
see_image.cpp:(.text._ZN15image_transport14TransportHintsC2ERKSsRKN3ros14TransportHintsERKNS3_10NodeHandleES2_[_ZN15image_transport14TransportHintsC5ERKSsRKN3ros14TransportHintsERKNS3_10NodeHandleES2_]+0x53): undefined reference to `ros::NodeHandle::NodeHandle(ros::NodeHandle const&)'
see_image.cpp:(.text._ZN15image_transport14TransportHintsC2ERKSsRKN3ros14TransportHintsERKNS3_10NodeHandleES2_[_ZN15image_transport14TransportHintsC5ERKSsRKN3ros14TransportHintsERKNS3_10NodeHandleES2_]+0x84): undefined reference to `ros::NodeHandle::~NodeHandle()'
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `image_transport::TransportHints::~TransportHints()':
see_image.cpp:(.text._ZN15image_transport14TransportHintsD2Ev[_ZN15image_transport14TransportHintsD5Ev]+0x19): undefined reference to `ros::NodeHandle::~NodeHandle()'
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `ImageConverter::ImageConverter()':
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x47): undefined reference to `ros::NodeHandle::NodeHandle(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::map<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&)'
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x82): undefined reference to `image_transport::ImageTransport::ImageTransport(ros::NodeHandle const&)'
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x11a): undefined reference to `ros::NodeHandle::NodeHandle(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::map<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&)'
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x291): undefined reference to `ros::NodeHandle::~NodeHandle()'
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x31d): undefined reference to `image_transport::ImageTransport::advertise(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, bool)'
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x42e): undefined reference to `ros::NodeHandle::~NodeHandle()'
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x505): undefined reference to `image_transport::ImageTransport::~ImageTransport()'
see_image.cpp:(.text._ZN14ImageConverterC2Ev[_ZN14ImageConverterC5Ev]+0x519): undefined reference to `ros::NodeHandle::~NodeHandle()'
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `ImageConverter::~ImageConverter()':
see_image.cpp:(.text._ZN14ImageConverterD2Ev[_ZN14ImageConverterD5Ev]+0x49): undefined reference to `image_transport::ImageTransport::~ImageTransport()'
see_image.cpp:(.text._ZN14ImageConverterD2Ev[_ZN14ImageConverterD5Ev]+0x55): undefined reference to `ros::NodeHandle::~NodeHandle()'
see_image.cpp:(.text._ZN14ImageConverterD2Ev[_ZN14ImageConverterD5Ev]+0x9f): undefined reference to `image_transport::ImageTransport::~ImageTransport()'
see_image.cpp:(.text._ZN14ImageConverterD2Ev[_ZN14ImageConverterD5Ev]+0xb0): undefined reference to `ros::NodeHandle::~NodeHandle()'
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)':
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x3f): undefined reference to `cv_bridge::toCvCopy(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x18c): undefined reference to `cv_bridge::CvImage::toImageMsg() const'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x1bf): undefined reference to `image_transport::Publisher::publish(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&) const'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x259): undefined reference to `ros::console::g_initialized'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x269): undefined reference to `ros::console::initialize()'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x2b2): undefined reference to `ros::console::initializeLogLocation(ros::console::LogLocation*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ros::console::levels::Level)'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x2ed): undefined reference to `ros::console::setLogLocationLevel(ros::console::LogLocation*, ros::console::levels::Level)'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x2f7): undefined reference to `ros::console::checkLogLocationEnabled(ros::console::LogLocation*)'
see_image.cpp:(.text._ZN14ImageConverter7imageCbERKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEE[ImageConverter::imageCb(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)]+0x35b): undefined reference to `ros::console::print(ros::console::FilterBase*, void*, ros::console::levels::Level, char const*, int, char const*, char const*, ...)'
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `void ros::NodeHandle::param<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> >&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const':
see_image.cpp:(.text._ZNK3ros10NodeHandle5paramISsEEvRKSsRT_RKS4_[void ros::NodeHandle::param<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> >&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const]+0x27): undefined reference to `ros::NodeHandle::hasParam(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const'
see_image.cpp:(.text._ZNK3ros10NodeHandle5paramISsEEvRKSsRT_RKS4_[void ros::NodeHandle::param<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> >&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const]+0x42): undefined reference to `ros::NodeHandle::getParam(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const'
CMakeFiles/image_viewer.dir/src/see_image.cpp.o: In function `image_transport::Subscriber image_transport::ImageTransport::subscribe<ImageConverter>(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, void (ImageConverter::*)(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&), ImageConverter*, image_transport::TransportHints const&)':
see_image.cpp:(.text._ZN15image_transport14ImageTransport9subscribeI14ImageConverterEENS_10SubscriberERKSsjMT_FvRKN5boost10shared_ptrIKN11sensor_msgs6Image_ISaIvEEEEEEPS6_RKNS_14TransportHintsE[image_transport::Subscriber image_transport::ImageTransport::subscribe<ImageConverter>(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, void (ImageConverter::*)(boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&), ImageConverter*, image_transport::TransportHints const&)]+0xaa): undefined reference to `image_transport::ImageTransport::subscribe(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned int, boost::function<void (boost::shared_ptr<sensor_msgs::Image_<std::allocator<void> > const> const&)> const&, boost::shared_ptr<void> const&, image_transport::TransportHints const&)'
collect2: ld returned 1 exit status
make[2]: *** [/home/voladoddi/catkin_ws/devel/lib/visual_odometry/image_viewer] Error 1
make[1]: *** [visual_odometry/CMakeFiles/image_viewer.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 39%] Built target ardrone_autonomy_generate_messages_py
make: *** [all] Error 2
Invoking "make" failed
I have a feeling this has to do with incorrect declarations / missing declarations in package.xml and CMakeLists.txt.
NOTE: the errors in the first version of the code were because line numbers from the tutorial were left in my code.
EDIT: the above issue is solved.
CPP file is as below:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
static const std::string OPENCV_WINDOW = "Image window";
class ImageConverter
{
ros::NodeHandle nh_;
image_transport::ImageTransport it_;
image_transport::Subscriber image_sub_;
image_transport::Publisher image_pub_;
public:
ImageConverter()
: it_(nh_)
{
// Subscribe to input video feed and publish output video feed
image_sub_ = it_.subscribe("/ardrone/image_raw", 1,
&ImageConverter::imageCb, this);
image_pub_ = it_.advertise("/image_converter/output_video", 1);
cv::namedWindow(OPENCV_WINDOW);
}
~ImageConverter()
{
cv::destroyWindow(OPENCV_WINDOW);
}
void imageCb(const sensor_msgs::ImageConstPtr& msg)
{
cv_bridge::CvImagePtr cv_ptr;
try
{
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
}
catch (cv_bridge::Exception& e)
{
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}
// Draw an example circle on the video stream
if (cv_ptr->image.rows > 60 && cv_ptr->image.cols > 60)
cv::circle(cv_ptr->image, cv::Point(50, 50), 10, CV_RGB(255,0,0));
// Update GUI Window
cv::imshow(OPENCV_WINDOW, cv_ptr->image);
cv::waitKey(3);
// Output modified video stream
image_pub_.publish(cv_ptr->toImageMsg());
}
};
int main(int argc, char** argv)
{
ros::init(argc, argv, "image_converter");
ImageConverter ic;
ros::spin();
return 0;
}
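For reference, the undefined references above are all to roscpp, image_transport, and cv_bridge symbols, which suggests image_viewer is being linked only against OpenCV; a hedged one-line fix in the CMakeLists.txt shown earlier would be:

```cmake
add_executable(image_viewer src/see_image.cpp)
# Link the ROS/catkin component libraries in addition to OpenCV
target_link_libraries(image_viewer ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
```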
Originally posted by voladoddi on ROS Answers with karma: 87 on 2014-09-16
Post score: 1
Original comments
Comment by bvbdort on 2014-09-16:
can you post your .cpp file
Comment by voladoddi on 2014-09-16:
I needed to add ${catkin_LIBRARIES} to target_link_libraries for the new executable. That worked, thank you. However, I'm still not able to see the images.
Do have a look at the .cpp file.
|
Hi guys,
This is my first post on this forum!
I have been searching everywhere on the web, and especially on this forum, without finding any answer. Please forgive me if the answer was there and I missed it!
I am trying to display an image from my Xtion Live, connected to my BeagleBone Black, in an rviz instance running on a virtual machine.
I am running ROS Indigo on Ubuntu 14.04 on both the virtual machine and the BBB. I followed the ROS installation tutorials for Ubuntu and Ubuntu ARM.
roscore is running on the virtual machine
rviz is running on the virtual machine as well
I set the ROS_MASTER_URI variable on my BBB to the IP of the virtual machine
When I run "roslaunch openni2_launch openni2.launch" on the BBB I get the following log:
Device "1d27/0601@1/3" with serial number "1208250036" connected
camera/driver-2 process has finished cleanly
The log file states:
Bond broken, exiting
I am able to open a depth stream using mjpeg_server and "rosrun openni2_camera openni2_camera_node", but even though the IR sensor lights up, I get no image either in rviz or in mjpeg_server.
Can anyone help me?
The full output of "roslaunch openni2_launch openni2.launch --screen" can be found here: http://pastebin.com/Q38VpuVq
Thanks
Regards,
Pouc
Originally posted by pouc on ROS Answers with karma: 16 on 2014-09-16
Post score: 0
Original comments
Comment by ahendrix on 2014-09-16:
It looks like the openni2 driver is crashing. Can you edit your question to include the full output from roslaunch openni2_launch openni2.launch --screen please?
Comment by pouc on 2014-09-16:
Hi ahendrix. Thanks for your answer. I have added the requested command output in the question as a pastebin link.
Comment by ahendrix on 2014-09-17:
I haven't seen anything like that before. It looks like some part of the nodelet infrastructure is crashing, but I'm not really sure how to go about troubleshooting it.
|
When I call catkin_make like this
catkin_make -DPYTHON_EXECUTABLE=/usr/bin/python2 -DPYTHON_INCLUDE_DIR=/usr/include/python2.7 -DPYTHON_LIBRARY=/usr/lib/libpython2.7.so
I see this in the output
-- Using PYTHON_EXECUTABLE: /usr/bin/python2
-- Using Python nosetests: /usr/bin/nosetests-3.4
-- Using empy: /usr/lib/python3.4/site-packages/em.py
Naturally my build process fails, because the Python 2 executable cannot run these Python 3 components. How do I force catkin to use the Python 2.7 versions?
Originally posted by clauniel on ROS Answers with karma: 23 on 2014-09-16
Post score: 0
|
Hi,
I'm working on a path planner that incorporates dynamic obstacle avoidance. I was wondering if there is a visualizer or any tool available for developing path planners?
Some of the requirements would be:
To visualize a 2D (or even 3D) space
Include static and dynamic obstacles programmatically
Visualize robot position and goal
Visualize the progress of the path planner as it searches the space (whichever path planner is used, eg: A*, RRT etc.)
Something similar to this: http://qiao.github.io/PathFinding.js/visual
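Absent a ready-made tool, a throwaway script is often enough to watch a planner search a 2D grid; a minimal sketch (the grid encoding and symbols are made up: 0 = free, 1 = obstacle, 'o' = explored cell, '*' = final path), here with A*, though any planner that exposes its visited set could be rendered the same way:

```python
from heapq import heappush, heappop

def astar(grid, start, goal):
    """A* on a 4-connected grid; returns (path, visited) so the search
    progress can be rendered as well as the final path."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, None)]
    came, visited = {}, []
    while open_set:
        _, g, cur, parent = heappop(open_set)
        if cur in came:
            continue                     # already expanded with lower cost
        came[cur] = parent
        visited.append(cur)
        if cur == goal:                  # reconstruct path back to start
            path = []
            while cur:
                path.append(cur)
                cur = came[cur]
            return path[::-1], visited
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), cur))
    return None, visited

def render(grid, path, visited):
    """ASCII rendering: obstacles '#', explored 'o', path '*'."""
    chars = [['#' if cell else '.' for cell in row] for row in grid]
    for r, c in visited:
        chars[r][c] = 'o'
    for r, c in path:
        chars[r][c] = '*'
    return '\n'.join(''.join(row) for row in chars)
```

Calling render after every expansion (instead of once at the end) gives a frame-by-frame animation of the search, which covers the "visualize the progress" requirement for static obstacles; dynamic obstacles would just mutate the grid between replans.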
Thank you.
Originally posted by 2ROS0 on ROS Answers with karma: 1133 on 2014-09-16
Post score: 0
|
Hi, I had some doubts about how catkin_make works, and was hoping someone could explain it to me. I have gone through all the basic ROS tutorials and have the beginner_tutorials package that they use. I am able to build the package and run it using rosrun. This works with my devel/setup.bash sourced.
After this point I was trying to use catkin_make install to make the installation directory. I used the cmake flags to specify an install directory inside my catkin workspace and everything ran correctly. After this my workspace had the folders src, devel, build, and install. However, upon further inspection I noticed that the install folder did not contain any binary files, only the headers. I also noticed that if I use the setup.bash in the install folder none of my packages can be found; however, all my packages show up when I source /devel/setup.bash.
So, I was wondering why this is, and how it works. Why does install contain no binaries? Is it supposed to be this way? Are they just referenced by devel? Why doesn't my setup.bash in install work? If someone could provide some insight it would be greatly appreciated.
Originally posted by pachuc on ROS Answers with karma: 13 on 2014-09-16
Post score: 0
|
Hi guys.
since I put the inertia value into the base link, I have been getting the following error all the time:
The root link base_link has an inertia
specified in the URDF, but KDL does
not support a root link with an
inertia. As a workaround, you can add
an extra dummy link to your URDF.
Now, I would create another link as suggested by the message, BUT in that case I must define a joint.
Since the base_link can move with 6 DOF, and the "floating" joint type has been deprecated and is not usable, I don't know which constraints I should use for the joint.
Any idea on how to get rid of the message?
Originally posted by Andromeda on ROS Answers with karma: 893 on 2014-09-16
Post score: 2
Original comments
Comment by filipposanfilippo on 2016-04-19:
I am having the same issue. Could you please post or better explain your solution?
Comment by Andromeda on 2016-04-19:
Create a frame 'odom' and a frame 'base_link' or whatever you like. Then join them together with a fixed joint. The inertia must be put on the child frame, in this case 'base_link', otherwise you get the error.
Comment by filipposanfilippo on 2016-04-19:
This is the message that I get when running roslaunch rrbot_control rrbot_control.launch:
The root link link1 has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF.
Any ideas?
Comment by Andromeda on 2016-04-19:
After link1 create a dummy frame which has mass and inertia properties. You fix them together. You move the inertia from link1 to link_dummy
Comment by filipposanfilippo on 2016-04-20:
Thank you, Andromeda! I have done the suggested modification. However, now my link1 has got a gray color (the one set is orange). How can I fix this? Also in the robot tree, linl1 is not appearing (it only shows link_dummy, link2, ...). Is this normal?
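The workaround discussed in these comments can be sketched in URDF like this (link and joint names are illustrative; the point is that the link carrying the inertia is no longer the root):

```xml
<!-- Massless dummy root plus a fixed joint; KDL then accepts the model
     because the root link itself has no inertia. -->
<link name="dummy_root"/>
<joint name="dummy_to_base" type="fixed">
  <parent link="dummy_root"/>
  <child link="base_link"/>
</joint>
```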
|
I am trying to run slam_gmapping on this bag file I collected with my robot and I am getting ridiculously bad results -- so ridiculously bad that my first inclination is to suspect that I am missing something fundamental in my setup.
Here is my setup:
I have a SICK LMS100 mounted upside down, forward (by 0.267 m) and to the right (by 0.256 m) of the center of my robot. We run custom software on our robot, so I have written a translator that listens to the telemetry published by the robot and publishes TF frames according to my best understanding of how one should do that -- basically, I send transforms with a frame ID of "odom", a child ID of "base_link", and with an x, y, and rotation extracted from the x, y, and heading information (relative to the location at which I booted the robot).
Our LMS100 is configured to report scans over a 190 degree range (rather than the 270 degree range of which it is capable). I had to modify the LMS1xx driver to account for that. (Unfortunately, I modified the older LMS1xx driver, rather than the newer lms1xx driver. If I learn that the newer driver also needs this patch, I will submit it.)
Anyway, if somebody with more expertise than I in gmapping (which is pretty much everybody on this list) could take a look at my bag file and suggest what I should do to get better results, I would appreciate any tips you could give me.
I have run rviz on the bag file with the fixed frame set to /odom and the laser scan decay time set to infinite, and the map from laser data & odometry alone looks ok -- there is certainly some drift in the odometry, but not much. And that result looks significantly better than the map produced by slam_gmapping.
I wonder if the fact that the SICK is mounted upside down is causing a problem -- I think I have the static TF set up correctly to account for that:
<node pkg="tf" type="static_transform_publisher" name="sick2bot"
      args="0.267 -0.256 0 0 0 3.14159 base_link laser 100" />
I wonder if the fact that the SICK is mounted right of center is causing a problem.
I wonder if I should collect the data using the newer SICK driver. (Unfortunately, I'm at IROS right now, and I left my robot in Boston, so I'll have to wait a week to try that.)
Our robot publishes telemetry at 200 Hz, the SICK publishes scans at 25 Hz; I wonder if that is causing a problem.
What else should I look at?
EDIT :
Here is the gmapping result:
gmapping result http://answers.ros.org/upfiles/14109570815496716.png
Here is the accumulation of laser scans in rviz:
Laser scans http://answers.ros.org/upfiles/14109573119452673.png
This looks much more like what I expected when I wheeled my robot down the hall (on the right), turned right, wheeled down the corridor, round the corner, down to the lunchroom, and back again. Clearly, there is some drift in the odometry as I return to my starting point, but it doesn't look that bad to me.
Originally posted by wpd on ROS Answers with karma: 249 on 2014-09-16
Post score: 0
Original comments
Comment by wpd on 2014-09-16:
I just found this question where the author posted code setting the x and y values in his odom_trans structure to -x and -y respectively. Is that common/required practice for publishing odom->base_link frames?
Comment by bvbdort on 2014-09-17:
Placement of the laser does not affect gmapping; it can handle the laser being upside down or not. Try running gmapping at a reduced rate: rosbag play -r 0.25 halls.bag --clock. Also, please share the map you are currently getting.
Comment by wpd on 2014-09-17:
Thank you for your reply. I would expect that slam_gmapping should work regardless of the orientation and placement of the laser. The results I get look so bad that I expect I have set something up incorrectly... (continued next comment)
Comment by wpd on 2014-09-17:
Perhaps I specified the placement or orientation incorrectly -- I think they're correct, but perhaps somebody else could look at my description (upside down, 0.267m forward and 0.256m right of center) and my static transform and tell me I specified it incorrectly. (continued...)
Comment by wpd on 2014-09-17:
Perhaps I specify the odometry incorrectly. I'm confused by the question I referenced above where the author used -x and -y in the TF transform. That doesn't make sense to me, but perhaps that's what I'm supposed to do.
Comment by wpd on 2014-09-17:
Perhaps the lms100 is reporting distances in mm instead of meters. Again, I don't think any of these "Perhaps"'s are true. But they represent the paths I've taken to try to figure out what's going on. I'm hoping somebody will read this thread and suggest something else for me to check.
|
Hello,
I'm having a problem with catkin_make. When I run it I get:
-- Using these message generators: gencpp;genlisp;genpy
-- tum_ardrone: 1 messages, 5 services
-- Configuring done
CMake Error at rosberry_pichopter/CMakeLists.txt:94 (add_library):
Cannot find source file:
src/rosberry_pichopter/joy/src/servo.cpp
Tried extensions .c .C .c++ .cc .cpp .cxx .m .M .mm .h .hh .h++ .hm .hpp
.hxx .in .txx
-- Build files have been written to: /home/donni/catkin_ws/build
make: *** [cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed
With a bunch of stuff before it. I don't know why I'm getting the error, because servo.cpp is in catkin_ws/src/rosberry_pichopter/joy/src/servo.cpp.
I'm using hydro.
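For context, paths passed to add_library() are resolved relative to the directory containing the CMakeLists.txt that invokes it. Since the failing call is in rosberry_pichopter/CMakeLists.txt, the path would be stated relative to that package directory; a sketch (target name hypothetical):

```cmake
# This CMakeLists.txt lives in .../catkin_ws/src/rosberry_pichopter/,
# so source paths are relative to that directory, not to the workspace src/.
add_library(servo joy/src/servo.cpp)
```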
Originally posted by dshimano on ROS Answers with karma: 129 on 2014-09-16
Post score: 0
|
I am testing the arm_navigation's collision detection for a virtual object added to the planning scene and a virtual object attached to the robot. When using CYLINDER models, the collision detection fails; the results are OK when using BOX models. Using CYLINDER models is better in my case, so if somebody knows the reason, please tell me.
Here is the detail:
Based on the tutorial: http://wiki.ros.org/arm_navigation/Tutorials
I modified a bit because of the difference of ROS version.
My ROS version: Groovy
Robot: Simulated PR2
Language: Python
First, I added a virtual CYLINDER object to the planning scene through the /environment_server/set_planning_scene_diff service. Also, I attached a virtual CYLINDER object to the robot's left gripper. Then, I checked the collision during moving the robot's arm through the planning_scene_validity_server/get_state_validity service.
It could detect collisions between 1. the robot's body and the virtual scene object, and 2. the virtual robot's object and the robot's body. However, it could not detect collisions between the virtual robot's object and the virtual scene object. I tested to change the SetPlanningSceneDiffRequest.operations.collision_operations parameter, but it did not work.
Here is a video to show this problem:
http://youtu.be/oRmfEwKrD9s (CYLINDER case)
By changing the objects to BOX models, it works as I expected.
The video is here: http://youtu.be/YH-MAvbKm9M (BOX case)
Many thanks!
Update: Combinations of (CYLINDER and BOX) or (BOX and CYLINDER) work well. Only the (CYLINDER and CYLINDER) case is weird.
Originally posted by akihiko on ROS Answers with karma: 113 on 2014-09-16
Post score: 2
Original comments
Comment by David Lu on 2014-09-17:
I know this was a bug I reported way back in the day. No idea what became of it or even where I reported it.
Comment by akihiko on 2014-09-17:
Thanks for the information.
|
Hi everyone,
I have a setup with a robot-arm and 3D-Sensors observing the workspace. I want the arm to execute some Pick & Place scenarios, while I move through the workspace.
Now, while planning the octomap gets respected, the created plans avoid any collisions.
But my question is: would MoveIt check whether an obstacle has moved into the planned path, thereby making it invalid?
Here I read about the StateValidity service, but I am wondering whether I have to implement it myself or whether it happens automatically?
Thanks in advance,
Rabe
Originally posted by Rabe on ROS Answers with karma: 683 on 2014-09-17
Post score: 0
|
Hello All,
I am developing a ground station application for a mobile robot. I usually create the qt-ros package using catkin_create_qt_pkg. I recently updated to qtcreator 5.0.2 and QtQuick 2, since I want to use a QML UI for the interface and C++ for the logic part. I changed my CMake to be able to use Qt5. However, I still get errors whenever I use any QtQuick 2 library.
For example when I try to use qquickview to define a view:
QQuickView *view = new QQuickView();
I get this error: error: undefined reference to `QQuickView::QQuickView(QWindow*)'
My question is: how can I solve this?
And is it possible to use Qt5/QML with CMake to develop GUIs for ROS applications?
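An undefined reference to a QQuickView symbol usually means the Qt5Quick module is found at configure time but never linked. A CMake sketch of the missing pieces (target and file names hypothetical):

```cmake
# Locate the Qt5 QML/Quick modules and link their imported targets.
find_package(Qt5Quick REQUIRED)
find_package(Qt5Qml REQUIRED)

add_executable(ground_station src/main.cpp)
target_link_libraries(ground_station Qt5::Quick Qt5::Qml ${catkin_LIBRARIES})
```

The Qt5::Quick imported target carries both the include paths and the library, so no manual include_directories() call is needed for it.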
Originally posted by AmiraJr on ROS Answers with karma: 28 on 2014-09-17
Post score: 0
|
Hi, I'm currently doing position detection. Using OpenCV color (red) detection, I detect the red object; now I want to publish it on tf and show where it is and the distance between the object and the camera.
Originally posted by chiongsterx on ROS Answers with karma: 33 on 2014-09-17
Post score: 0
|
Is anyone using the xv-11 lidar unit with their robot? I am in the process of incorporating the lidar unit with my bot and would like to share information.
Thank you
Originally posted by Morpheus on ROS Answers with karma: 111 on 2014-09-17
Post score: 0
|
Hi,
I have a robot which is my ros_master and my local machine which is able to list the nodes and topics and also do rosrun. I have ROS_MASTER_URI, ROS_HOSTNAME and ROS_IP correctly set on both machines. However, when I want to start a package on my local machine using roslaunch (in my case I am trying to launch gmapping), I have problems: my local machine tries to run roscore locally and it gives me errors.
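For reference, roslaunch only spawns a local roscore when it cannot contact the master named by ROS_MASTER_URI in that same shell, so it is worth re-checking the variables in the exact terminal used for roslaunch; a sketch (IP addresses hypothetical):

```shell
# On the local machine, in the same shell that will run roslaunch:
export ROS_MASTER_URI=http://192.168.1.10:11311  # the robot (ros_master)
export ROS_IP=192.168.1.20                       # this machine's own IP
```

If these are only set in ~/.bashrc, make sure the file has been sourced in the terminal where roslaunch is invoked.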
Thanks in advance for helping me out.
Originally posted by zeinab on ROS Answers with karma: 88 on 2014-09-17
Post score: 0
|
Hey everyone,
I have the issue that, while running MoveIt, I get transform errors:
[ERROR] [1410959502.463512661]: Transform error: Lookup would require extrapolation into the future. Requested time 1410959502.460805976 but the latest data is at time 1410959502.410336017, when looking up transform from frame [finger_1_link_0] to frame [camera_top/camera_top_depth_optical_frame]
Now, following this guide, I debugged the transforms, I appended the results below. What I was wondering is: Why is the chain so long and looped? Might this be the root of the issue?
RESULTS: for camera_top/camera_top_depth_optical_frame to finger_1_link_0
Chain is: segment_0 -> camera_top/camera_top_link -> camera_top/camera_top_rgb_frame -> camera_top/camera_top_link -> camera_front/camera_front_link -> segment_0 -> camera_top/camera_top_depth_frame -> camera_front/camera_front_link -> palm -> segment_7 -> camera_front/camera_front_depth_frame -> camera_front/camera_front_rgb_frame -> finger_1_link_1 -> finger_1_link_0 -> finger_1_link_2 -> finger_2_link_1 -> finger_2_link_0 -> finger_2_link_2 -> finger_middle_link_1 -> finger_middle_link_0 -> finger_middle_link_2 -> palm -> palm -> palm -> segment_6 -> segment_0 -> segment_1 -> segment_2 -> segment_3 -> segment_4 -> segment_5
Net delay avg = 0.055249: max = 0.117458
Frames:
Frame: camera_front/camera_front_depth_frame published by /camera_front_base_link Average Delay: -0.00940285 Max Delay: 0
Frame: camera_front/camera_front_link published by /second_kinect_broadcaster Average Delay: -0.0993219 Max Delay: 0
Frame: camera_front/camera_front_link published by /second_kinect_broadcaster Average Delay: -0.0993219 Max Delay: 0
Frame: camera_front/camera_front_rgb_frame published by /camera_front_base_link_1 Average Delay: -0.00940964 Max Delay: 0
Frame: camera_top/camera_top_depth_frame published by /camera_top_base_link Average Delay: -0.00945783 Max Delay: 0
Frame: camera_top/camera_top_link published by /camera_top_link_broadcaster Average Delay: -0.00942012 Max Delay: 0
Frame: camera_top/camera_top_link published by /camera_top_link_broadcaster Average Delay: -0.00942012 Max Delay: 0
Frame: camera_top/camera_top_rgb_frame published by /camera_top_base_link_1 Average Delay: -0.00942155 Max Delay: 0
Frame: finger_1_link_0 published by /robot_tf_state_publisher Average Delay: -0.496956 Max Delay: 0
Frame: finger_1_link_1 published by /robot_tf_state_publisher Average Delay: 0.00512914 Max Delay: 0.0639241
Frame: finger_1_link_2 published by /robot_tf_state_publisher Average Delay: -0.496976 Max Delay: 0
Frame: finger_2_link_0 published by /robot_tf_state_publisher Average Delay: -0.496954 Max Delay: 0
Frame: finger_2_link_1 published by /robot_tf_state_publisher Average Delay: 0.00513452 Max Delay: 0.0639276
Frame: finger_2_link_2 published by /robot_tf_state_publisher Average Delay: -0.496969 Max Delay: 0
Frame: finger_middle_link_0 published by /robot_tf_state_publisher Average Delay: -0.496951 Max Delay: 0
Frame: finger_middle_link_1 published by /robot_tf_state_publisher Average Delay: 0.00513811 Max Delay: 0.0639306
Frame: finger_middle_link_2 published by /robot_tf_state_publisher Average Delay: -0.496963 Max Delay: 0
Frame: palm published by /robot_tf_state_publisher Average Delay: -0.496948 Max Delay: 0
Frame: palm published by /robot_tf_state_publisher Average Delay: -0.496948 Max Delay: 0
Frame: palm published by /robot_tf_state_publisher Average Delay: -0.496948 Max Delay: 0
Frame: palm published by /robot_tf_state_publisher Average Delay: -0.496948 Max Delay: 0
Frame: segment_1 published by /robot_tf_state_publisher Average Delay: 0.00514182 Max Delay: 0.0639336
Frame: segment_2 published by /robot_tf_state_publisher Average Delay: 0.00514508 Max Delay: 0.0639368
Frame: segment_3 published by /robot_tf_state_publisher Average Delay: 0.00514861 Max Delay: 0.0639399
Frame: segment_4 published by /robot_tf_state_publisher Average Delay: 0.00515163 Max Delay: 0.063943
Frame: segment_5 published by /robot_tf_state_publisher Average Delay: 0.00515438 Max Delay: 0.0639463
Frame: segment_6 published by /robot_tf_state_publisher Average Delay: 0.00515731 Max Delay: 0.0639497
Frame: segment_7 published by /robot_tf_state_publisher Average Delay: 0.00515987 Max Delay: 0.0639524
All Broadcasters:
Node: /camera_front_base_link 99.3663 Hz, Average Delay: -0.00940285 Max Delay: 0
Node: /camera_front_base_link_1 99.4127 Hz, Average Delay: -0.00940964 Max Delay: 0
Node: /camera_front_base_link_2 99.3969 Hz, Average Delay: -0.00941993 Max Delay: 0
Node: /camera_front_base_link_3 99.4064 Hz, Average Delay: -0.00940085 Max Delay: 0
Node: /camera_top_base_link 99.4075 Hz, Average Delay: -0.00945783 Max Delay: 0
Node: /camera_top_base_link_1 99.4135 Hz, Average Delay: -0.00942155 Max Delay: 0
Node: /camera_top_base_link_2 99.4092 Hz, Average Delay: -0.00940796 Max Delay: 0
Node: /camera_top_base_link_3 99.4122 Hz, Average Delay: -0.00936636 Max Delay: 0
Node: /camera_top_link_broadcaster 99.404 Hz, Average Delay: -0.00942012 Max Delay: 0
Node: /pinch_frame_broadcaster 99.4173 Hz, Average Delay: -0.00934003 Max Delay: 0
Node: /robot_tf_state_publisher 110.242 Hz, Average Delay: -0.451271 Max Delay: 0.012499
Node: /second_kinect_broadcaster 9.94165 Hz, Average Delay: -0.0993219 Max Delay: 0
Here is my transform tree:
I'm thankful for any hints,
Rabe
Originally posted by Rabe on ROS Answers with karma: 683 on 2014-09-17
Post score: 0
|
Using fuerte/rosbuild, I was able to "cheat" and both generate cpp/py protobuf files and link/import them from other projects. I say "cheat" because the generated artifacts ended up in the package source directory and were easy to find. But hydro/catkin does things "right" and I can't cheat anymore.
Say I have two packages: my_package_msgs and my_package_nodes. In my_package_msgs, there's a proto directory with *.proto message files. I use an add_custom_command() call to generate the protobuf artifacts and output directories. Here's basically how its written right now:
cmake_minimum_required(VERSION 2.8.3)
project(my_package_msgs)
find_package(catkin REQUIRED)
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake)
find_package(ProtocolBuffers REQUIRED)
set(proto_dir ${PROJECT_SOURCE_DIR}/proto)
set(proto_files ${proto_dir}/Message0.proto
${proto_dir}/Message1.proto
${proto_dir}/Message2.proto)
message(STATUS "Proto Source Dir: ${proto_dir}")
message(STATUS "Proto Source Files: ${proto_files}")
# Set up destination directories
catkin_destinations()
set(proto_gen_dir ${CATKIN_DEVEL_PREFIX}/${CATKIN_GLOBAL_INCLUDE_DESTINATION}/proto_gen)
set(proto_gen_cpp_dir ${proto_gen_dir}/cpp/include/${PROJECT_NAME})
set(proto_gen_python_dir ${proto_gen_dir}/python)
file(MAKE_DIRECTORY ${proto_gen_dir})
file(MAKE_DIRECTORY ${proto_gen_cpp_dir})
file(MAKE_DIRECTORY ${proto_gen_python_dir})
set(protogen_include_dirs ${proto_gen_cpp_dir}/../ ${proto_gen_python_dir})
message(STATUS "Proto Include Dirs: ${protogen_include_dirs}")
# Create lists of files to be generated.
set(proto_gen_cpp_files "")
set(proto_gen_python_files "")
foreach(proto_file ${proto_files})
get_filename_component(proto_name ${proto_file} NAME_WE)
list(APPEND proto_gen_cpp_files ${proto_gen_cpp_dir}/${proto_name}.pb.h ${proto_gen_cpp_dir}/${proto_name}.pb.cc)
list(APPEND proto_gen_python_files ${proto_gen_python_dir}/${proto_name}_pb2.py)
endforeach(proto_file ${proto_files})
# Run protoc and generate language-specific headers.
add_custom_command(
COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --proto_path=${proto_dir} --cpp_out=${proto_gen_cpp_dir} --python_out=${proto_gen_python_dir} ${proto_files}
DEPENDS ${PROTOBUF_PROTOC_EXECUTABLE} ${proto_files}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
OUTPUT ${proto_gen_cpp_files} ${proto_gen_python_files}
)
# Create proto library for linking.
include_directories(${PROTOBUF_INCLUDE_DIR} ${PROTOBUF_INCLUDE_DIR}/../../)
add_library(${PROJECT_NAME}_proto ${proto_gen_cpp_files})
target_link_libraries(${PROJECT_NAME}_proto ${PROTOBUF_LIBRARY})
catkin_package(
INCLUDE_DIRS ${protogen_include_dirs}
LIBRARIES ${PROJECT_NAME}_proto
)
install(TARGETS ${PROJECT_NAME}_proto
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
install(DIRECTORY ${proto_gen_cpp_dir}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
FILES_MATCHING PATTERN "*.h"
)
install(DIRECTORY ${proto_gen_python_dir}/
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
I tried using catkin_install_python(), but it complains about files not existing (because they haven't been generated). I haven't tried using catkin_python_setup() because of the same problem, and because the output isn't a normal python library layout.
Then, my_package_nodes 'run_depend's on my_package_msgs. But, when I attempt to rosrun one of the python executables in my_package_nodes, I get errors that say python modules aren't found:
Traceback (most recent call last):
File "/home/dgooding/catkin_ws/src/my_packages/my_package_nodes/src/Node2.py", line 8, in <module>
import Message2_pb2
ImportError: No module named Message2_pb2
So what's the correct (not cheating) way to generate cpp and python files in one catkin package, and have a second package access them?
edit: I've adopted some techniques from http://answers.ros.org/question/123221/where-should-generated-header-files-be-generated-to-how-can-i-then-export-them-with-catkin/ that have helped... but not completely done yet.
edit2: I tried making use of catkin_python_setup() (using some configure_file() magic to find the generated python files), but I think catkin_python_setup() is being executed before the code generation happens because I end up with an empty directory in the dist-packages area in devel space.
Originally posted by dustingooding on ROS Answers with karma: 139 on 2014-09-17
Post score: 2
|
Hello, I am interested in working with ROS as part of the OPW internship this winter. Can somebody please help me with how I should start working, or whether there is any person I can contact for further information?
Thanks ,
Regards,
Myra
Originally posted by Myra on ROS Answers with karma: 1 on 2014-09-17
Post score: 0
|
Hello,
I am putting in place a mobile robot with mecanum wheels. For my robot to be driven, I would like to be able to specify a desired velocity for the platform (a twist) and have the robot decide itself the wheel velocities (joint velocities) that lead to this desired twist: this is what I call the mobile platform's IK. This IK is not so complex to compute, but I am afraid I do not understand where to perform the computation in ROS. So what is the best practice to implement this mobile IK? And where should I put the code?
I had a look at the husky packages but could not find anything relevant. I am pretty sure many robot simulations do have the same need and implement this, no?
Anyone having a better understanding than me? Any example code?
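For what it's worth, the IK itself is only a few lines; here is a minimal Python sketch (the geometry parameters r, l, w are hypothetical placeholders) mapping a body twist to wheel angular velocities:

```python
def mecanum_ik(vx, vy, wz, r=0.05, l=0.2, w=0.15):
    """Map a body twist (vx, vy in m/s, wz in rad/s) to mecanum wheel
    angular velocities in rad/s.
    r: wheel radius; l, w: half wheelbase and half track width (meters).
    Returned order: front-left, front-right, rear-left, rear-right."""
    k = l + w
    return (
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    )

# Pure forward motion drives all four wheels at the same speed (~20 rad/s here).
print(mecanum_ik(1.0, 0.0, 0.0))
```

In a ROS setup this mapping typically lives in the base driver or hardware-interface layer: subscribe to a geometry_msgs/Twist (e.g. cmd_vel), run the IK, and emit joint velocity commands.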
Thanks,
Antoine.
Originally posted by arennuit on ROS Answers with karma: 955 on 2014-09-17
Post score: 0
|
I don't know how to publish images from a camera.
Can you help me?
I use ROS Fuerte.
I want to publish from a camera but I don't know how!
Originally posted by turtle on ROS Answers with karma: 17 on 2014-09-17
Post score: 0
Original comments
Comment by bvbdort on 2014-09-17:
Please write more about what your looking for.
Comment by joq on 2014-09-18:
There are many ROS camera driver packages. Use one that supports your device: http://wiki.ros.org/Sensors/Cameras .
Comment by joq on 2014-09-18:
If you tell us what kind of camera you have, someone can probably suggest a good driver.
|
Hi dear ROS community!
I started learning ROS a few months ago. Currently I'm trying to create my first launch file, which starts the gscam and image_view packages to open a camera and visualize the video stream. I made a launch file; however, gscam and image_view do not get connected. The gscam node launches properly, but not the image_view one; it seems that it is not receiving the parameter. If I launch image_view manually ($ rosrun image_view image_view image:=/camera/image_raw) everything works fine and I can visualize the video stream from my camera. How can I fix my launch file? Thank you!
<launch>
<node pkg="gscam" type="gscam" name="gscam01">
<env name="GSCAM_CONFIG" value="v4l2src device=/dev/video0 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace" />
</node>
<node pkg="image_view" type="image_view" name="image_view01">
<param name="image" type="string" value="/camera/image_raw" />
</node>
</launch>
SOLUTION:
<launch>
<node pkg="gscam" type="gscam" name="gscam01">
<env name="GSCAM_CONFIG" value="v4l2src device=/dev/video0 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace" />
</node>
<node pkg="image_view" type="image_view" name="image_view01">
<remap from="image" to="/camera/image_raw"/>
</node>
</launch>
Originally posted by diegomex on ROS Answers with karma: 13 on 2014-09-17
Post score: 0
|
I've tried searching for this but have been unsuccessful so far.
Is it possible to see a topic's value programmatically?
Basically, I need the functionality of rostopic echo, but I need to add it to my code.
It's a topic that I created myself, so I can't just create a client and call a service...
And I also do not want to subscribe to the topic and create a callback function.
I just want to know the topic's value at a given point in my code.
Is this possible?
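One common pattern (assuming rospy; the topic name and message type here are hypothetical) is rospy.wait_for_message, which blocks until a single message arrives without keeping a subscription around:

```python
# Illustrative sketch: the import is guarded so the snippet stands on its
# own without a ROS installation; topic name and type are hypothetical.
try:
    import rospy
    from std_msgs.msg import Int32
except ImportError:
    rospy = None

def read_topic_once(topic="/my_topic", timeout=5.0):
    """Block until one message arrives on the topic, then return it --
    the programmatic equivalent of a single 'rostopic echo'."""
    return rospy.wait_for_message(topic, Int32, timeout=timeout)
```

Internally wait_for_message just subscribes, waits for one callback, and unsubscribes; the non-blocking alternative remains a regular subscriber whose callback stores the latest message in a variable you read later.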
Originally posted by an on ROS Answers with karma: 1 on 2014-09-17
Post score: 0
|
Hi guys
I'm working on a buffer service that saves a bunch of data I get from a detection node.
I also want to save the tf data from a moving object in that service.
Now my question is: should I avoid the while(ros::ok()) loop I need for listening to the tf data by writing a new node that sends the tf data to the service when it changes, or is it usual to use while loops in ROS services? I'm just a little worried that the callback from the client would be delayed or that the program would get stuck in the loop.
Thanks
Alex
Originally posted by AlexKolb on ROS Answers with karma: 30 on 2014-09-17
Post score: 0
|
Hello,
I'm debugging the navigation stack set up on my robot. My question is about amcl_pose. This topic is published by amcl (or fake_localization), and in my robot amcl_pose is being published to by the localization module too, but no node is subscribed to amcl_pose.
As I understand it, the local planner should use amcl_pose. I'm using DWAPlannerROS in the navigation stack, and DWAPlannerROS is only subscribed to odom.
So, the question is which component is using the result of the localization module.
UPDATE 1: Another suspicious thing with amcl (or fake_localization) is that, although it is publishing to /particlecloud, I cannot visualize the particles in Rviz using the /particlecloud topic. Nothing appears in Rviz.
Thanks
Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-09-18
Post score: 2
Original comments
Comment by Rick Armstrong on 2014-09-18:
Regarding the /particlecloud visualization in Rviz: I vaguely recall having trouble seeing this topic and I think it was because I had the wrong FixedFrame selected in Rviz. You can see the /particlecloud topic in Rviz if you have the right settings.
Comment by ROSCMBOT on 2014-09-19:
My FixedFrame is map. and everything works and looks fine in rviz, except particlecloud
Comment by Rick Armstrong on 2014-09-26:
Please forgive if this is a dumb question, but have you given the robot an initialpose? Another: does your particlecloud topic show up in the 'By Topic ' tab of the Add dialog in Rviz?
|
I want to buy a real robot so that I could test out some of my source code and see how it works on a real robot. I wanted to buy a TurtleBot but it is very expensive, around $2000. Can anybody suggest any cheaper ones?
Thanks
Originally posted by ish45 on ROS Answers with karma: 151 on 2014-09-18
Post score: 2
Original comments
Comment by Rabe on 2014-09-18:
You should maybe specify what your robot should be able to do, which sensors it should have, and so on. You could look into kits that transform your smartphone into a driving robot. That would probably be the best bang for the buck.
|
For developing low-level interfaces for a new robot, are there existing standard messages for battery level (in %) or voltage, or thresholds (low) / flags (charging/discharging)?
Originally posted by mherrb on ROS Answers with karma: 148 on 2014-09-18
Post score: 1
|
Hi all,
Is it possible to specify loops with different frequencies in ros_control?
More specifically, I am thinking about a low-level loop for motor control (something like 200-1000 Hz) and a somewhat higher-level loop for differential or mecanum drive control (something between 30 Hz and 100 Hz).
These loops can both use ros_control, but the higher the level, the looser the requirements on loop speed...
Thanks,
Antoine.
Originally posted by arennuit on ROS Answers with karma: 955 on 2014-09-18
Post score: 1
|
Hi dear ROS community!
I would like to learn how to use multiple RGB cameras with ROS on a single PC. I already know how to use a single camera with the gscam package; I can read the video stream on the /camera/image_raw topic.
However, I couldn't find any information about the use of multiple cameras with ROS (let's say 3). I would like to create a node for each camera (e.g. /camera01/image_raw, /camera02/image_raw, etc.) and then process them separately. I don't know if this is possible using gscam.
In general, what is the proper way (good practice) to use multiple RGB cameras with ROS?
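One conventional pattern (device paths and node names hypothetical) is to launch one gscam node per camera inside its own namespace via a <group> tag, which yields /camera01/image_raw, /camera02/image_raw, and so on; a launch-file sketch:

```xml
<launch>
  <group ns="camera01">
    <node pkg="gscam" type="gscam" name="gscam">
      <env name="GSCAM_CONFIG"
           value="v4l2src device=/dev/video0 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace" />
    </node>
  </group>
  <group ns="camera02">
    <node pkg="gscam" type="gscam" name="gscam">
      <env name="GSCAM_CONFIG"
           value="v4l2src device=/dev/video1 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace" />
    </node>
  </group>
</launch>
```

Each node's relative topic names are then pushed down into its group's namespace, so downstream nodes can subscribe to each stream independently.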
Thank you very much!
Originally posted by diegomex on ROS Answers with karma: 13 on 2014-09-18
Post score: 1
Original comments
Comment by Rabe on 2014-09-18:
I don't know about RGB, but for RGBD it is possible to use multiple Kinects. Look into the openni_launch package. There you can give each camera its own namespace, like you proposed in your question.
|
My project compiles just fine, and I am expecting ROS to generate my docs, but I keep receiving a failure!
The console log is located here:
http://jenkins.ros.org/job/devel-hydro-ar_sys/61/console
What shall I do?
Originally posted by sahloul on ROS Answers with karma: 18 on 2014-09-18
Post score: 0
|
I've finished my OpenCV program! I want to publish to another PC (LAN).
What should I do?
Here is my OpenCV program:
/*
* roscolor.cpp
*
* Created on: Sep 5, 2014
* Author: dell
*/
#include <ros/ros.h>
#include <vector>
#include <stdio.h>
#include <math.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <sensor_msgs/Image.h>
#include <opencv2/opencv.hpp>
using namespace cv;
int minR = 0;
int minG = 0;
int minB = 0;
int maxR = 255;
int maxG = 255;
int maxB = 255;
void imageCallback(const sensor_msgs::ImageConstPtr& color_img)
{
cv_bridge::CvImagePtr img_ptr;
cv::Mat img_rgb;
try
{
img_ptr = cv_bridge::toCvCopy(color_img,
sensor_msgs::image_encodings::BGR8);
img_rgb = img_ptr->image;
}
catch (cv_bridge::Exception& e)
{
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}
cvCreateTrackbar("min R:","trackbar",&minR, 255);
cvCreateTrackbar("min G:","trackbar",&minG, 255);
cvCreateTrackbar("min B:","trackbar",&minB, 255);
cvCreateTrackbar("max R:","trackbar",&maxR, 255);
cvCreateTrackbar("max G:","trackbar",&maxG, 255);
cvCreateTrackbar("max B:","trackbar",&maxB, 255);
//cv::Mat img_hsv;
cv::Mat img_binary;
CvMoments colorMoment;
cv::Scalar min_vals(minR, minG, minB);
cv::Scalar max_vals(maxR, maxG, maxB);
//cv::cvtColor(img_rgb, img_hsv, CV_BGR2HSV);
cv::inRange(img_rgb, min_vals, max_vals, img_binary);
dilate( img_binary, img_binary, getStructuringElement(MORPH_ELLIPSE, Size(10, 10)) );
/*======================= TOA DO ================================*/
colorMoment = moments(img_binary);
double moment10 = cvGetSpatialMoment(&colorMoment, 1, 0);
double moment01 = cvGetSpatialMoment(&colorMoment, 0, 1);
double area = cvGetCentralMoment(&colorMoment, 0, 0);
float posX = (moment10/area);
float posY = (moment01/area);
/*================= HIEN THI =================================*/
printf("1. x-Axis %f y-Axis %f Area %f\n", moment10, moment01, area);
printf("2. x %f y %f \n\n", posX , posY);
cv::imshow("TRACKING COLOR", img_binary);
cv::imshow("RGB image", img_rgb);
cv::waitKey(3);
}
int main(int argc, char **argv)
{
ros::init(argc, argv, "HSV_image");
ros::NodeHandle nh;
cvNamedWindow("TRACKING COLOR", 2 );
cvNamedWindow("RGB image", 2 );
cvNamedWindow ("trackbar", 2 );
cvStartWindowThread();
image_transport::ImageTransport it(nh);
ros::Subscriber cam_img_sub =nh.subscribe("/gscam/image_raw", 1, &imageCallback);
ros::spin();
cvDestroyWindow("TRACKING COLOR");
cvDestroyWindow("RGB image");
}
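For the LAN part, nothing in the node itself has to change: topics are network-transparent once ROS_MASTER_URI and the IP variables are set consistently on both machines. What the tracker above still needs is an actual publisher for its result. A sketch (assuming rospy, with a guarded import so it reads standalone; the topic name is hypothetical):

```python
# Sketch only: the import is guarded so the snippet stands alone without ROS.
try:
    import rospy
    from geometry_msgs.msg import Point
except ImportError:
    rospy = None

def make_position_publisher(topic="/tracked_color/position"):
    """Advertise a topic carrying the detected blob centroid."""
    return rospy.Publisher(topic, Point)

def publish_position(pub, pos_x, pos_y):
    """Publish the (posX, posY) centroid computed from the image moments."""
    pub.publish(Point(x=pos_x, y=pos_y, z=0.0))
```

Any node on another PC that shares the same ROS master can then subscribe to that topic with no extra networking code.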
Can you help me, please?
Thank you!
Originally posted by turtle on ROS Answers with karma: 17 on 2014-09-18
Post score: 1
|