Is there any IR proximity sensor model/plugin for Gazebo?
Thanks!
Originally posted by z.xing on ROS Answers with karma: 3 on 2014-05-17
Post score: 0
|
I'm running the amcl package on offline data and I want to measure its running time. Is there something like MATLAB's tic and toc commands?
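For reference, a minimal sketch of a MATLAB-style tic/toc in roscpp, using wall-clock time (the node name and the timed section are just placeholders):
#include <ros/ros.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "timing_example");

  ros::WallTime tic = ros::WallTime::now();               // "tic"
  // ... run the code you want to time here ...
  ros::WallDuration elapsed = ros::WallTime::now() - tic; // "toc"

  ROS_INFO("Elapsed wall time: %.6f s", elapsed.toSec());
  return 0;
}
Using ros::Time instead of ros::WallTime would measure simulated /clock time when playing back a bag with use_sim_time set.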
Originally posted by maysamsh on ROS Answers with karma: 139 on 2014-05-17
Post score: 0
|
Hi,
I'm trying to rosmake a self-written node in Hydro on armhf, but it fails on a missing CvBridge.h. "roscd cv_bridge" takes me to /opt/ros/hydro/share/cv_bridge$ and "ls" shows only cmake and package.xml. So where is the header file?
Thanks,
Building vision_opencv and cv_bridge brings up the message that the stacks are not found, and it assumes the new build system is being used. The catkin stack is also not found, even though ros-hydro-catkin is installed and at the newest version. So if Hydro does indeed support rosbuild, why is it asking for catkin?
Originally posted by hvn on ROS Answers with karma: 72 on 2014-05-17
Post score: 0
|
I tried to run pr2 moveit tutorials by launching "roslaunch pr2_moveit_config demo.launch"
But RViz starts and dies within a few seconds, and the console gives the following error:
/opt/ros/hydro/lib/rviz/rviz: symbol lookup error:
/opt/ros/hydro/lib/libmoveit_planning_scene_monitor.so: undefined symbol: _ZN3ros7console5printEPNS0_10FilterBaseEPvNS0_6levels5LevelEPKciS7_S7_z
I updated the MoveIt plugin and tried again, but the problem remains. I am new to ROS and MoveIt.
Please help.
Originally posted by dinesh_sl on ROS Answers with karma: 13 on 2014-05-18
Post score: 0
|
Hello everybody! Where can I find documentation of the 2D map file structure output by map_server (gmapping)?
Thanks
Originally posted by alex920a on ROS Answers with karma: 35 on 2014-05-18
Post score: 0
|
Hi, I am working on a mobile robot with ROS, and I want navigation capability without any odometry data from my motors. hector_slam runs very well and I have no problem with it; I can get the position and yaw of my robot with a Hokuyo laser sensor. The map is published to the /map topic.
For navigation and planning I want to use move_base without map_server and AMCL. I don't have any odom topic.
What does the launch file for doing this look like exactly?
What is the parameter configuration for move_base? There is no update of the costmaps in rviz; it seems it is not using my /map topic for its calculations. (The global costmap is static, the local costmap is not static and windowing is enabled.)
1) How can I send /map to move_base?
2) Why do we need the odom tf (odom => base_link)?
3) Is the odom topic necessary for move_base?
4) What is the parameter configuration for move_base for dynamic SLAM navigation?
5) What does a launch file that makes this work look like?
Thanks for any reply :) ... I'm just a bit confused :(.
Originally posted by edwin on ROS Answers with karma: 36 on 2014-05-18
Post score: 0
Original comments
Comment by David Lu on 2014-05-19:
What version of ROS are you using? Can you post what configuration you're already using?
|
Hi all,
I'm trying to use the MAVLink_ROS package, but I cannot find any related documentation/wiki/tutorial.
Its page on the ROS wiki has been removed, and there is no wiki on GitHub or qgroundcontrol.org either.
Could anyone tell me where I can find some useful information?
Thanks
Originally posted by lanyusea on ROS Answers with karma: 279 on 2014-05-18
Post score: 0
|
I want to add collision objects to the world using a frame that is broadcast by a node. Setting
collision_object.header.frame_id = "task_frame"
results in the planner responding that
Unable to transform from frame 'task_frame' to frame '/base_link'. Returning identity.
I tried setting my
collision_object.header.stamp = ros::Time::now();
but no luck. I know task_frame exists, because I can inspect it with tf_echo just fine. I can also manually transform the poses myself, but the code will quickly become really ugly as the list of objects grows.
Am I using collision_object.header wrong?
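For completeness, a hypothetical sketch of the manual workaround mentioned above: transform the object pose from task_frame into base_link yourself with a tf listener, then publish the CollisionObject expressed in base_link. The function and variable names here are made up.
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PoseStamped.h>
#include <moveit_msgs/CollisionObject.h>

// Sketch only: express an object pose, given in task_frame, in base_link so the
// planner never has to resolve task_frame itself.
moveit_msgs::CollisionObject makeObject(tf::TransformListener& listener,
                                        const geometry_msgs::PoseStamped& pose_in_task_frame)
{
  geometry_msgs::PoseStamped pose_in = pose_in_task_frame;
  pose_in.header.frame_id = "task_frame";
  pose_in.header.stamp = ros::Time(0);   // use the latest available transform

  geometry_msgs::PoseStamped pose_out;
  listener.waitForTransform("base_link", "task_frame", ros::Time(0), ros::Duration(2.0));
  listener.transformPose("base_link", pose_in, pose_out);

  moveit_msgs::CollisionObject object;
  object.header.frame_id = "base_link";  // a frame the planning scene already knows
  object.header.stamp = ros::Time::now();
  object.primitive_poses.push_back(pose_out.pose);
  return object;
}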
frames.png
Originally posted by paturdc on ROS Answers with karma: 157 on 2014-05-19
Post score: 0
Original comments
Comment by Maya on 2014-05-19:
Can you upload an image of the tf tree using $ rosrun tf view_frames and $ evince frames.pdf? And I know it's working, but could you add the output of rosrun tf tf_echo /map /odom?
Comment by paturdc on 2014-05-19:
I don't have /map or /odom transforms, are they required? I'm transforming directly from the base_link of the robot (which is stationary), to a particular configuration of the end_effector. I'll edit my question to include an image of the tf.
Comment by paturdc on 2014-05-19:
Weird, it suddenly started working; I didn't change anything. I suppose there might be some suboptimal way I've set up my transforms/timing/planning_scene somewhere.
Comment by paturdc on 2014-05-19:
And now it is back where I started. I even added a wait for transform before setting my collision_object.header to prove to myself that the frame is coming through. Every node that I can think of can see "task_frame" except my planner. I'm out of ideas.
Comment by Maya on 2014-05-19:
It's probably a typo somewhere but when I look at your image, I don't see any task_frame, I see task_space...
Comment by paturdc on 2014-05-19:
Yeah, it's a typo in the question. But everything is spelled correctly in the code. I imagine it has to do with the timestamp embedded in the collision_object, but I can't find a solution that works.
Comment by Maya on 2014-05-20:
Ok, sorry, that's the end of my knowledge. I have no idea why it does not work :/
|
I'm using the Roboteq SDC2130 Motor Controller and using the ros-hydro-roboteq-driver package and supporting packages (i.e. msgs and diagnostics) to control my motor controller. The package only supports one channel and I'm trying to have ROS support two channels (SDC2130 is a dual channel controller). Before I create a new catkin package of ros-hydro-roboteq-driver and edit the package (I don't want to edit this in /opt/ros/hydro), I was wondering if I'm missing an obvious way to access both channels. (?)
Currently I'm planning to output something like:
rostopic pub -1 (roboteq_cmd_topic) (topic_type) -- (output for Motor 1) (output for Motor 2)
To do this, I'll have to change from float32 to float32[] (MultiArray), but it seems like I'll have to edit the driver code too. I'd hate to have missed an obvious solution to this; if someone is aware of one, please steer me in the right direction. Also, if there is a better alternative to editing the code, I'd really appreciate your advice. I'm new to ROS (2 weeks) and trying to get familiar with it using hardware that I have.
Originally posted by eve on ROS Answers with karma: 13 on 2014-05-19
Post score: 1
Original comments
Comment by Maya on 2014-05-19:
If your catkin workspace is before /opt/ros/hydro in the ROS_PACKAGE_PATH variable, you can git clone the ros-hydro-roboteq-driver package into your catkin workspace and it will be used instead of the /opt/ros/hydro one ;). A helpful trick if you want to work on its code without changing /opt.
Comment by eve on 2014-05-19:
Thanks for the response maya, thats exactly what I'm planning to do. I'm using this reference
http://answers.ros.org/question/9197/for-new-package-downloading/
to git clone to my ~/catkin_ws/src/. Of course I'll have to edit it for Hydro. Good to know I'm heading the right way :)
Comment by Maya on 2014-05-19:
Isn't your package already made to work with catkin ? If yes, you just need to git clone it in your workspace and it should work.
Comment by eve on 2014-05-20:
I sure did! I tried to debug it in my Eclipse IDE but I couldn't run the node; I get the error Invalid node name [~]. I have managed to access both channels thanks to ahendrix though! :)
|
Following these instructions http://wiki.ros.org/rosjava/Tutorials/hydro/Installation, at the step when rosdep install --from-paths src -i -y is executed
I get
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
rosjava_build_tools: No definition of [python-rosinstall-generator] for OS [osx]
This error looks similar to this error.
I checked rosjava_build_tools folder and .rosinstall is missing.
I added rosjava_build_tools to ROS_PACKAGE_PATH following the advice given here.
Any help is appreciated.
EDIT: moving to the next step and calling catkin_make does not produce errors.
Originally posted by Artem on ROS Answers with karma: 709 on 2014-05-19
Post score: 0
|
I try to run:
rosrun rqt_reconfigure rqt_reconfigure
and I get:
CompositePluginProvider.discover() could not discover plugins from provider "<class 'rqt_gui.rospkg_plugin_provider.RospkgPluginProvider'>":
Traceback (most recent call last):
File "/opt/ros/hydro/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 58, in discover
plugin_descriptors = plugin_provider.discover(discovery_data)
File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_gui/ros_plugin_provider.py", line 65, in discover
plugin_descriptors += self._parse_plugin_xml(package_name, plugin_xml)
File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_gui/ros_plugin_provider.py", line 141, in _parse_plugin_xml
module_name, class_from_class_type = attributes['class_type'].rsplit('.', 1)
ValueError: need more than 1 value to unpack
qt_gui_main() found no plugin matching "rqt_reconfigure"
How can I solve this?
EDIT:
rqt --list-plugins
returns:
CompositePluginProvider.discover() could not discover plugins from provider "<class 'rqt_gui.rospkg_plugin_provider.RospkgPluginProvider'>":
Traceback (most recent call last):
File "/opt/ros/hydro/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 58, in discover
plugin_descriptors = plugin_provider.discover(discovery_data)
File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_gui/ros_plugin_provider.py", line 65, in discover
plugin_descriptors += self._parse_plugin_xml(package_name, plugin_xml)
File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_gui/ros_plugin_provider.py", line 141, in _parse_plugin_xml
module_name, class_from_class_type = attributes['class_type'].rsplit('.', 1)
ValueError: need more than 1 value to unpack
rqt_dep.ros_pack_graph.RosPackGraph
rqt_image_view/ImageView
rqt_py_console.py_console.PyConsole
rqt_rviz/RViz
rqt_shell.shell.Shell
rqt_web.web.Web
Originally posted by McMurdo on ROS Answers with karma: 1247 on 2014-05-19
Post score: 4
Original comments
Comment by 130s on 2014-05-19:
Do you have rqt_reconfigure package installed on your machine? What does this command return? $ rospack find rqt_reconfigure?
Comment by McMurdo on 2014-06-10:
Of course, yes!
It returns:
/opt/ros/hydro/share/rqt_reconfigure
|
Hello,
I'm trying to run a node with rosrun (it works when I manually run ./program inside the build directory). Yet it doesn't work when called by rosrun: it finds the package directory by tab completion, but not the executable. How can I fix this? I need it to work in launch files.
[rosrun] Found the following, but they're either not files,
[rosrun] or not executable:
[rosrun] /home/user/ros_ws/catkin_ws/src/ros_programme
Originally posted by delta785 on ROS Answers with karma: 72 on 2014-05-19
Post score: 2
Original comments
Comment by LikeSmith on 2014-05-19:
did you run the setup script for your catkin workspace?
Comment by delta785 on 2014-05-19:
Yes. The problem lies in the location of the executables. They're in catkin_ws/build/package_name and they should be somewhere in devel/lib I think. I guess it's something with the CMake file inside package source, but no idea what exactly.
Comment by dornhege on 2014-05-19:
What is the exact command line of your rosrun? What is that package setup and ROS_PACKAGE_PATH?
Comment by delta785 on 2014-05-19:
rosrun ros_programme ros_programme - doesn't work
./ros_programme (build/ros_programme directory) works
I'm not sure what you mean by package setup, I'm a little bit green here. After sourcing many things, this is the ROS_PACKAGE_PATH.
/home/user/ros_ws/catkin_ws/install/share:/home/omikron/ros_ws/catkin_ws/install/stacks:/home/user/ros_ws/catkin_ws/src:/opt/ros/hydro/share:/opt/ros/hydro/stacks
Comment by dornhege on 2014-05-19:
Where is your code and "sourcing many things" would be the package setup. Is the binary in any of the paths in your package path? e.g. .../install
Comment by delta785 on 2014-05-19:
Additionally, the packages I put inside catkin_ws (downloaded from miscellaneous locations) usually allow me to easily use rosrun. This is the CMakeLists.txt of my package (also downloaded, but clearly somehow incomplete)
http://pastebin.com/XNkxU93i
Comment by delta785 on 2014-05-19:
The binary is only in build directory (catkin_ws/build/ros_programme).
Comment by Hamid Didari on 2014-05-19:
did you type ($source devel/setup.bash) in your catkin_ws before typing rosrun ...
|
Dear all,
I am currently porting some code from fuerte to hydro and came across a problem that I do not fully understand.
In fuerte I used the arrow marker, and there the length of the arrow is encoded as scale.z. On hydro, scale.z seems to scale the arrow in all dimensions.
The ROS wiki see here says:
"scale.x is the arrow width, scale.y is the arrow height and scale.z is the arrow length ".
However, if I set scale.x and scale.y to the same value (0.1), I get an upward-oval shaped arrow, which is twice as high as wide!?!?
Does anyone have some insight on this?
Best
Georg
Originally posted by Georg on ROS Answers with karma: 328 on 2014-05-19
Post score: 0
|
I had installed Ogre 1.8 and then installed Ogre 1.9 as well.
Now, when I build Gazebo 2.0, I get this error:
Bad Ogre3d version: gazebo using 1.9.0
ogre version has known bugs in runtime
(issue #996). Please use 1.7 or 1.8
series
I don't know how to make Gazebo use Ogre 1.8.
I tried using cmake-gui to look for a variable with a name like "ogre" or "path", but I could not find one.
I also tried:
export OGRE_HOME=/usr/local/ogre1.8
but it still didn't work.
Thanks for help!
Originally posted by Zheng yo chen on ROS Answers with karma: 1 on 2014-05-19
Post score: 0
|
Hi,
It seems I cannot retrieve an existing parameter using roscpp, neither using the node handle nor ros::param. What I want to do is access parameters of a different node. For example, let's consider a node A which wants to read the parameter of a node B, let's say move_base.
So in node A I want to read the parameter /move_base/local_costmap/height, for example. If I use
rosparam get /move_base/local_costmap/height
I get the value just fine.
If I use in node A's code either
ros::NodeHandle n;
std::string param("/move_base/local_costmap/height"), param_value;
n.getParam(param,param_value);
or
std::string param("/move_base/local_costmap/height"), param_value;
ros::param::get(param,param_value);
I do not get the parameter. On the other hand both
ros::param::has(param);
and
nh.hasParam(param);
return 1.
I know that this is not the preferred way of accessing other nodes' parameters, but I need it for documentation in a fixed setup. The limitations, e.g., having to know a node's absolute path, are acceptable in this case. Can someone point me in the right direction on how to retrieve these parameter values, or suggest other ways to retrieve parameter values for multiple nodes from one location using roscpp? I'd prefer not to use rosparam, as it complicates things.
Thanks for your help.
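As a side note, one common cause of exactly this symptom (hasParam() returns true while getParam() returns false) is a type mismatch: getParam() only succeeds when the C++ variable type matches the type stored on the parameter server, and local_costmap/height is numeric rather than a string. A minimal sketch, assuming the parameter is numeric:
#include <ros/ros.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "param_reader");
  ros::NodeHandle nh;

  // Sketch only: read a numeric parameter into a matching C++ type.
  double height = 0.0;
  if (nh.getParam("/move_base/local_costmap/height", height))
    ROS_INFO("local_costmap height: %f", height);
  else
    ROS_WARN("getParam failed; the stored type may not match the requested type");
  return 0;
}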
Originally posted by torsten on ROS Answers with karma: 35 on 2014-05-19
Post score: 1
|
I'm using the PR2, and when I send data through the network I have lag problems. For example, when I receive the image of one of the cameras, there is a topic on which the information is compressed (format: image_topic/compressed), and this way I can speed things up. So I would like to know if it's possible to get the point cloud generated by the Kinect in a compressed format. Or, alternatively, is there a launch file (or something like that) with which I can take the compressed image and depth image and convert them to a point cloud?
Originally posted by silgon on ROS Answers with karma: 649 on 2014-05-19
Post score: 0
|
Hello,
I was trying to colorize the compiler output of catkin_make. I tried using colorgcc.
As per the manual I created symlinks to c++, g++, gcc, and cc to point to colorgcc and added the directory to the PATH. When compiling without catkin_make the colorization works, with catkin_make it does not.
Any help getting this to work is much appreciated.
Kind regards,
Okke Hendriks
Originally posted by Okke on ROS Answers with karma: 131 on 2014-05-20
Post score: 0
|
Hi,
Is there a bridge between ROS and Unity?
Thanks!
Originally posted by shyamalschandra on ROS Answers with karma: 73 on 2014-05-20
Post score: 0
|
Hey!
I'm just testing out the robot_localization package with our robots. Loving the level of documentation :). However, I realized that it handles the data streams differently from robot_pose_ekf. For instance, robot_pose_ekf expected wheel odometry to produce position data that it then applied differentially, i.e., it took the position estimates at t_k-1 and t_k, transformed the difference to the odom frame, and applied it to the state estimate.
However, as per the discussion about yaw velocities here, it seems robot_localization would rather just apply the wheel velocities generated by the wheel odometry and generate the position information itself. I know robot_localization has a "differential" flag for each dataset, but that seems to be for removing initial static offsets (i.e., subtracting the position estimate at t_0, not t_k-1). I have two questions:
Are my assumptions above correct?
Am I losing anything by not doing the integration myself and relying on robot_localization to do the integration?
Originally posted by pmukherj on ROS Answers with karma: 21 on 2014-05-20
Post score: 1
|
Hi, I have a node that is crashing with a
terminate called after throwing an instance of 'std::out_of_range'
what(): vector::_M_range_check
error. The backtrace from gdb says the first call is on a ros::spinOnce() call. The whole stack trace is:
terminate called after throwing an instance of 'std::out_of_range'
what(): vector::_M_range_check
Program received signal SIGABRT, Aborted.
0x00007ffff6204037 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) back
#0 0x00007ffff6204037 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff6207698 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007ffff6b11e8d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007ffff6b0ff76 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff6b0ffa3 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff6b10226 in __cxa_rethrow () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007ffff7abf268 in ros::CallbackQueue::callOneCB(ros::CallbackQueue::TLS*) () from /opt/ros/hydro/lib/libroscpp.so
#7 0x00007ffff7abeacb in ros::CallbackQueue::callAvailable(ros::WallDuration) () from /opt/ros/hydro/lib/libroscpp.so
#8 0x00007ffff7b0579a in ros::spinOnce() () from /opt/ros/hydro/lib/libroscpp.so
#9 0x0000000000456892 in Planner::go (this=0x69b8e0 <my_planner>) at /my/code.cpp:962
I was wondering what it means when the code crashes on a spinOnce() call. What should I be looking for? I know there are likely many causes so I am mostly looking for direction rather than a specific answer. My node has three timers and subscribes to 1 topic. All of the callbacks are class methods and access data members in the class that all happen to be std::vectors. But I am not sure if the out_of_range error is one of my class' vectors or something in ROS that gets called in spinOnce().
Basically, all of my callbacks run until the robot reaches the goal. So the main loop looks something like this:
//start timers
timer_one.start()
timer_two.start()
timer_three.start()
while( (latestUpdate_.comparePosition(goal_, false) > goalThreshold_) && ros::ok()) {
ros::spinOnce();
}
// stop timers
timer_one.stop()
timer_two.stop()
timer_three.stop()
I feel like if one of the vectors accessed in a callback was empty, the stack trace would lead me there and not spinOnce. I don't really know what to look for so I can debug. If anyone has suggestions, I would greatly appreciate it.
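For what it's worth, ros::spinOnce() is just the frame that executes your queued timer and subscriber callbacks, and the __cxa_rethrow in the trace shows callOneCB rethrowing an exception raised inside one of them; so the out_of_range almost certainly comes from a vector accessed in your own callback code. A hypothetical debugging sketch (topic, message type and vector names are made up) that narrows down which callback throws:
#include <ros/ros.h>
#include <std_msgs/Float64MultiArray.h>
#include <stdexcept>
#include <vector>

std::vector<double> latest_values;

// Using .at() plus a try/catch inside the callback reports the offending
// access here, instead of letting the exception escape through spinOnce().
void valuesCallback(const std_msgs::Float64MultiArray::ConstPtr& msg)
{
  try
  {
    latest_values = msg->data;
    double first = latest_values.at(0);  // throws (and is caught) if the vector is empty
    ROS_DEBUG("first value: %f", first);
  }
  catch (const std::out_of_range& e)
  {
    ROS_ERROR("valuesCallback accessed a vector out of range: %s", e.what());
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "spin_debug_example");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("values", 1, valuesCallback);
  while (ros::ok())
  {
    ros::spinOnce();
    ros::Duration(0.1).sleep();
  }
  return 0;
}
Temporarily instrumenting each of the three timer callbacks the same way should show which container is being indexed past its end.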
Originally posted by sterlingm on ROS Answers with karma: 380 on 2014-05-20
Post score: 1
|
Hi all,
I've just discovered that to synchronize messages across different physical machines I have to use Chrony. Everything is fine... except when I actually run my real robot, which is actuated with the Xenomai module. From then on, the clock starts drifting very quickly. As soon as I stop the task, the clock starts re-synchronizing. Note that the daemon is active the whole time.
I think Xenomai is somehow shutting out Chrony or something like that; I'm not familiar with RT Linux, unfortunately.
Originally posted by mark_vision on ROS Answers with karma: 275 on 2014-05-20
Post score: 0
|
Hello, I am studying mechatronics/robotics and I have to integrate a Nao with ROS Fuerte on 32-bit Ubuntu 10.04. The naoqi on the Nao does not work because the Nao is very old; it runs NaoQiAcademics-1.2.0-Linux. I think I need pynaoqi (python2.7-naoqi) so that I can work with the Nao in ROS Fuerte? But I cannot download this from the Aldebaran site because they only offer newer versions of NaoQi. Can someone send me a link for downloading this pynaoqi? Thanks a lot for the help.
Originally posted by Tajana on ROS Answers with karma: 11 on 2014-05-20
Post score: 1
Original comments
Comment by Vincent Rabaud on 2014-05-23:
what version of the robot do you have ? Aldebaran does not support such an old NAOqi anymore. You can't upgrade to 1.14 ?
Comment by Tajana on 2014-05-26:
I do not know what version. I have already talked to Aldebaran and they told me that it is not possible to upgrade my Nao because of its age.
But I hoped that someone who has also worked with older ones could give me a pynaoqi
for an old version of Nao.
|
I noticed that the plugin description file of eband_local_planner is named bgp_plugin.xml, even though eband_local_planner is a local planner. Why is it named bgp_plugin.xml instead of blp_plugin.xml? If I rename bgp_plugin.xml to blp_plugin.xml, the errors below happen.
Does anybody know the reason?
[ERROR] [1400628392.643232931]: Skipping XML Document "/home/turtlebot/catkin_ws/src/eband_local_planner/bgp_plugin.xml" which had no Root Element. This likely means the XML is malformed or missing.
[ERROR] [1400628392.761079585]: Skipping XML Document "/home/turtlebot/catkin_ws/src/eband_local_planner/bgp_plugin.xml" which had no Root Element. This likely means the XML is malformed or missing.
[ERROR] [1400628392.880693695]: Skipping XML Document "/home/turtlebot/catkin_ws/src/eband_local_planner/bgp_plugin.xml" which had no Root Element. This likely means the XML is malformed or missing.
[ INFO] [1400628394.619906326]: Loading from pre-hydro parameter style
[ INFO] [1400628394.719745267]: Using plugin "static_layer"
[ INFO] [1400628394.886922629]: Requesting the map...
[ INFO] [1400628395.115048446]: Resizing costmap to 4000 X 4000 at 0.050000 m/pix
[ INFO] [1400628395.654797784]: Received a 4000 X 4000 map at 0.050000 m/pix
[ INFO] [1400628395.674817370]: Using plugin "obstacle_layer"
[ INFO] [1400628395.701033172]: Subscribed to Topics: scan bump
[ INFO] [1400628395.861715994]: Using plugin "inflation_layer"
[ INFO] [1400628396.344856710]: Loading from pre-hydro parameter style
[ INFO] [1400628396.474822909]: Using plugin "obstacle_layer"
[ INFO] [1400628396.646287882]: Subscribed to Topics: scan bump
[ INFO] [1400628396.803163198]: Using plugin "inflation_layer"
[FATAL] [1400628396.933781606]: Failed to create the eband_local_planner/EBandPlannerROS planner, are you sure it is properly registered and that the containing library is built? Exception: According to the loaded plugin descriptions the class eband_local_planner/EBandPlannerROS with base class type nav_core::BaseLocalPlanner does not exist. Declared types are base_local_planner/TrajectoryPlannerROS dwa_local_planner/DWAPlannerROS pose_follower/PoseFollower
[move_base-5] process has died [pid 13705, exit code 1, cmd /home/turtlebot/catkin_ws/devel/lib/move_base/move_base cmd_vel:=navigation_velocity_smoother/raw_cmd_vel __name:=move_base __log:=/home/turtlebot/.ros/log/e406e8be-e075-11e3-b207-dc85de8a0cd2/move_base-5.log].
log file: /home/turtlebot/.ros/log/e406e8be-e075-11e3-b207-dc85de8a0cd2/move_base-5*.log
Originally posted by Ken_in_JAPAN on ROS Answers with karma: 894 on 2014-05-20
Post score: 0
|
When I run: roslaunch sam_moveit_wizard_generated move_group.launch
It says:
[ INFO] [1400647657.748106413, 748.461000000]: Using planning request adapter 'Fix Start State Path Constraints'
[FATAL] [1400647657.809613293, 748.476000000]: Parameter '~moveit_controller_manager' not specified. This is needed to identify the plugin to use for interacting with controllers. No paths can be executed.
[ INFO] [1400647657.826129681, 748.479000000]: Trajectory execution is managing controllers
and I try to run:
sam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$ rosrun sam_moveit_learning pose right_arm 0.7 -0.2 0.7 0 0 0
[ INFO] [1400647661.271172191, 749.458000000]: Ready to take MoveGroup commands for group right_arm.
[ INFO] [1400647661.271335019, 749.458000000]: Move to : x=0.700000, y=-0.200000, z=0.700000, roll=0.000000, pitch=0.000000, yaw=0.000000
[ INFO] [1400647662.016181981, 749.640000000]: ABORTED: Solution found but controller failed during execution
^Csam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$
I also got this error:
[ INFO] [1400647661.483935586, 749.503000000]: Path simplification took 0.016063 seconds
[ERROR] [1400647662.013314708, 749.640000000]: Unable to identify any set of controllers that can actuate the specified joints: [ r_elbow_flex_joint r_forearm_roll_joint r_shoulder_lift_joint r_shoulder_pan_joint r_upper_arm_roll_joint r_wrist_flex_joint r_wrist_roll_joint ]
[ERROR] [1400647662.013418353, 749.640000000]: Apparently trajectory initialization failed
I tried to fix it by installing the plugins package, but it doesn't work:
sam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$ sudo apt-get install ros-groovy-pr2-moveit-plugins
[sudo] password for sam:
Reading package lists... Done
Building dependency tree
Reading state information... Done
ros-groovy-pr2-moveit-plugins is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
sam@sam:~/code/groovy_overlay/src/sam_moveit_learning/bin$
How to solve it?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2014-05-20
Post score: 2
|
I measured the time it takes to convert between sensor_msgs::PointCloud2, pcl::PointCloud, and pcl::PCLPointCloud2.
https://github.com/garaemon/pcl_ros_conversion_benchmark
I found that converting from pcl::PCLPointCloud2 to pcl::PointCloud takes a long time when the input data has an RGB field.
Is there any way to speed up this conversion?
Originally posted by Ryohei Ueda on ROS Answers with karma: 317 on 2014-05-20
Post score: 1
|
Hello,
I have been working on a node using a normal Python IDE (Spyder). Everything works fine; I can import all the needed Python modules, some installed in /usr/local/lib and others in /usr/lib.
When I run my code with rosrun, I get:
from withings import WithingsAuth, WithingsApi
ImportError: cannot import name WithingsAuth
withings is a module that was installed under
Does somebody know how to fix this?
Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2014-05-20
Post score: 0
|
After a recent update of Hydro, the fixed frame of the TF display in RViz disappears after the timeout period. Is this the intended behavior, or is it a bug that should be reported?
Originally posted by TommyP on ROS Answers with karma: 1339 on 2014-05-20
Post score: 0
|
When I type the apt-get install command, I get:
sudo apt-get install ros-hydro-desktop-full
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
ros-hydro-desktop-full : Depends: ros-hydro-desktop but it is not going to be installed
Depends: ros-hydro-perception but it is not going to be installed
Depends: ros-hydro-simulators but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Originally posted by amartya18x on ROS Answers with karma: 1 on 2014-05-21
Post score: 0
|
Hi,
This might be a question that was asked a long time ago, but from what I can see there is still no conclusive answer to it.
My experiment requires me to start and stop a robot simulation several times. I manage to start the simulation by calling roslaunch files. After a period of time, I need the simulation to stop automatically, so ros::shutdown() is executed in all nodes. However, rosout keeps respawning as shown below, which prevents another simulation from being run.
[rosout-1] restarting process
process[rosout-1]: started with pid [31335]
[ERROR] [1400669233.337848698]: [registerService] Failed to contact master at [localhost:11311]. Retrying...
Could someone please help me to solve this problem?
Thank you.
Originally posted by faisal on ROS Answers with karma: 1 on 2014-05-21
Post score: 0
|
I'm using ROS Hydro.
I create a persistent service client in this way:
connectToClassificationServer()
{
  classificationService = n.serviceClient<c_fuzzy::Classification>("classification", true);
}
then I use my service:
if (classificationService.isValid())
{
  classificationService.call(serviceCall);
}
else
{
  ROS_ERROR("Service down, waiting reconnection...");
  classificationService.waitForExistence();
  connectToClassificationServer(); //Why this??
}
The method isValid() always returns false if I don't re-create the service client, and apparently there's no way to restart a persistent service connection (such as a 'reconnect' method).
Is it correct to always recreate the ServiceClient? And why do I need to use the NodeHandle if I already have all the information needed inside the ServiceClient object?
Originally posted by Boris_il_forte on ROS Answers with karma: 96 on 2014-05-21
Post score: 1
|
Given an occupancy grid map, I want to transform it with a certain [tx, ty, theta] transformation. How do I do it? The tf tree is posted below, along with the rviz view that contains the grid map. My ultimate goal is to have two occupancy grid maps and apply a transformation to one of them so that they can be overlapped (map merging). I'm looking for the mechanism in ROS to transform the map, particularly the whole /tf frame, so that even the robot's pose is transformed into the new coordinate frame.
http://postimg.org/image/d4xd3d39p/
http://postimg.org/image/vrdqce6x9/
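One possible approach, sketched below under the assumption that the [tx, ty, theta] alignment between the two maps is already known (e.g. from a map-merging step): publish a fixed transform between the two map frames, so tf converts every pose, including the robot's, between them. Frame names and values here are made up.
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <tf/transform_datatypes.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "map_alignment_broadcaster");
  ros::NodeHandle nh;
  tf::TransformBroadcaster br;

  // Assumed alignment between the two map frames (tx, ty in metres, theta in radians).
  double tx = 1.0, ty = 2.0, theta = 0.5;

  ros::Rate rate(10.0);
  while (ros::ok())
  {
    tf::Transform t(tf::createQuaternionFromYaw(theta), tf::Vector3(tx, ty, 0.0));
    br.sendTransform(tf::StampedTransform(t, ros::Time::now(), "map_a", "map_b"));
    rate.sleep();
  }
  return 0;
}
With such a transform in place, anything expressed in map_b (the second grid's origin, the second robot's pose) can be looked up in map_a via tf rather than by rewriting the map data itself.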
Originally posted by Xegara on ROS Answers with karma: 52 on 2014-05-21
Post score: 0
Original comments
Comment by xuningy on 2014-05-28:
Hi Xegara, I am looking at your /tf frames in the first image, and I am wondering how you were able to link /camera_link to /map? What launch files did you use to connect octomap and rgbdslam? Are you able to share what code in the .launch file (or your method of connecting the two and projecting the map?) Thank you.
Comment by Xegara on 2014-06-21:
You can run the following commands:
roscore
roslaunch rgbdslam slow_computer.launch
roslaunch rgbdslam octomap_server.launch
roslaunch openni_launch openni.launch
The rgbdslam node itself also creates the tf tree I uploaded in the first image.
|
I am new to ROS. I have a robot which collects data through its sensors, and I have a laptop with ROS running on Ubuntu. I would like to send the sensor data to the laptop through a router. Is there any way to implement this? The robot does not have ROS on it; it has a tailored Fedora on it.
Many thanks
Yoshi
Originally posted by Yoshida on ROS Answers with karma: 1 on 2014-05-21
Post score: 0
|
Using this code
geometry_msgs::PoseStamped current_position()
{
  tf::TransformListener my_temp_listener;
  geometry_msgs::PoseStamped pose;
  pose.header.stamp = ros::Time::now();
  pose.header.frame_id = "/base_link";
  pose.pose = pose_zero;
  my_temp_listener.waitForTransform("map", "/base_link", ros::Time(0), ros::Duration(3.0));
  tf::StampedTransform my_transform;
  my_temp_listener.lookupTransform("/map","/base_link",ros::Time(0), my_transform);
  my_temp_listener.transformPose("map", pose, pose);
  return pose;
}
I get terminate called after throwing an instance of tf::ExtrapolationException
what(): Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map] at runtime.
However, looking at the frames using rosrun tf tf_echo /base_link /map I correctly get
At time 1400687504.239
Translation: [-8.817, 0.695, 0.000]
Rotation:
- in Quaternion [0.000, 0.000, 0.097, 0.995]
- in RPY [0.000, -0.000, 0.194]
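A minimal sketch of the usual fix for "cache is empty": a TransformListener constructed inside the function starts with an empty buffer on every call, so keeping one listener alive for the whole node (e.g. as a class member) gives it time to accumulate transforms before lookups happen. The class name and pose handling below are illustrative only.
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PoseStamped.h>

class PoseHelper
{
public:
  geometry_msgs::PoseStamped currentPose()
  {
    geometry_msgs::PoseStamped pose, pose_out;
    pose.header.frame_id = "base_link";
    pose.header.stamp = ros::Time(0);      // "latest available transform"
    pose.pose.orientation.w = 1.0;

    listener_.waitForTransform("map", "base_link", ros::Time(0), ros::Duration(3.0));
    listener_.transformPose("map", pose, pose_out);
    return pose_out;
  }

private:
  tf::TransformListener listener_;  // lives as long as the object, so its cache persists
};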
Thank you!
Originally posted by Rahndall on ROS Answers with karma: 133 on 2014-05-21
Post score: 0
|
I am trying to use the TF transforms from Openni_tracker to detect the presence of a person. I want to perform an action when a "user" is detected.
Is this possible? I'm aware that openni_tracker reports "New User" and "Pose Psi detected", but this isn't published to a topic, so I'm not sure whether it can be done.
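One possible approach, as a sketch only: openni_tracker broadcasts per-user tf frames such as torso_1 and head_1 once a user is calibrated, so polling tf for one of those frames is a way to detect that a user is being tracked. The frame name below assumes the default user id 1.
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "user_detector");
  ros::NodeHandle nh;
  tf::TransformListener listener;

  ros::Rate rate(2.0);
  while (ros::ok())
  {
    // frameExists() becomes true once openni_tracker has broadcast a frame for user 1.
    if (listener.frameExists("torso_1"))
    {
      ROS_INFO("User detected (torso_1 frame is being published)");
      // ... trigger the desired action here ...
    }
    rate.sleep();
  }
  return 0;
}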
Originally posted by rmb209 on ROS Answers with karma: 80 on 2014-05-21
Post score: 0
|
Using rosjava, how would one get the timestamp and other metadata for a message?
Answering in the form of an adaptation of the Listener class in the rosjava tutorial would be particularly useful.
Thanks!
Originally posted by bradknox on ROS Answers with karma: 101 on 2014-05-21
Post score: 1
|
I'm a beginner at migration. If you are familiar with migrating from rosbuild to catkin, could you teach me how to do it?
This is an example with eband_local_planner; I edited CMakeLists.txt as follows.
Is this correct? I am using ROS Hydro on Ubuntu 12.04.
cmake_minimum_required(VERSION 2.8.3)
project(eband_local_planner)
# Before this can be catkinized, the control_toolbox needs to be catknized.
find_package(catkin REQUIRED
COMPONENTS
roscpp
pluginlib
nav_core
costmap_2d
base_local_planner
nav_msgs
geometry_msgs
visualization_msgs
tf
tf_conversions
angles
control_toolbox
actionlib
eigen
)
find_package(Boost REQUIRED
COMPONENTS
thread
)
find_package(Eigen REQUIRED)
include_directories(
include
${catkin_INCLUDE_DIRS}
${EIGEN_INCLUDE_DIRS}
)
add_library(eband_local_planner
src/conversions_and_types.cpp
src/eband_action.cpp
src/eband_local_planner.cpp
src/eband_local_planner_ros.cpp
src/eband_trajectory_controller.cpp
src/eband_visualization.cpp
)
target_link_libraries(eband_local_planner
${catkin_LIBRARIES}
${Boost_LIBRARIES}
)
install(TARGETS eband_local_planner
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
)
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
PATTERN ".svn" EXCLUDE
)
install(FILES blp_plugin.xml
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
The package.xml is the one from this page (https://github.com/ros-planning/navigation_experimental/tree/hydro-devel/eband_local_planner).
If you have experience catkinizing eband_local_planner, I would like to know more.
The reason I posted this is that I wanted to resolve an error I got when moving a TurtleBot with eband_local_planner.
At the moment, I get the following compile error:
CMake Error at eband_local_planner/CMakeLists.txt:53 (install):
install TARGETS given no LIBRARY DESTINATION for shared library target
"eband_local_planner".
CMake Error at eband_local_planner/CMakeLists.txt:62 (install):
install FILES given no DESTINATION!
Originally posted by Ken_in_JAPAN on ROS Answers with karma: 894 on 2014-05-21
Post score: 0
Original comments
Comment by Ken_in_JAPAN on 2014-05-21:
Before catkinizing a package, is it better to rosbuild the Fuerte version of the package first?
|
I've got a python node which runs a service server. Catkin builds it fine and builds the service files (works fine for another package in C++ which uses the same .srv file). Even though it builds the file fine, at runtime the node can't import it. If I revise the python search path as shown:
PYTHONPATH='/home/blake/Projects/Ros/catkin_ws/devel/lib/python2.7/dist-packages/retractor_ros/srv/':$PYTHONPATH
it works. Here's my CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.3)
project(retractor_ros)
set(BH_LIB_NAME XXXXXXXXXXX)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRE COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
rospy
roscpp
std_msgs
message_generation
${BH_LIB_NAME}
)
## Generate services in the 'srv' folder
add_service_files(
FILES
PhidgetsServoCommand.srv
# Service1.srv
# Service2.srv
)
## Generate added messages and services with any dependencies listed here
generate_messages(
DEPENDENCIES
std_msgs
)
catkin_package(
INCLUDE_DIRS
# LIBRARIES ${BH_LIB_NAME}
CATKIN_DEPENDS message_runtime rospy roscpp std_msgs rstate_machine
# DEPENDS system_lib
)
###########
## Build ##
###########
include_directories(
${catkin_INCLUDE_DIRS}
# /opt/ros/hydro/include/turtle_actionlib
/home/blake/Projects/Ros/catkin_ws/devel/include/retractor_ros
/home/blake/Projects/Ros/catkin_ws/src/pwm_ros
/home/blake/Projects/Ros/catkin_ws/src/rstate_machine/include
)
## Declare a cpp executable
add_executable(retractor_ros_node retractor_fsm.cpp keyinput.cpp)
target_link_libraries(retractor_ros_node ${BH_LIB_NAME} ${catkin_LIBRARIES})
add_dependencies(${PROJECT_NAME}_node servo_fsm_generate_messages_cpp keyinput.cpp rstate_machine.cpp)
install(PROGRAMS
scripts/pwm_servo.py
../devel/lib/retractor_ros/retractor_ros_node
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
## Mark executables and/or libraries for installation
install(TARGETS retractor_ros_node
# ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
Originally posted by blakeh on ROS Answers with karma: 17 on 2014-05-21
Post score: 0
|
Hi Guys
I tried to run gscam to get images from my camera, but when I typed
$ roslaunch gscam v4l.launch
I got these errors:
[FATAL] [1400738699.774908523]: Failed to PAUSE stream, check your gstreamer configuration.
[FATAL] [1400738699.775082184]: Failed to initialize gscam stream!
and when I typed :
$rosrun gscam gscam
I got these errors:
[FATAL] [1400738785.074504579]: Problem getting GSCAM_CONFIG environment variable and 'gscam_config' rosparam is not set. This is needed to set up a gstreamer pipeline.
[FATAL] [1400738785.074717553]: Failed to configure gscam!
How can I fix these?
any suggestion about this?
thanks
hamid
Originally posted by Hamid Didari on ROS Answers with karma: 1769 on 2014-05-21
Post score: 2
|
Hello,
is there a way to visualize services in rqt? Like it is done with topics?
I'm using groovy and Python
Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2014-05-21
Post score: 1
|
I looked for it in the tutorials and online but I didn't find anything helpful. After uncommenting the srv_gen line in CMakeLists.txt and running rosmake, some files are generated, including C++ headers placed in the include folder and Python scripts placed in the folder packagename/src/packagename. The generated Python module's name was _myServiceMessagesFile.py, where myServiceMessagesFile.srv is my service declaration file placed in packagename/srv.
The tutorial then says to use "import myServiceMessagesFile.srv", which doesn't work.
I solved this problem by taking the generated .py file from the packagename/src/packagename folder and putting it in the same folder as the Python script that needs these messages (for example packagename/scripts), but I feel that this method is more of a dirty fix than an elegant solution.
I work with ROS Groovy and rosbuild. Can somebody give me some hints on what to do?
Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2014-05-21
Post score: 0
|
Is it possible to use git submodules in repositories for releasing packages via bloom?
I used it in a package and it succeeded for the devel-build (even on Jenkins), but now the build seems to fail for the binarydeb-jobs because it can't find files from the submodule.
Thanks and best regards,
Sebastian
Originally posted by Sebastian Kasperski on ROS Answers with karma: 1658 on 2014-05-21
Post score: 1
|
I have two different pieces of hardware (grippers) that are run by an action server. Their behavior is slightly different, and I need to write a piece of software that can run both grippers using the same syntax. So I need something like
open_gripper(distance)
close_gripper(distance)
and just need to specify somewhere which piece of hardware we are using.
Now someone suggested using a service to perform the action - the service performs the action in the appropriate way, and returns some feedback, and responds to the main program.
However, I'm still not sure when is the right time to use actions, and when is the right time to use a service. Based on what I've read, this seems like an upside down approach, but I'm not sure. Are there any examples of this sort of setup, or is it considered bad form?
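As a hypothetical sketch of the "same syntax for both grippers" part of the question (class names, parameter name and distances are all made up): one way is to hide each piece of hardware behind a small common interface and pick the concrete implementation once at startup; each concrete class would internally talk to its own action server or service.
#include <ros/ros.h>
#include <string>

class GripperInterface
{
public:
  virtual ~GripperInterface() {}
  virtual void open(double distance) = 0;
  virtual void close(double distance) = 0;
};

class GripperA : public GripperInterface
{
public:
  void open(double distance)  { ROS_INFO("GripperA opening to %.3f m", distance);  /* send goal to A's action server */ }
  void close(double distance) { ROS_INFO("GripperA closing to %.3f m", distance);  /* send goal to A's action server */ }
};

class GripperB : public GripperInterface
{
public:
  void open(double distance)  { ROS_INFO("GripperB opening to %.3f m", distance);  /* call B's driver */ }
  void close(double distance) { ROS_INFO("GripperB closing to %.3f m", distance);  /* call B's driver */ }
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "gripper_wrapper");
  ros::NodeHandle pnh("~");

  std::string hardware;
  pnh.param<std::string>("gripper", hardware, "A");  // select the hardware via a private parameter

  GripperInterface* gripper = (hardware == "A")
      ? static_cast<GripperInterface*>(new GripperA())
      : static_cast<GripperInterface*>(new GripperB());

  gripper->open(0.05);
  gripper->close(0.01);

  delete gripper;
  return 0;
}
Whether each concrete class wraps an action client or a service client is then an internal detail, which is part of what the question is asking about.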
Originally posted by paturdc on ROS Answers with karma: 157 on 2014-05-21
Post score: 0
|
Hello,
I'm looking forward to using ROS Indigo with the freshly released Ubuntu 14.04. Gazebo starts fine but instantly crashes when spawning a robot via a service call:
(indigo) alex@ThinkPad-T440p:~/hector$ rosrun gazebo_ros spawn_model -file `rospack find baxter_description`/urdf/baxter.urdf -urdf -z 1 -model baxter
spawn_model script started
[INFO] [WallTime: 1400760484.921875] [0.000000] Loading model xml from file
[INFO] [WallTime: 1400760484.922486] [0.000000] Waiting for service /gazebo/spawn_urdf_model
[INFO] [WallTime: 1400760484.924802] [0.000000] Calling service /gazebo/spawn_urdf_model
Service call failed: transport error completing service call: unable to receive data from sender, check sender's logs for details
This makes Gazebo crash without any message:
[gazebo-1] process has died [pid 14649, exit code 139, cmd /opt/ros/indigo/lib/gazebo_ros/gzserver worlds/empty.world __name:=gazebo __log:=/home/alex/.ros/log/cdde1b80-e1a0-11e3-ab05-5c514ff707c4/gazebo-1.log].
log file: /home/alex/.ros/log/cdde1b80-e1a0-11e3-ab05-5c514ff707c4/gazebo-1*.log
I've just tested it with Gazebo 2.2.2 (sudo apt-get install ros-indigo-gazebo-ros) using "empty_world" and the Baxter robot urdf (see gazebosim.org /Tutorials/1.9/Using_roslaunch_Files_to_Spawn_Models, sry not enough Karma for links ;)).
Has anybody encountered this issue too?
kind regards,
Alex
Originally posted by a_stumpf on ROS Answers with karma: 41 on 2014-05-22
Post score: 3
Original comments
Comment by a_stumpf on 2014-06-02:
It's still a big issue. Doesn't anybody else have this problem with Ubuntu 14.04 and Indigo?
Comment by fhurlbrink on 2014-06-13:
I have the same problem!
Comment by 4dahalibut on 2014-07-11:
I think this problem might have to do with it being a ThinkPad. I have the same problem with Hydro and 12.04 and both Gazebo 1.9 and Gazebo 3.
|
Hi there,
I have a question concerning simulation in Stage. I read in the documentation that no noise is simulated; nevertheless my results vary. When I run my node (exploration, which depends on position and laser scans) on Stage, results such as the required exploration time differ greatly between runs. Does someone have an idea what causes such behavior?
Thanks in advance,
Daniel
Originally posted by dneuhold on ROS Answers with karma: 205 on 2014-05-22
Post score: 0
|
Hello everybody! I want to use amcl with my Husky simulator. I created a map with gmapping and am trying to use amcl with rviz and the 2D Pose Estimate button, but nothing happens. Can you help me? I just want the Husky simulator (Gazebo) to do autonomous navigation.
Originally posted by alex920a on ROS Answers with karma: 35 on 2014-05-22
Post score: 0
|
I've been using rosbridge 2.0 to communicate with ROS, and I'm trying to get the topic list using the following JSON message:
{
"op": "call_service",
"service": "/rosapi/topics"
}
but I get an error on the ROS side with the following text: "call_service InvalidServiceException: Service /rosapi/topics does not exist".
I can call services from rosservice but not from rossrv...
What am I doing wrong?
Originally posted by Sky--- on ROS Answers with karma: 46 on 2014-05-22
Post score: 1
Original comments
Comment by jihoonl on 2014-05-22:
How did you start the rosbridge? did you rosrun or roslaunch?
|
Now that catkin is building my service files OK, I can't run it. The relevant code section is:
rospy.init_node('servo_cmd_server', log_level=rospy.INFO)
rospy.loginfo("Started Phidgets Servo Command Server Node")
rospy.sleep(2.0)
rospy.loginfo("Setting up the command service")
rospy.loginfo("Namespace: %s", rospy.get_namespace())
rospy.loginfo("getname: %s", rospy.get_name())
rospy.loginfo("Resolved name: %s", rospy.resolve_name("PhidgetsServoCommand", caller_id="retractor_ros"))
s = rospy.Service('retractor_cmd', _PhidgetsServoCommand, handle_servo_cmd)
print 'Ready to receive servo commands!!!.'
rospy.spin()
And the relevant output is:
[INFO] [WallTime: 1400781040.063425] Namespace: /
[INFO] [WallTime: 1400781040.063572] getname: /servo_cmd_server
[INFO] [WallTime: 1400781040.063730] Resolved name: /PhidgetsServoCommand
Traceback (most recent call last):
File "/home/blake/Projects/Ros/catkin_ws/src/pwm_ros/scripts/pwm_servo.py", line 228, in
servo_cmd_server() # start the server (doesn't return)
File "/home/blake/Projects/Ros/catkin_ws/src/pwm_ros/scripts/pwm_servo.py", line 73, in servo_cmd_server
s = rospy.Service('retractor_cmd', _PhidgetsServoCommand, handle_servo_cmd)
File "/opt/ros/hydro/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 696, in __init__
super(Service, self).__init__(name, service_class, handler, buff_size)
File "/opt/ros/hydro/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 544, in __init__
super(ServiceImpl, self).__init__(name, service_class)
File "/opt/ros/hydro/lib/python2.7/dist-packages/rospy/service.py", line 59, in __init__
self.request_class = service_class._request_class
AttributeError: 'module' object has no attribute '_request_class'
Any ideas? By the way, catkin has prefixed the MyName.srv file with an underscore to create _MyName.py. Without the underscore, you get:
[INFO] [WallTime: 1400781520.326304] Namespace: /
[INFO] [WallTime: 1400781520.326438] getname: /servo_cmd_server
[INFO] [WallTime: 1400781520.326581] Resolved name: /PhidgetsServoCommand
Traceback (most recent call last):
File "/home/blake/Projects/Ros/catkin_ws/src/pwm_ros/scripts/pwm_servo.py", line 228, in
servo_cmd_server() # start the server (doesn't return)
File "/home/blake/Projects/Ros/catkin_ws/src/pwm_ros/scripts/pwm_servo.py", line 73, in servo_cmd_server
s = rospy.Service('retractor_cmd', PhidgetsServoCommand, handle_servo_cmd)
NameError: global name 'PhidgetsServoCommand' is not defined
Thanks!
Originally posted by blakeh on ROS Answers with karma: 17 on 2014-05-22
Post score: 0
|
Hello,
Has anyone had any luck getting the MPU9150 9-DOF chip to work well with ROS on the BeagleBone Black running Ubuntu and ROS Hydro? I'm aware of BB_ROS, but I wanted to stick with running Ubuntu rather than Angstrom.
Thanks
Originally posted by illogical_vectors on ROS Answers with karma: 1 on 2014-05-22
Post score: 0
|
Hi everybody!
I tried to use the rosTextView of Android to display messages from a topic. The publisher is on my computer (ROS Hydro, rqt).
There is no effect (the application doesn't crash), I just get this error :
E/UpdatePublisherRunnable﹕ org.ros.internal.node.xmlrpc.XmlRpcTimeoutException: org.apache.xmlrpc.client.TimingOutCallback$TimeoutException: No response after waiting for 10000 milliseconds.
I/dalvikvm﹕ Jit: resizing JitTable from 4096 to 8192
rosTextView = (RosTextView<std_msgs.String>) findViewById(R.id.text);
rosTextView.setMessageType(std_msgs.String._TYPE);
rosTextView.setTopicName("/textView");
rosTextView.setMessageToStringCallable(new MessageCallable<String, std_msgs.String>() {
    @Override
    public String call(std_msgs.String message) {
        return "ok";
    }
});
This code worked last month; since then I might have updated some parts of ROS, and now there is a bug.
Has anyone experienced this?
Thank you
Originally posted by thomasL on ROS Answers with karma: 36 on 2014-05-22
Post score: 0
Original comments
Comment by Daniel Stonier on 2014-05-25:
There haven't been any changes in Hydro, rosjava, nor, I suspect, roscpp. Has your Android build environment/SDK changed?
|
Hi,
I'm trying to build a package that uses libeigen3, but there is no stack for it; "roscd eigen" shows that it isn't there. Searching reveals that the stack isn't available for Hydro, while at the same time it seems it can be installed manually. So how should I proceed?
Thanks
Originally posted by hvn on ROS Answers with karma: 72 on 2014-05-22
Post score: 0
|
In Linux, my .bashrc had only one line pertaining to ROS:
source /opt/ros/indigo/setup.bash
Yet I was getting the following error message upon opening a new shell:
ROS_DISTRO was set to 'hydro' before. Please make sure that the environment does not mix paths from different distributions.
It turns out that due to some shell caching or something, ROS_DISTRO was stuck on 'hydro' even after removing the source line from the .bashrc. This problem was solved by simply logging out and back in.
Originally posted by wayne on ROS Answers with karma: 26 on 2014-05-22
Post score: 1
|
I'd like to communicate over serial between two computers running full ROS. All the tutorials are for embedded devices and Arduinos. I feel like this has to be simple, but I'm clearly missing something, and your help is greatly appreciated.
I've tried running "rosrun rosserial_python serial_node.py" on both computers simultaneously, but the serial nodes are unable to establish a connection and instead throw an error. I'm guessing that this is because you need the roscore on one computer to run the rosserial host, while the other computer is the client? I've tried having just one computer run rosserial_python serial_node.py, assuming the other computer automatically generates the client library upon initialization of any new nodes once rosserial is installed, but alas this doesn't work (i.e., I am unable to publish/subscribe to topics across computers). Do I need to write custom code for the client computer or something?
My setup: A pc running Ubuntu 12.04 with full ROS, a Beaglebone Black running Ubuntu 12.04 with full ROS, serial communication via 3DR Radios (USB0 on the pc and UART ttyO1 on the Beaglebone Black). rosserial is installed in both computers.
Originally posted by ConsciousCode on ROS Answers with karma: 11 on 2014-05-22
Post score: 1
Original comments
Comment by lanyusea on 2014-05-22:
Must you use serial for communication? I met a similar problem and my solution was to use the PC to host a hotspot, let the BeagleBone join the network, and then just treat them as in http://wiki.ros.org/ROS/Tutorials/MultipleMachines. It will take an extra USB port, though; maybe you can use a LAN bridge.
Comment by ConsciousCode on 2014-06-09:
So to enable wireless communication, I assume you're talking about using Wifi 802.11? I was hoping to take advantage of the additional range offered by a 3DR radio I have that communicates via a serial connection. Wifi certainly would be an alternative solution.
Comment by lanyusea on 2014-06-09:
If you are using the radio module from 3DR, I suggest you try the RosCopter package because of the MAVLink protocol.
Comment by ConsciousCode on 2014-06-09:
Thank you for the suggestion. I'm attempting a non-standard configuration in which the 3DR radio is connected to a BeagleBone rather than the Ardupilot. Therefore, I'm no longer using the MAVLink protocol, but instead trying to send ROS messages through the beaglebone serial port via the radio.
|
I too have a PhantomX Pincher Arm and a TurtleBot (Create), and I am exploring cdrwolfe's phantomx package described in http://answers.ros.org/question/102426/phantomx-pincher-moveit/. While trying to use urdf_to_graphiz to get a visual of the URDF, I encountered a parsing error message when running check_urdf on phantomx.urdf, and the same error message when creating a new URDF with xacro.py.
"Error: Failed to build tree: parent link [plate_top_link] of joint [arm_base_joint] not found. This is not valid according to the URDF spec. Every link you refer to from a joint needs to be explicitly defined in the robot description. To fix this problem you can either remove this joint [arm_base_joint] from your urdf file, or add "<link name="plate_top_link" />" to your urdf file.
at line 226 in /tmp/buildd/ros-hydro-urdfdom-0.2.10-3precise-20140303-2236/urdf_parser/src/model.cpp
ERROR: Model Parsing the xml failed."
The <!-- joints --> part of the phantomx_macro.xacro file for the arm_base_joint lists a <parent link="plate_top_link"/>, which does not appear anywhere else in phantomx_macro.xacro.
</link>
<!-- joints -->
<joint name="${prefix}arm_base_joint" type="fixed">
<origin xyz="${M_SCALE*14.5} ${M_SCALE*0.2} ${M_SCALE*0.2}" rpy="${M_PI*0} ${M_PI*0} ${-M_PI/2}" />
<parent link="plate_top_link" />
<child link="${prefix}arm_base_link"/>
<axis xyz="0 1 0" />
</joint>
What else must I include in my process to generate the complete visual arm URDF? I very much look forward to further exploring this very comprehensive package and watching further development, so that I can get my Pincher Arm up and running in Hydro with the basic arbotix_gui and then learn to use MoveIt!.
Ross
Originally posted by RobotRoss on ROS Answers with karma: 141 on 2014-05-22
Post score: 0
|
Launching turtlebot_playground on Indigo (Ubuntu Trusty) throws the following errors:
[ERROR] [1400811137.391203936]: Failed to load nodelet [/cmd_vel_mux] of type [yocs_cmd_vel_mux/CmdVelMuxNodelet]: Could not find library corresponding to plugin yocs_cmd_vel_mux/CmdVelMuxNodelet. Make sure the plugin description XML file has the correct name of the library and that the library actually exists.
[FATAL] [1400811137.391468550]: Service call failed!
[ERROR] [1400811137.540924502]: Failed to load nodelet [/depthimage_to_laserscan] of type [depthimage_to_laserscan/DepthImageToLaserScanNodelet]: Could not find library corresponding to plugin depthimage_to_laserscan/DepthImageToLaserScanNodelet. Make sure the plugin description XML file has the correct name of the library and that the library actually exists.
[FATAL] [1400811137.541236510]: Service call failed!
despite building both yocs_cmd_vel and depthimage_to_laserscan libraries.
What more should I do to prevent the above errors? Or would you recommend downgrading to Hydro (if that's possible on Trusty)?
Originally posted by PKG on ROS Answers with karma: 365 on 2014-05-22
Post score: 0
Original comments
Comment by Lily1 on 2017-04-18:
Hi, I am having exactly the same problem. Did sourcing the bash file solve the problem for you? Thank you!
|
Hello guys!
I am a ROS and C++ beginner and I have read the simple ROS publisher tutorial on the wiki. I want to create a simple publisher that publishes the time every second. How can I modify the code given in the tutorial to do this?
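For reference, a minimal sketch of the tutorial publisher adapted to publish the current time once per second; the node and topic names are made up.
#include <ros/ros.h>
#include <std_msgs/Time.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "time_publisher");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<std_msgs::Time>("current_time", 10);

  ros::Rate rate(1.0);  // 1 Hz -> once per second
  while (ros::ok())
  {
    std_msgs::Time msg;
    msg.data = ros::Time::now();  // fill the message with the current time
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}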
Originally posted by keshav_sarraf on ROS Answers with karma: 66 on 2014-05-22
Post score: 0
|
I tried to write an actionlib client/server program on ROS Indigo/Ubuntu 14.04, but I ran into a problem: my actionlib client cannot connect to its server, because ActionClient::waitForActionServerToStart() always fails.
The same problem occurs with the actionlib_tutorials in ROS Indigo (ros-indigo-actionlib-tutorials).
By tracing the debug messages on the ROS console, I found that the actionlib server tries to make a connection, but the client gets no response on the status/feedback/result topics.
In /opt/ros/indigo/include/actionlib/action_client.h, the function ActionClient::statusCb() is registered but never called at all, so in actionlib/src/connection_monitor.cpp the function ConnectionMonitor::processStatus() never notifies its condition variable. Then, in ConnectionMonitor::waitForActionServerToStart(), the timed_wait() on check_connection_condition_ never succeeds. As a result, ActionClient::waitForActionServerToStart() fails.
In ROS Groovy, my program and the actionlib tutorials worked fine.
What should I do?
Thank you.
Originally posted by ST-Lab on ROS Answers with karma: 15 on 2014-05-22
Post score: 1
|
Hello,
I'm using rosbridge 2.0 to connect to a remote computer and call a ROS service on the remote machine. I'm using the sample client below (from the the rosbride website) to make the connection and call the service:
from json import dumps
from ws4py.client.threadedclient import WebSocketClient
class GetLoggersClient(WebSocketClient):

    def get_loggers(self):
        msg = {'op': 'call_service', 'service': '/rosout/get_loggers'}
        self.send(dumps(msg))

    def opened(self):
        print "Connection opened..."
        self.get_loggers()

    def closed(self, code, reason=None):
        print code, reason

    def received_message(self, m):
        print "Received:", m

if __name__=="__main__":
    try:
        ws = GetLoggersClient('ws://127.0.0.1:9090/')
        ws.connect()
    except KeyboardInterrupt:
        ws.close()
My question is how I can receive the output message of the service call. The output message is printed in the console by received_message(self,m), but I intend to assign this output to a variable.
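If I read the rosbridge 2.0 protocol right, the reply comes back as a JSON message with op set to service_response and the result under values, so my current (untested) idea is to add from json import loads at the top and replace received_message with something like this, keeping the response in a member variable:
    def received_message(self, m):
        # parse the JSON that rosbridge sends back and keep the service
        # response around instead of only printing it
        response = loads(str(m))
        if response.get('op') == 'service_response':
            self.loggers_result = response.get('values')
            print "Stored result:", self.loggers_result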
Thanks
Originally posted by ROSCMBOT on ROS Answers with karma: 651 on 2014-05-22
Post score: 0
|
hi,
I have a problem. When I type in my terminal: sudo apt-get install ros-launch-openni-launch
I get this message: "Cannot find the package ros-launch-openni-launch"
I have installed OpenNI, NiTE and SensorKinect without problems, and I followed the ROS wiki to install ROS on Ubuntu 12.04.
What is the problem, and how can I resolve it?
thanks
Originally posted by guigui on ROS Answers with karma: 33 on 2014-05-22
Post score: 0
|
Hi
I was reading up on ROS's navigation stack as well as amcl's documentation when I realized from the stack's diagram
that move_base does not subscribe to /amcl_pose, which amcl uses to publish the robot's estimated pose in the map. In that case, how does move_base update the current localized pose of the robot?
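Is it via tf instead? That is, is the localized pose effectively obtained with a lookup like this (just my guess, sketched in Python; map and base_link are the frame names I'd expect)?
import rospy
import tf

rospy.init_node('pose_check')
listener = tf.TransformListener()
# amcl publishes the map->odom correction, so the full map->base_link
# transform should give the localized pose
listener.waitForTransform('map', 'base_link', rospy.Time(0), rospy.Duration(4.0))
(trans, rot) = listener.lookupTransform('map', 'base_link', rospy.Time(0))
print 'robot pose in map frame:', trans, rot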
Thanks
Originally posted by zenifed on ROS Answers with karma: 93 on 2014-05-22
Post score: 0
|
Hi!
I'm new to ROS, and for a university project I have to grab a cup with a robot.
First I have to work out where the cup is.
I have implemented an algorithm that segments the cup from the rest of the scene and then calculates the keypoints of the new image. At this point I can determine the centroid of my keypoints, which is roughly the centre of my cup.
The problem is that so far I have worked with OpenCV and sensor_msgs/Image, but to get the depth of my pixel (the cup position) I need the registered depth topic.
What I can't work out is: how can I subscribe to /camera/depth_registered/points (sensor_msgs/PointCloud2) and extract both the RGB image (for the procedure I have so far done with sensor_msgs/Image) and the depth image, so I know the depth of my target pixel?
thanks
Luca
UPDATE
this is my callback
void callback(const sensor_msgs::PointCloud2::ConstPtr& msg) {
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::fromROSMsg (*msg, *cloud);
cv::Mat imageFrame;
if (cloud->isOrganized()) {
imageFrame = cv::Mat(cloud->height, cloud->width, CV_8UC3);
for (int h=0; h<imageFrame.rows; h++) {
for (int w=0; w<imageFrame.cols; w++) {
pcl::PointXYZRGB point = cloud->at(w, h);
Eigen::Vector3i rgb = point.getRGBVector3i();
imageFrame.at<cv::Vec3b>(h,w)[0] = rgb[2];
imageFrame.at<cv::Vec3b>(h,w)[1] = rgb[1];
imageFrame.at<cv::Vec3b>(h,w)[2] = rgb[0];
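// note: this index always refers to the fixed (centre_x, centre_y) pixel,
// even though the lookup below sits inside the per-pixel loop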
int i = centre_x + centre_y*cloud->width;
depth[call_count] = (float)cloud->points[i].z;
}
}
}
}//end callback
as you suggested to me.
I tried to put the depth of my pixel into a vector, expecting to see at least some values different from 0 (because of Kinect noise I know it's possible for some depth values at a point to be 0 or near 0 sometimes). But in my case all the values are still 0.
Originally posted by lukeb88 on ROS Answers with karma: 33 on 2014-05-23
Post score: 2
|
I would like to mess around with the ROS navigation stack with a custom robot that I've built. I'm trying to figure out if it can be used to navigate in a previously unknown environment. The tutorial for setting up a robot to use the nav stack states that a map is not required. But this page on map building says that it does require a static map. Aren't these statements contradictory? Which is true?
Originally posted by robzz on ROS Answers with karma: 328 on 2014-05-23
Post score: 2
Original comments
Comment by clyde on 2017-01-05:
Any luck on this?
|
Hello,
All of a sudden, I need to change my approach from calling TheVideoCapturer.open(0); (TheVideoCapturer is of class VideoCapture) to open my internet camera (id 0), to subscribing to a published image_raw topic. What are the easiest steps here? I'd like to keep using my TheVideoCapturer object, because changing everything would be really troublesome.
Originally posted by delta785 on ROS Answers with karma: 72 on 2014-05-23
Post score: 0
|
Hi friends
I am new to ROS and I want to build a 3D map with my Kinect. I think a good option is octomap_server, but I can't get it to work.
I think the easiest way would be to view it in rviz, but I can't get that working either. I think I must remap the cloud_in topic to my sensor, but I am not sure whether that will work and I don't know how to do it.
Does anybody know a tutorial? Help me please.
Originally posted by Rookie92 on ROS Answers with karma: 47 on 2014-05-23
Post score: 0
|
Hi.
I am working on ROS Indigo on Ubuntu 14.04 LTS Trusty (OK, I know it is not yet released).
When I try to start Gazebo, it fails most of the time (1 time in 10, it works). I use the simple command:
$ roslaunch gazebo_ros empty_world.launch
Here are the messages:
$ roslaunch gazebo_ros empty_world.launch
... logging to /home/arnaud/.ros/log/251a3d96-e35b-11e3-9539-0016eae586be/roslaunch-hercules-11967.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://hercules:46195/
SUMMARY
========
PARAMETERS
* /rosdistro: <...>
* /rosversion: <...>
* /use_sim_time: True
NODES
/
gazebo (gazebo_ros/gzserver)
gazebo_gui (gazebo_ros/gzclient)
auto-starting new master
process[master]: started with pid [11979]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 251a3d96-e35b-11e3-9539-0016eae586be
process[rosout-1]: started with pid [11992]
started core service [/rosout]
process[gazebo-2]: started with pid [12016]
/opt/ros/indigo/lib/gazebo_ros/gzserver: 5: [: Linux: unexpected operator
process[gazebo_gui-3]: started with pid [12020]
/opt/ros/indigo/lib/gazebo_ros/gzclient: 5: [: Linux: unexpected operator
Gazebo multi-robot simulator, version 2.2.2
Copyright (C) 2012-2014 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
Gazebo multi-robot simulator, version 2.2.2
Copyright (C) 2012-2014 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org
Msg Waiting for masterError [Connection.cc:787] Getting remote endpoint failed
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::system::system_error> >'
what(): remote_endpoint: Transport endpoint is not connected
[ INFO] [1400946636.551182005]: Finished loading Gazebo ROS API Plugin.
Msg Waiting for master
[ INFO] [1400946636.557603492]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
Msg Connected to gazebo master @ http://127.0.0.1:11345
Msg Publicized address: 192.168.1.211
Aborted (core dumped)
[gazebo_gui-3] process has died [pid 12020, exit code 134, cmd /opt/ros/indigo/lib/gazebo_ros/gzclient __name:=gazebo_gui __log:=/home/arnaud/.ros/log/251a3d96-e35b-11e3-9539-0016eae586be/gazebo_gui-3.log].
log file: /home/arnaud/.ros/log/251a3d96-e35b-11e3-9539-0016eae586be/gazebo_gui-3*.log
[ INFO] [1400946637.091142150, 0.022000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1400946637.167613885, 0.093000000]: Physics dynamic reconfigure ready.
^C[gazebo-2] killing on exit
[rosout-1] killing on exit
[master] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
This also fails:
roscore & rosrun gazebo_ros gazebo
The only way I found to make it work is to start one by one, roscore, gzserver and gzclient.
Does it come from the unstable state of ROS Indigo? Any idea how to fix this?
Thanks.
Originally posted by Arn-O on ROS Answers with karma: 107 on 2014-05-24
Post score: 0
Original comments
Comment by Arn-O on 2014-05-26:
Any ideas are welcome. Do you think that it could come from my hardware? (LENOVO T400)
Comment by Arn-O on 2014-05-27:
I have found the same issue on the Gazebo forum: http://answers.gazebosim.org/question/6271/getting-remote-endpoint-failed-let-gazebo-gui/
Comment by Arn-O on 2014-05-28:
Finally posted an issue in GitHub: https://github.com/ros/ros_comm/issues/421
Comment by Orso on 2014-09-11:
I experience same issue, running ROS Hydro, Ubuntu 12.04. roscore & rosrun gazebo_ros gazebo does not work, produces the following error: terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc
Yet, I am able to start gazebo using roscore, gzserver, and gzclient.
|
Hi there, I'm searching the net for the easiest way to get depth data, and ROS seems to be helpful for my goal.
For my purpose, I need to get a grid of depth values (in centimetres); I have to analyze the roughness of the soil.
I hope that my question is clear.
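To make it concrete, here is roughly what I imagine (only a sketch; I'm assuming an OpenNI-style depth topic such as /camera/depth/image publishing a 32FC1 image in metres, which may not match my setup):
import rospy
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_cb(msg):
    # convert the depth image to a numpy array (values in metres)
    depth_m = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
    depth_cm = np.asarray(depth_m, dtype=np.float32) * 100.0  # metres -> centimetres
    # depth_cm is now a 2D grid of depth values in centimetres
    h, w = depth_cm.shape
    rospy.loginfo('centre depth: %.1f cm', depth_cm[h // 2, w // 2])

rospy.init_node('depth_grid')
rospy.Subscriber('/camera/depth/image', Image, depth_cb)
rospy.spin()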
Condor
Originally posted by Condor on ROS Answers with karma: 1 on 2014-05-24
Post score: 0
|
I get an error (see below) when trying to publish a float64 arm command.
Using Ubuntu 12.04.1 Ros Hydro
source file:
#include "ros/ros.h"
#include <std_msgs/Float64.h>
#include "pub/ArmPan.h"
pub::ArmPan panmsg;
int main(int argc, char **argv)
{
ros::init(argc, argv, "pub");
ros::NodeHandle n;
ros::Publisher ArmPan_pub = n.advertise<std_msgs::Float64>("ArmPan/command", 1000);
ros::Rate loop_rate(10);
int count = 0;
while(ros::ok())
{
panmsg.ArmPan = 1.0;
ArmPan_pub.publish(panmsg);
ros::spinOnce();
loop_rate.sleep();
++count;
}
return 0;
}
the file ArmPan.msg has one line:
float64 ArmPan
rosrun fails with this err:
[FATAL] Assertion failed
file = /opt/ros/hydro/include/ros/publisher.h
line 115
cond = impl_->md5sum == "*" || std::string(mt::md5sum<M>(message))
== "*" || impl_->md5sum == mt::md5sum<M>(message)
Trying to publish message of type [pub/ArmPan] on a publisher with type [std_msgs/Float64]
if the message
rostopic pub -1 ArmPan/command std_msgs/Float64 1
is entered, it works.
This code worked in earlier versions of ROS but now fails with Hydro. Can you tell me why?
Thanks.
Originally posted by garym on ROS Answers with karma: 13 on 2014-05-24
Post score: 1
|
Is ROS ready for Debian on the BBB Rev C? It looks like there are still many unavailable packages. roscore would not run or couldn't be found. The step "Install bootstrap dependencies" failed.
Originally posted by Rodolfo8 on ROS Answers with karma: 299 on 2014-05-25
Post score: 0
Original comments
Comment by ahendrix on 2014-05-25:
Which installation instructions are you following? There are lots of debs available for Ubuntu on BBB; is installing Ubuntu an option for you?
Comment by Rodolfo8 on 2014-05-26:
I must have made a mistake with some command. Today it is working flawlessly for Debian. It surely takes a while to prepare the environment on the wstool init command. That little BBB CPU is hot!
|
I ran across a problem, probably because one day I deleted some files related to libyaml. Anyway, when I try to build gscam (a ROS package) I get this error.
/usr/bin/ld: warning: libyaml-cpp.so.0.2, needed by /opt/ros/hydro/lib/libcamera_calibration_parsers.so, not found (try using -rpath or -rpath-link)
/opt/ros/hydro/lib/libcamera_calibration_parsers.so: undefined reference to `YAML::Node::begin() const'
And I can't find where to get it. I mean, I've downloaded version 0.3 of yaml-cpp from Google Code and it works for my purposes. Any help?
Originally posted by delta785 on ROS Answers with karma: 72 on 2014-05-25
Post score: 0
|
I cannot run rosserial via Bluetooth.
Arduino Uno
Bluetooth module HC-06
Notebook with Bluetooth, Ubuntu 12.04, ROS Hydro
Arduino sketch:
#include <ros.h>
#include <std_msgs/String.h>
ros::NodeHandle nh;
std_msgs::String str_msg;
ros::Publisher chatter("chatter", &str_msg);
char hello[13] = "hello world!";
void setup()
{
nh.getHardware()->setBaud(9600);
nh.initNode();
nh.advertise(chatter);
}
void loop()
{
str_msg.data = hello;
chatter.publish( &str_msg );
nh.spinOnce();
delay(1000);
}
In terminal:
sudo rfcomm connect 0 98:D3:31:20:03:19 1
Connection is established, the red LED on the Bluetooth lit.
In new terminal:
$ rosrun rosserial_python serial_node.py _port:=/dev/rfcomm0
[INFO] [WallTime: 1401040615.342558] ROS Serial Python Node
[INFO] [WallTime: 1401040615.353809] Connecting to /dev/rfcomm0 at 9600 baud
[ERROR] [WallTime: 1401040632.468633] Unable to sync with device; possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino
How do I fix this error? When connecting via the USB cable, everything works.
Originally posted by amburkoff on ROS Answers with karma: 26 on 2014-05-25
Post score: 0
Original comments
Comment by ahendrix on 2014-05-25:
Does serial over bluetooth work when you're not using a rosserial sketch? Is the baud rate on your bluetooth module set correctly?
Comment by amburkoff on 2014-05-26:
Yes I used to work with this Bluetooth module via serial data is transferred. Baud rate correct.
Comment by EwingKang on 2014-09-06:
Have you solve this? I'm having the same problem now. its annoying!
Comment by EwingKang on 2015-03-22:
Actually I've solved my problem. It turns out that my Bluetooth chip is a cheap Chinese clone and not a real HC-06, and there is only something like a single post in some unknown forum that provides the datasheet. Generally speaking, it's a baud rate problem (not usable above a certain rate).
Comment by David FLD on 2022-06-05:
I uploaded this code to my Arduino Nano to have serial communication with ROS via an HC-05 Bluetooth module. It worked only with the 9600 baud rate (not 57600).
|
hi, all,
I am super new to ROS.
I am trying to use tf to transform the laser input into the base_link frame.
But I am constantly getting error messages like "MessageFilter [target=base_link ]: Dropped 100.00% of messages so far. Please turn the [ros.agv.message_notifier] rosconsole logger to DEBUG for more information."
It seems the callback is never invoked. I tested the laser messages by subscribing to the raw topic, and that works fine.
I think there might be a broken link somewhere in the settings.
Can someone help me? Here's the code for the listener.
class LaserScanToPointCloud{
public:
ros::NodeHandle node;
laser_geometry::LaserProjection projector;
tf::TransformListener listener;
message_filters::Subscriber<sensor_msgs::LaserScan> laser_sub;
tf::MessageFilter<sensor_msgs::LaserScan> laser_notifier;
ros::Publisher scan_pub;
LaserScanToPointCloud(ros::NodeHandle n) :
node(n),
laser_sub(node, "base_scan", 10),
laser_notifier(laser_sub,listener, "base_link", 10)
{
printf("setting up callback\r\n");
laser_notifier.registerCallback(boost::bind(&LaserScanToPointCloud::scanCallback, this, _1));
laser_notifier.setTolerance(ros::Duration(0.01));
scan_pub = node.advertise<sensor_msgs::PointCloud>("my_cloud",1);
printf("set up callback\r\n");
}
void scanCallback (const sensor_msgs::LaserScan::ConstPtr& scan_in)
{
sensor_msgs::PointCloud localcloud;
printf("%ds:%lfm\r\n",scan_in->header.stamp.sec,scan_in->ranges[128]);
try
{
projector.transformLaserScanToPointCloud("base_link",*scan_in, localcloud,listener);
}
catch (tf::TransformException& e)
{
std::cout << e.what();
return;
}
printf("(%f,%f,%f)\r\n",localcloud.points.data()->x,localcloud.points.data()->y,localcloud.points.data()->z);
// Do something with cloud.
scan_pub.publish(localcloud);
}
};
int main(int argc, char** argv)
{
ros::init(argc, argv, "my_scan_to_cloud");
ros::NodeHandle n;
LaserScanToPointCloud lstopc(n);
ros::spin();
return 0;
}
and here's the broadcaster
int main(int argc, char** argv){
ros::init(argc, argv, "robot_tf_publisher");
ros::NodeHandle n;
ros::Rate r(50);
tf::TransformBroadcaster broadcaster;
while(n.ok()){
broadcaster.sendTransform(
tf::StampedTransform(
tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.0, 0.0, 0.2)),
ros::Time::now(),"base_link", "base_scan"));
r.sleep();
}
return 0;
}
million thanks
Ray
Originally posted by dreamcase on ROS Answers with karma: 91 on 2014-05-25
Post score: 0
|
Hello,
I have some nodes that use custom messages from my workspace. In package.xml I have declared the message package as a dependency (build and run), but if I clean the workspace, the build crashes when I use catkin_make to build everything.
The crash happens because the compiler tries to build the node before the messages are generated and cannot find the message header files.
I think I forgot something, but I cannot find a solution on the web.
Thank you very much.
PD: I am using hydro.
Originally posted by Jonathan Ruiz on ROS Answers with karma: 26 on 2014-05-25
Post score: 0
|
I followed the tutorials on the hector_slam page,
but the final map doesn't look right; you can find my map at www.dropbox.com/s/9072yf1f41nk18e/capture.png
Are there any other parameters I need to set?
I also used the bag "Team_Hector_MappingBox_RoboCup_2011_Rescue_Arena.bag" with gmapping, following its tutorials,
and also got a wrong map.
What should I do to solve this problem?
What's more, how can I generate a map live, without a bag file? I just have a notebook and a Hokuyo laser scanner (URG-04LX); can anyone give me a tutorial?
Originally posted by zxh362989 on ROS Answers with karma: 1 on 2014-05-25
Post score: 0
|
I did the following
rostopic echo /topic_name > filename.txt
Then I got a text file the contains the data of the topic...
I want to save the data using matlab code in arrays
for example I have the following text file:
secs: 4113
nsecs: 565000000
frame_id: ''
pose:
position:
x: 5.0
y: 5.0
z: 5.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 0.0
---
header:
seq: 2544
stamp:
secs: 4113
nsecs: 590000000
frame_id: ''
pose:
position:
x: 5.0
y: 5.0
z: 5.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 0.0
---
I want to save the pose position.x = [ 5 2 ..... ]
and the same for y and z. How can I save the data into arrays using MATLAB?
Originally posted by RSA_kustar on ROS Answers with karma: 275 on 2014-05-25
Post score: 0
|
Hey guys,
I got a couple of .dae files of my robot in fairly high detail. After creating the .urdf with all the joint limits and stuff, I have created a MoveIt-package with those files and can now see the robot and plan in Rviz.
Since my dae files are fairly high detail, I wanted to know if there is a "ROS" way of decreasing the polygon count in a safe way, so that MoveIt can check for collisions faster.
I didn't create the dae files and don't know much about this format, but my guess would be that if I just open them up in a program like Blender and save them again with less detail, there is no way to guarantee that I didn't accidentally remove parts of the robot, letting MoveIt believe that that space is empty. Also, some kind of setting for the error would be nice, like "create a model which is at most 15 mm away from the actual robot".
Is there a script/package/program, that does this?
Thanks in advance,
Rabe
Originally posted by Rabe on ROS Answers with karma: 683 on 2014-05-26
Post score: 2
|
I'm trying to make use of the an ApproximateTime synchronization, however I am having a boost library related issue with that. Then I decided to test the actual ros tutorial here.
Here is the source file,
/*test_message_filter.cpp*/
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <sensor_msgs/Image.h>
#include <boost/bind.hpp>
using namespace sensor_msgs;
using namespace message_filters;
void callback(const ImageConstPtr& image1, const ImageConstPtr& image2)
{
// Solve all of perception here...
ROS_INFO_STREAM("[test_messge_filter] started " << "\n");
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "test_messge_filter");
ros::NodeHandle nh;
message_filters::Subscriber<Image> image1_sub(nh, "image1", 1);
message_filters::Subscriber<Image> image2_sub(nh, "image2", 1);
typedef sync_policies::ApproximateTime<Image, Image> MySyncPolicy;
// ApproximateTime takes a queue size as its constructor argument, hence MySyncPolicy(10)
Synchronizer<MySyncPolicy> sync(MySyncPolicy(10), image1_sub, image2_sub);
sync.registerCallback(boost::bind(&callback, _1, _2));
ros::spin();
return 0;
}
I have the following in the CMakeLists.txt file
rosbuild_add_executable(message_fitler_cpp src/test_message_filter.cpp)
#rosbuild_add_boost_directories()
#rosbuild_link_boost(message_fitler_cpp bind)
The error that I'm getting after a rosmake looks like this
Linking CXX executable ../bin/message_fitler_cpp
/usr/bin/ld: CMakeFiles/message_fitler_cpp.dir/src/test_message_filter.o: undefined reference to symbol 'boost::signals::connection::~connection()'
/usr/bin/ld: note: 'boost::signals::connection::~connection()' is defined in DSO /usr/lib/libboost_signals.so.1.46.1 so try adding it to the linker command line
/usr/lib/libboost_signals.so.1.46.1: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
make[3]: *** [../bin/message_fitler_cpp] Error 1
make[3]: Leaving directory `/home/hash/fuerte_workspace/sandbox/ros_test/build'
make[2]: *** [CMakeFiles/message_fitler_cpp.dir/all] Error 2
make[2]: Leaving directory `/home/hash/fuerte_workspace/sandbox/ros_test/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/hash/fuerte_workspace/sandbox/ros_test/build'
Originally posted by anonymous3751 on ROS Answers with karma: 31 on 2014-05-26
Post score: 0
|
Debug symbol information is missing from my node, which is built from a cpp file with catkin_make in Hydro. I need it to debug the node with gdb. How do I turn on generation of the debug info?
Originally posted by sd on ROS Answers with karma: 21 on 2014-05-26
Post score: 1
|
Is there a way to load a .yaml file in python directly to the parameter server? I want to do something like "rosparam load", but programmatically, as a callback for an event in a GUI.
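Something like this sketch is what I'm after, assuming the rosparam Python module can be used directly for this:
import rosparam

def load_yaml_to_param_server(path, namespace='/'):
    # load_file returns a list of (params_dict, namespace) pairs,
    # one per document in the YAML file
    paramlist = rosparam.load_file(path, default_namespace=namespace)
    for params, ns in paramlist:
        rosparam.upload_params(ns, params)

# e.g. from a GUI button callback:
# load_yaml_to_param_server('/path/to/config.yaml')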
Thanks in advance
Originally posted by crpizarr on ROS Answers with karma: 229 on 2014-05-26
Post score: 3
Original comments
Comment by lucasw on 2015-05-26:
This is duplicate of http://answers.ros.org/question/58819/programatically-load-yaml-config-file-to-the-parameter-server/, there is an answer there.
|
Hey folks,
do I actually need a node to translate a boolean topic into two others?
I've got a subscribed topic /input and two published topics /out1 and /out2. All messages are boolean: /out1 shall republish /input and /out2 shall republish NOT /input. So if a True comes in on /input, /out1 shall publish True and /out2 shall publish False.
Is there a way to achieve that without writing a node? Some sort of remapping or something?
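If there's no built-in way (remapping, topic_tools or similar), the fallback would be a trivial relay node like this sketch, which is exactly what I'd like to avoid writing:
#!/usr/bin/env python
# trivial relay: republish /input on /out1 and its negation on /out2
import rospy
from std_msgs.msg import Bool

rospy.init_node('bool_splitter')
pub1 = rospy.Publisher('out1', Bool, queue_size=1)
pub2 = rospy.Publisher('out2', Bool, queue_size=1)

def cb(msg):
    pub1.publish(Bool(data=msg.data))
    pub2.publish(Bool(data=not msg.data))

rospy.Subscriber('input', Bool, cb)
rospy.spin()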
Thank you, guys!
Cheers,
Hendrik
Originally posted by Hendrik Wiese on ROS Answers with karma: 1145 on 2014-05-26
Post score: 1
|
I am new to ROS. I am trying to write differential drive drivers. Where can I find documentation?
Originally posted by Jackel Fox on ROS Answers with karma: 1 on 2014-05-26
Post score: 0
|
I'm debugging my node, which is written in cpp. My code calls ControllerManager::update from the controller_manager package, but gdb doesn't step into the function.
I cloned https://github.com/ros-controls/ros_control.git and rebuilt it. My node is linked against the libcontroller_manager.so built with the controller_manager package.
Probably roslaunch loads the version of libcontroller_manager.so from /opt/ros/hydro/lib instead of the one built with the controller_manager package in my workspace.
So the question is how to tell roslaunch to use the shared libraries from my catkin workspace lib directory before the default ROS libraries.
Originally posted by sd on ROS Answers with karma: 21 on 2014-05-26
Post score: 1
|
Hi all!
upon
rosinstall_generator turtlebot_gazebo --rosdistro hydro --deps --wet-only --tar > turtlebot.list
wstool merge -t src turtlebot.list
wstool update -t src
./src/catkin/bin/catkin_make_isolated --install
I end up with multiple "undefined symbol" errors. https://gist.github.com/anaderi/a321476446aa7f828635
These message are result of the following command: https://gist.github.com/anaderi/a321476446aa7f828635#file-link-cmd
Has anyone managed to build this package? (it is one of dependencies for turtlebot simulator (http://answers.ros.org/question/158641/turtlebot-navigation-on-hydro-osx/), which I'm actually interested in most of all)
(Sorry for the non-clickable links, but it seems to be the workaround for my lack of karma.)
Originally posted by anaderi on ROS Answers with karma: 11 on 2014-05-26
Post score: 1
|
When catkin_make is carried out twice in a row,
I believe that the second catkin_make should finish almost instantaneously if the first one completed successfully.
However, I see the following:
1. I run catkin_make and it completes successfully.
2. I run catkin_make again right away (nothing is edited).
3. "make" starts again from the beginning (which is strange) and takes a lot of time.
4. I run catkin_make again and the same thing as in step 3 happens.
What's the matter??
Thanks in advance.
Configuration :
OS : Ubuntu 12.04LTS
ROS : Hydro
Packages in catkin workspace :
some original packages,
navigation metapackage(amcl, move_base and so on),
kobuki(kobuki-hydro, kobuki_core-hydro kobuki_msgs),
turtlebot(turtlebot_apps, turtlebot_bringup, turtlebot_description, turtlebot_msgs, turtlebot_simulator, turtlebot_viz, yocs_msgs-hydro, yujin_ocs, joystic_drivers, linux_hardware)
Originally posted by moyashi on ROS Answers with karma: 721 on 2014-05-26
Post score: 0
|
Hi all,
When I run nao_driver, either in a simulator or on a nao robot (from my remote PC), I get this error:
Traceback (most recent call last):
File "/home/jonfeme/nao_ws/src/nao_robot/nao_driver/nodes/nao_camera.py", line 156, in
naocam = NaoCam()
File "/home/jonfeme/nao_ws/src/nao_robot/nao_driver/nodes/nao_camera.py", line 84, in init
if not self.cim.setURL( calibration_file ):
File "/opt/ros/hydro/lib/python2.7/dist-packages/camera_info_manager/camera_info_manager.py", line 376, in setURL
if parseURL(resolveURL(url, self.cname)) >= URL_invalid:
File "/opt/ros/hydro/lib/python2.7/dist-packages/camera_info_manager/camera_info_manager.py", line 514, in resolveURL
dollar = url.find('$', rest)
AttributeError: 'NoneType' object has no attribute 'find'
[nao_camera-8] process has died [pid 4000, exit code 1, cmd /home/jonfeme/nao_ws/src/nao_robot/nao_driver/nodes/nao_camera.py --pip=127.0.0.1 --pport=9559 __name:=nao_camera __log:=/home/jonfeme/.ros/log/cf9cdbf2-e55e-11e3-a9ca-000c29578851/nao_camera-8.log].
log file: /home/jonfeme/.ros/log/cf9cdbf2-e55e-11e3-a9ca-000c29578851/nao_camera-8*.log
I cannot use nao_camera. I installed ros-hydro-info-manager-py, but I still have the same problem. The same happens with nao_speech.py.
I have Naoqi version 1.14.5, the other nodes work fine, but camera and speech not.
I will be grateful for any help.
Thank you very much
Originally posted by JonathanAI on ROS Answers with karma: 41 on 2014-05-26
Post score: 0
Original comments
Comment by Vincent Rabaud on 2014-05-27:
what do you launch from the command line ? Do you set NAO_IP and roslaunch the nao_camera.launch ?
Comment by JonathanAI on 2014-05-27:
Yes. I launch NAO_IP= roslaunch nao_driver nao_driver.launch force_python:=true
I also included the nao_camera node in the nao_driver.launch
All nodes are available, excepting nao_camera, nao_speech and nao_tactile.
Thank you
Comment by Vincent Rabaud on 2015-05-09:
Please update to the latest code that should have handled those issues.
|
Hi,
I am trying to use AMCL for localization.
When instantiating AMCLConfig, it refers to the dynamic_reconfigure package,
so I added the following statements to the CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.3)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
project(agv)
set(EXECUTABLE_OUTPUT_PATH bin)
find_package(catkin REQUIRED COMPONENTS
message_filters
laser_geometry
roscpp
rospy
geometry_msgs
std_msgs
nav_msgs
sensor_msgs
tf2_msgs
actionlib_msgs
amcl
tf
genmsg
dynamic_reconfigure
message_generation )
set(Boost_USE_MULTITHREADED ON)
find_package(Boost COMPONENTS thread date_time program_options filesystem system REQUIRED)
rosbuild_add_boost_directories()
find_package(catkin REQUIRED tf)
# dynamic reconfigure
generate_dynamic_reconfigure_options(
src/navigation_node/cfg/AMCL.cfg
)
###########
include(FindProtobuf)
find_package(Protobuf REQUIRED)
include_directories(${PROTOBUF_INCLUDE_DIR})
find_package(Eigen REQUIRED)
include_directories(${EIGEN_INCLUDE_DIRS})
add_definitions(${EIGEN_DEFINITIONS})
############
## System dependencies are found with CMake's conventions
find_package(Boost REQUIRED COMPONENTS system thread log log_setup program_options)
find_package( Threads )
add_message_files(
FILES
agv.msg
laser.msg
lasergeo.msg
nav_odo.msg
tf_msg.msg
# Message1.msg
# Message2.msg
)
generate_messages(
DEPENDENCIES
geometry_msgs
sensor_msgs
nav_msgs
std_msgs
tf2_msgs
)
catkin_package(
# INCLUDE_DIRS include ../include ..
# LIBRARIES beginner_tutorials
CATKIN_DEPENDS
roscpp
rospy
std_msgs
geometry_msgs
sensor_msgs
tf2_msgs
nav_msgs
message_runtime
message_filters
tf
#dynamic_reconfigure
DEPENDS system_lib
LIBRARIES laser_geometry
DEPENDS boost Eigen
#INCLUDE_DIRS include
LIBRARIES amcl_sensors amcl_map amcl_pf
)
include_directories(include ${catkin_INCLUDE_DIRS} ../include ../third_party_lib/include)
set(LASER_PATH src/laser_node)
add_executable(Node_laser ${LASER_PATH}/LMS1xx_node.cpp ${LASER_PATH}/LMS1xx.cpp)
add_dependencies(Node_laser laser_generate_cpp)
target_link_libraries(Node_laser pthread boost_filesystem boost_system log4cpp)
target_link_libraries(Node_laser ${catkin_LIBRARIES})
#target_link_libraries (Node_laser /opt/ros/hydro/lib/lms1xx/LMS1xx_node )
target_link_libraries(Node_laser
${Boost_FILESYSTEM_LIBRARY}
${Boost_SYSTEM_LIBRARY}
${PROTOBUF_LIBRARY}
)
add_definitions("-std=c++0x -pthread -llog4cpp")
set(LOCALIZATION_PATH src/localization_node)
add_library(laser_geometry ${LOCALIZATION_PATH}/laser_geometry.cpp)
target_link_libraries(laser_geometry ${Boost_LIBRARIES} ${tf_LIBRARIES})
add_executable(Node_localization ${LOCALIZATION_PATH}/main.cpp)
add_dependencies(Node_localization localization_generate_cpp)
target_link_libraries(Node_localization boost_filesystem boost_system boost_thread log4cpp)
target_link_libraries(Node_localization ${catkin_LIBRARIES})
target_link_libraries(Node_localization laser_geometry)
target_link_libraries(Node_localization
${Boost_FILESYSTEM_LIBRARY}
${Boost_SYSTEM_LIBRARY}
${PROTOBUF_LIBRARY}
)
add_definitions("-std=c++0x -pthread -llog4cpp -eigen3")
add_library(amcl_pf
src/navigation_node/amcl/pf/pf.c
src/navigation_node/amcl/pf/pf_kdtree.c
src/navigation_node/amcl/pf/pf_pdf.c
src/navigation_node/amcl/pf/pf_vector.c
src/navigation_node/amcl/pf/eig3.c
src/navigation_node/amcl/pf/pf_draw.c)
add_library(amcl_map
src/navigation_node/amcl/map/map.c
src/navigation_node/amcl/map/map_cspace.cpp
src/navigation_node/amcl/map/map_range.c
src/navigation_node/amcl/map/map_store.c
src/navigation_node/amcl/map/map_draw.c)
add_library(amcl_sensors
src/navigation_node/amcl/sensors/amcl_sensor.cpp
src/navigation_node/amcl/sensors/amcl_odom.cpp
src/navigation_node/amcl/sensors/amcl_laser.cpp)
target_link_libraries(amcl_sensors amcl_map amcl_pf)
set(NAVI_PATH src/navigation_node)
#include_directories(${catkin_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS})
add_executable(Node_navi ${NAVI_PATH}/amcl_node.cpp)
add_dependencies(Node_navi navi_generate_cpp)
target_link_libraries(Node_navi amcl_sensors amcl_map amcl_pf pthread boost_filesystem boost_system )
target_link_libraries(Node_navi ${catkin_LIBRARIES})
target_link_libraries(Node_navi ${Boost_FILESYSTEM_LIBRARY} ${Boost_SYSTEM_LIBRARY} ${PROTOBUF_LIBRARY} ${Boost_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
add_definitions("-std=c++0x -pthread")
and the package dependency in package.xml.
But during linking, the linker always complains that it cannot find the function `dynamic_reconfigure::__init_mutex__':
amcl_node.cpp:(.text._ZN4amcl10AMCLConfig15__get_statics__Ev[amcl::AMCLConfig::__get_statics__()]+0x23): undefined reference to `dynamic_reconfigure::__init_mutex__'
Am I missing anything here?
Originally posted by dreamcase on ROS Answers with karma: 91 on 2014-05-26
Post score: 0
Original comments
Comment by BennyRe on 2014-05-27:
Please post your complete CMakeLists.txt. This is the easiest way for us to help you.
Comment by dreamcase on 2014-05-27:
here u go. :)
Comment by BennyRe on 2014-05-27:
dynamic_reconfigure is commented out in your catkin_package(). What happens if you remove the #
Comment by dreamcase on 2014-05-29:
doesn't change ...
|
I'm trying to familiarize myself with ROS by doing simple DIY projects with the hardware I have.
The goal is to try to get a mobile base to navigate a direct/straight path from point A to point B, where points A and B are marked GPS coordinates (waypoints?). The sensors I have are GPS/AHRS sensors. I looked into the tutorials for ROS navigation for a while before realizing that the stack needs sensor streams from LaserScans or PointClouds.
I don't have any range sensors, and at this point I'm not too worried about obstacle avoidance. I have the GPS/AHRS sensor working independently in ROS and I'm trying to merge this all together with my motor controller. Is there any way I can use the navigation package without a laser/range sensor? Is there an alternative package that I can use to navigate a direct path from point A to point B? Or can I somehow use the navigation package with a simple USB cam instead?
Originally posted by eve on ROS Answers with karma: 13 on 2014-05-27
Post score: 0
Original comments
Comment by bharadwaj26 on 2018-08-11:
Hey Is there any link that you can share to connect bebop to GPS in ros and set waypoints
|
Hi,
I am trying to use libdmtx to read an ECC200 Data Matrix code. My code is similar to the example code on the man page.
I include:
#include <dmtx.h>
And the whole function looks like this:
void readDMC(cv::Mat& image, int timeout){
unsigned char *pxl;
DmtxImage *img;
DmtxDecode *dec;
DmtxRegion *reg;
DmtxMessage *msg;
DmtxTime t;
img = dmtxImageCreate(image.data, image.cols, image.rows, DmtxPack24bppRGB);
//dmtxImageSetProp(img, DmtxPropImageFlip, DmtxFlipY);
assert(img != NULL);
dec = dmtxDecodeCreate(img, 1);
assert(dec != NULL);
t = dmtxTimeAdd(dmtxTimeNow(), timeout);
reg = dmtxRegionFindNext(dec, &t);
if(reg != NULL) {
msg = dmtxDecodeMatrixRegion(dec, reg, DmtxUndefined);
if(msg != NULL) {
dmtxMessageDestroy(&msg);
}
dmtxRegionDestroy(®);
}
dmtxDecodeDestroy(&dec);
dmtxImageDestroy(&img);
free(pxl);
exit(0);
}
My CMakeList.txt:
cmake_minimum_required(VERSION 2.8.3)
project(kuka_vision)
find_package(catkin REQUIRED COMPONENTS
cv_bridge
image_transport
roscpp
rospy
std_msgs
visp_bridge
visp_tracker
visp_auto_tracker
)
find_package(catkin REQUIRED
)
include_directories(
${catkin_INCLUDE_DIRS}
${OpenCV_INCLUDE_DIRS}
${visp_auto_tracker_INCLUDE_DIRS}
${visp_tracker_INCLUDE_DIRS}
${visp_bridge_INCLUDE_DIRS}
${auto_tracker_INCLUDE_DIRS}
${cmd_line_INCLUDE_DIRS}
${libdmtx_INCLUDE_DIRS}
)
target_link_libraries(kuka_vision_node
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
${visp_auto_tracker_LIBRARIES}
${visp_tracker_LIBRARIES}
${visp_bridge_LIBRARIES}
${auto_tracker_LIBRARIES}
${cmd_line_LIBRARIES}
${libdmtx_LIBRARIES}
)
In my opinion it should work, but I get an 'undefined reference' error that looks like this:
[100%] Building CXX object kuka_vision/CMakeFiles/kuka_vision_node.dir/src/kuka_vision_node.cpp.o
Linking CXX executable /home/faps/catkin_ws/devel/lib/kuka_vision/kuka_vision_node
CMakeFiles/kuka_vision_node.dir/src/kuka_vision_node.cpp.o: In function `readDMC(cv::Mat&, int)':
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:66: undefined reference to `dmtxImageCreate'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:69: undefined reference to `dmtxDecodeCreate'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:72: undefined reference to `dmtxTimeNow'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:72: undefined reference to `dmtxTimeAdd'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:74: undefined reference to `dmtxRegionFindNext'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:76: undefined reference to `dmtxDecodeMatrixRegion'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:79: undefined reference to `dmtxMessageDestroy'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:81: undefined reference to `dmtxRegionDestroy'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:84: undefined reference to `dmtxDecodeDestroy'
/home/faps/catkin_ws/src/kuka_vision/src/kuka_vision_node.cpp:85: undefined reference to `dmtxImageDestroy'
collect2: ld returned 1 exit status
make[2]: *** [/home/faps/catkin_ws/devel/lib/kuka_vision/kuka_vision_node] Error 1
make[1]: *** [kuka_vision/CMakeFiles/kuka_vision_node.dir/all] Error 2
make: *** [all] Error 2
Invoking "make" failed
It would help me a lot if anyone has an idea about this problem.
Thanks a lot in advance!
Originally posted by Sebastian Meister on ROS Answers with karma: 3 on 2014-05-27
Post score: 0
|
I stumbled upon a difficulty when trying to work with a uEye camera. I've added
#include "ueye.h"
to work with code:
HIDS hCam = 1;
INT nRet = is_InitCamera (&hCam, NULL);
Yet this is the output when I try to do catkin_make using my CMakeLists (http://pastebin.com/u8f4t2Fc)
CMakeFiles/ros_aruco.dir/src/ros_aruco.cpp.o: In function `main':
ros_aruco.cpp:(.text+0x578): undefined reference to `is_InitCamera'
collect2: ld returned 1 exit status
What should I add to make it work?
Originally posted by delta785 on ROS Answers with karma: 72 on 2014-05-27
Post score: 1
Original comments
Comment by delta785 on 2014-05-28:
Ding dong! Correct answer. Many thanks!
|
Any documentation/instructions available for using Segbot package for navigation?
I am trying to implement autonomous navigation on a Segway based Robot. Currently, the only available sensor for navigation is hokuyo. So far I have been able control the Segway using libsegwayrmp and use hector_slam with the hokuyo node. I am now trying to use the segbot code for navigation . I have couple of questions regarding the same:
Can we use the map generated by hector slam or is gmapping preferred?
What is the purpose of segbot_gui ( question_dialog_plugin) with regard to navigation?
Also, to run the navigation stack for segbot, is it: roslaunch segbot_navigation navigation.launch?
Is there anything I am missing?
Thanks in advance
Originally posted by pnambiar on ROS Answers with karma: 120 on 2014-05-27
Post score: 0
|
I'm using roscopter package to get the imu data from my quadcopter.
It worked well, but I got an error when running it today.
rosrun roscopter imu_transform_publish.py
[ERROR] [WallTime: 1401208503.130663] bad callback: <function imu_callback at 0x18c9578>
Traceback (most recent call last):
File "/opt/ros/hydro/lib/python2.7/dist-packages/rospy/topics.py", line 682, in _invoke_callback
cb(msg)
File "/home/aqua/ros_catkin/src/roscopter/scripts/imu_transform_publish.py", line 75, in imu_callback
q = tf.transformations.quaternion_from_euler(roll,pitch,yaw)
AttributeError: 'module' object has no attribute 'transformations'
import tf
q = tf.transformations.quaternion_from_euler(roll,pitch,yaw)
According to the tf tutorial, tf.transformations.quaternion_from_euler(roll,pitch,yaw) should be correct, so I'm confused about where the problem is.
Also, the tf module is located in my catkin workspace:
Python 2.7.4 (default, Sep 26 2013, 03:20:26)
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tf
>>> print tf
<module 'tf' from '/home/aqua/ros_catkin/devel/lib/python2.7/dist-packages/tf/__init__.pyc'>
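To narrow it down, these are the extra checks I plan to run in the same interpreter, just to see which tf gets picked up and whether the submodule exists at all:
import tf
print tf.__file__                     # which tf package is actually being imported?
print hasattr(tf, 'transformations')  # is the submodule exposed as an attribute?
import tf.transformations             # does an explicit submodule import succeed?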
Originally posted by lanyusea on ROS Answers with karma: 279 on 2014-05-27
Post score: 1
Original comments
Comment by Jian1994 on 2019-11-25:
Hi, i am new to ros and just met the same problem. Just download a new tf-file from github as you did. But what should i do afterwards? Would you please explain more about integrating this file into catkin_ws? I use catkin_make rather than rosbuild
Comment by lanyusea on 2019-11-25:
@Jian1994 just put it into your catkin workspace
|
Is there a way to read the last message of a topic without using a callback? I mean, the tutorials teach how to read a topic by assigning a callback in a node, so that whenever the topic updates, the callback is executed. I want to read a topic when the user clicks a button, just once, something like this:
read_topic("/my_topic")
that returns the current data in the topic.
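I believe rospy has wait_for_message, which blocks until the next message on a topic arrives (so strictly speaking the next message, not the last one published); if there's nothing better, a helper like this sketch is what I have in mind (std_msgs/String is just an example type):
import rospy
from std_msgs.msg import String  # whatever type /my_topic actually uses

def read_topic_once(topic, msg_type, timeout=1.0):
    # returns the next message published on the topic, or None on timeout
    try:
        return rospy.wait_for_message(topic, msg_type, timeout=timeout)
    except rospy.ROSException:
        return None

# e.g. from a button handler:
# msg = read_topic_once('/my_topic', String)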
Thanks
Originally posted by crpizarr on ROS Answers with karma: 229 on 2014-05-27
Post score: 5
|
Hi guys,
another pretty short question. For the local planner I'd like to combine a static map (a pgm file) with obstacle data received from a laser scanner. Both pieces of information should be accumulated into a local costmap. Is it enough to set the static_map parameter of the local costmap to true? Or does that mean that dynamic obstacles are no longer detected and avoided by the local planner? Or will it take more than that?
The reason is that some of the static obstacles are out of the range of my scanner (i.e. below its horizontal detection arc). They are there, I know where they are, they stay there and won't move. The definition of static. All I need the laser scanner for is moving obstacles that I certainly cannot add to the static map.
Thanks a lot!
Cheers,
Hendrik
Originally posted by Hendrik Wiese on ROS Answers with karma: 1145 on 2014-05-27
Post score: 0
|
According to REP 3, ROS Hydro was meant to support Boost 1.48. Everytime I try to install the libboost-1.48-dev package, however, the package manager tells me it will remove all the ros-hydro packages.
Why does that happen, and how can I use both simultaneously, if at all possible? I need to upgrade in order to compile the latest PCL trunk from source. Thanks!
Originally posted by georgebrindeiro on ROS Answers with karma: 1264 on 2014-05-27
Post score: 0
|
I followed the tutorial "slam_gmapping Tutorials MappingFromLoggedData",
and when I enter the command "rosrun map_server map_saver", I get the message "Waiting for the map"
and after that nothing happens.
What is the problem? Did I forget a step, or some configuration?
Originally posted by guigui on ROS Answers with karma: 33 on 2014-05-27
Post score: 0
Original comments
Comment by Rizwan on 2014-05-29:
same problem
|
In a few days May will end. So, on which of the four remaining days will the final ROS Indigo release happen?
Originally posted by fhurlbrink on ROS Answers with karma: 25 on 2014-05-27
Post score: 0
|
Hey guys,
I am writing a controller for a robot. I managed to get it running with MoveIt and Rviz, I have a JointStatePublisher running, and I can plan trajectories. Most of the time the robot moves along the desired path ;)
On the "Follow_Joint_Trajectory" topic, I get my goals and can post my feedback. Now I was wondering: what exactly happens to the feedback? The message has arrays for everything, like current position, desired position and error.
When I put random high values into the error array, I was expecting MoveIt to cancel the movement, or replan it?
So far I only get preempt requests when my robot moves too slowly and exceeds the time frame given by MoveIt. Are the feedback values from my robot used for anything, or are they just a placeholder? Another option might be that my state publisher "overrides" the feedback from my controller: since the data from the state publisher agrees with the planned trajectory, MoveIt could ignore the "wrong" feedback?
Thanks in advance,
Rabe
Originally posted by Rabe on ROS Answers with karma: 683 on 2014-05-27
Post score: 1
|
Hi
I downloaded and was able to compile tum_ardrone on Ubuntu 12.04. I'm using ROS Hydro.
As the README.txt explains for the Autopilot section, we should do the following:
type command "autoInit 500 800" in top-left text-field
click Clear and Send (maybe click Reset first)
=> drone will takeoff & init PTAM, then hold position.
But when I do the same, the drone doesn't take off. If I click takeoff and then Clear and Send the "autoInit 500 800" command, the drone tries to go to the target but can't work properly and doesn't hold the position. How can I fix this problem? Any idea?
I wonder if I should calibrate the camera and initialize PTAM by myself before sending the commands. Is it necessary?
How much does the structure of the environment influence the accuracy of the algorithm? Which objects are best to have in the camera's field of view? Can the AR.Drone work properly in small-scale and bounded environments?
Originally posted by hsoltani on ROS Answers with karma: 70 on 2014-05-28
Post score: 0
|
Hey
I'm using an std_msgs/String topic to send some messages between nodes. During testing, I tried to send the string "123"
via rostopic pub:
rostopic pub /topic_name std_msgs/String "123"
My rostopic echo received nothing, and I got the msg:
[WARN] [WallTime: 1401278233.697819] Inbound TCP/IP connection failed: field data must be of type str
So i tried to do:
rostopic pub /topic_name std_msgs/String "123a"
And the msg was sent perfectly.
Is there any reason for this behaviour? "123" is a valid string so there should be no reason to throw this error.
Originally posted by NEngelhard on ROS Answers with karma: 3519 on 2014-05-28
Post score: 3
|
Hi,
Is there a way I can store the data in my array (inside a C++ program) in a rosbag, just like I can store it in a txt or csv file? I know we can store data being published on a ROS topic in a rosbag, but I am wondering if I can do that without publishing the data on a ROS topic.
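My program is in C++, but to illustrate the kind of thing I mean, here is a rough Python sketch that writes an array straight into a bag with the rosbag API (the topic name /my_array and the Float64 type are just examples):
import rosbag
from std_msgs.msg import Float64

data = [1.0, 2.0, 3.0]   # stand-in for my array

bag = rosbag.Bag('mydata.bag', 'w')
try:
    for value in data:
        # writing directly into the bag, without any publisher involved;
        # the timestamp defaults to the current wall-clock time
        bag.write('/my_array', Float64(data=value))
finally:
    bag.close()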
regards
Originally posted by Latif Anjum on ROS Answers with karma: 79 on 2014-05-28
Post score: 0
|
After updating to Ubuntu 14.04 and Indigo I'm not able to execute the camera_calibration.
With the latest version of the OpenCV libraries (GitHub repository master), Python keeps looking for "cv.so". I have no such issues with Hydro.
Also, when using the repository installation of OpenCV (2.4.9), camera_calibration gives me this error:
Exception in thread Thread-5:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/opt/ros/indigo/lib/camera_calibration/cameracalibrator.py", line 68, in run
self.function(m)
File "/opt/ros/indigo/lib/camera_calibration/cameracalibrator.py", line 138, in handle_monocular
drawable = self.c.handle_msg(msg)
File "/opt/ros/indigo/lib/python2.7/dist-packages/camera_calibration/calibrator.py", line 749, in handle_msg
gray = self.mkgray(msg)
File "/opt/ros/indigo/lib/python2.7/dist-packages/camera_calibration/calibrator.py", line 262, in mkgray
mono16 = self.br.imgmsg_to_cv(msg, "mono16")
AttributeError: CvBridge instance has no attribute 'imgmsg_to_cv'
I am trying to calibrate an Argos 3D ToF camera. Everything works in Hydro. Even when I try to execute:
rosrun camera_calibration cameracalibrator.py
I do not get any answer. It gets frozen.
I have not seem any bug reported with the same problem. Is someone else having this issue?
Originally posted by amerino on ROS Answers with karma: 13 on 2014-05-28
Post score: 0
Original comments
Comment by DrBot on 2014-07-04:
I see the following on indigo ubuntu 14.04 AMD 64:
[ERROR] [WallTime: 1404511515.772882] bad callback: <bound method FaceDetector.image_callback of <main.FaceDetector object at 0x7f8665897e10>>
Traceback (most recent call last):
File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/topics.py", line 688, in _invoke_callback
cb(msg)
File "/home/alan/catkin_ws/src/rbx1/rbx1_vision/src/rbx1_vision/ros2opencv2.py", line 140, in image_callback
frame = self.convert_image(data)
File "/home/alan/catkin_ws/src/rbx1/rbx1_vision/src/rbx1_vision/ros2opencv2.py", line 281, in convert_image
cv_image = self.bridge.imgmsg_to_cv(ros_image, "bgr8")
AttributeError: CvBridge instance has no attribute 'imgmsg_to_cv'
In Hydro I installed ros-hydro-opencv2 and ros-hydro-vision-opencv, but opencv2 was not found in Indigo.
I am wondering if the opencv2 package needs to be built for indigo or installed from git?
|
I installed rgbdslam and octomap, and have been trying to run rgbdslam with octomap. The eventual goal is to get the MarkerArray /occupied_cells_vis_array working as it should. Some things I've done:
METHOD 1
Tried to run kinect+rgbdslam.launch and octomap_server.launch.
Opened rviz, and nothing is received on /octomap_point_cloud_array, /occupied_cells_vis_array, /rgbdslam/aggregate_clouds OR /rgbdslam/batch_clouds. The only PointCloud2 topic that displays any output is /camera/depth_registered/points.
METHOD 2
Tried to run rgbdslam_octomap.launch. It worked as well, but since color_octomap_server_node is not found in the octomap_server package (I believe there are some posts saying that you need the experimental version of octomap and some posts saying that octomap_server is coloured by default), I edited it so that it reads octomap_server_node.
It runs as well, but gives the same problem as above.
METHOD 3
While invoking the above scenario, I ran the following commands:
Running roswtf
gives that node subscription is unconnected:
*/rgbdslam:
*/cloud_in
Running rosrun tf view_frames
gives camera_link > /camera_rgb_frame > /camera_rgb_optical_frame and camera_link > /camera_depth_frame > /camera_depth_optical_frame. Two branches only.
Running rostopic list
The expected topics are listed:
/cloud_in
/octomap_point_cloud_array
/occupied_cells_vis_array
/rgbdslam/batch_clouds
/rgbdslam/aggregate_clouds
/tf
Running rosservice call /rgbdslam/ros_ui send_all did not change the above results.
I think it is a mapping issue and that the point clouds are not transformed to the correct topics. I am not sure which point cloud problem it is. Please give some pointers as to how I can solve this problem.
EDIT 05/28/14 +1h: The issue lies in the transform. /map is not present in my frames.pdf view. Not sure how to solve this problem either.
EDIT 06/04/14: Solved building octomap using rgbdslam data by recording a .bag file, launching octomap_server, and viewing it in RViz. No fix for continuous rgbdslam mapping and trajectory output found yet.
rgbdslam_octomap.launch
<launch>
<env name="ROSCONSOLE_CONFIG_FILE" value="$(find rgbdslam)/log.conf"/>
<!--might only work with the experimental octomap (as of May 11)-->
<include file="$(find openni_launch)/launch/openni.launch"/>
<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="false" output="log" >
<param name="config/topic_image_mono" value="/camera/rgb/image_color"/>
<param name="config/topic_points" value="/camera/rgb/points"/> <!--if empty, poincloud will be reconstructed from image and depth -->
<param name="config/wide_topic" value=""/>;
<param name="config/wide_cloud_topic" value=""/>;
<param name="config/drop_async_frames" value="true"/> <!-- Check association of depth and visual image, reject if not in sync -->
<param name="config/feature_detector_type" value="SIFTGPU"/><!-- If SIFTGPU is enabled in CMakeLists.txt, use SURF here -->
<param name="config/feature_extractor_type" value="SIFTGPU"/><!-- If SIFTGPU is enabled in CMakeLists.txt, use SURF here -->
<param name="config/matcher_type" value="FLANN"/> <!-- FLANN (not avail for ORB features), SIFTGPU (only for SIFTGPU detector) or BRUTEFORCE-->
<param name="config/max_keypoints" value="700"/><!-- Extract no more than this many keypoints (not honored by SIFTGPU)-->
<param name="config/min_keypoints" value="300"/><!-- Extract no less than this many ... -->
<param name="config/nn_distance_ratio" value="0.6"/> <!-- Feature correspondence is valid if distance to nearest neighbour is smaller than this parameter times the distance to the 2nd neighbour -->
<param name="config/optimizer_skip_step" value="1"/><!-- optimize every n-th frame -->
<param name="config/optimizer_iterations" value="4"/><!-- optimize every n-th frame -->
<param name="config/store_pointclouds" value="true"/> <!-- if, e.g., only trajectory is required, setting this to false saves lots of memory -->
<param name="config/backend_solver" value="pcg"/>
<param name="config/individual_cloud_out_topic" value="/rgbdslam/batch_clouds"/>;
<param name="config/visualization_skip_step" value="1"/> <!-- draw only every nth pointcloud row and line, high values require higher squared_meshing_threshold -->
<param name="config/send_clouds_rate" value="2"/> <!-- When sending the point clouds (e.g. to RVIZ or Octomap Server) limit sending to this many clouds per second -->
<param name="config/min_time_reported" value="0.01"/><!-- for easy runtime analysis -->
<param name="config/min_translation_meter" value="0.05"/><!-- frames with motion less than this, will be omitted -->
<param name="config/min_rotation_degree" value="1"/><!-- frames with motion less than this, will be omitted -->
<param name="config/predecessor_candidates" value="5"/><!-- search through this many immediate predecessor nodes for corrspondences -->
<param name="config/neighbor_candidates" value="5"/><!-- search through this many graph neighbour nodes for corrspondences -->
<param name="config/min_sampled_candidates" value="5"/><!-- search through this many uniformly sampled nodes for corrspondences -->
</node>
<!-- Launch octomap_server for mapping: Listens to incoming PointCloud2 data
and incrementally build an octomap. The data is sent out in different representations. -->
<node pkg="octomap_server" type="octomap_server_node" name="octomap_server" output="screen">
<param name="resolution" value="0.005" />
<!-- fixed map frame (set to 'map' if SLAM or localization running!) -->
<param name="frame_id" type="string" value="map" />
<!-- maximum range to integrate (speedup, accuracy) -->
<param name="max_sensor_range" value="6.0" />
<!-- Save octomap here on destruction of the server -->
<param name="save_directory" value="$(optenv OCTOMAP_SAVE_DIR ./)" />
<!-- data source to integrate (PointCloud2) -->
<remap from="cloud_in" to="/rgbdslam/batch_clouds" />
</node>
</launch>
Originally posted by xuningy on ROS Answers with karma: 101 on 2014-05-28
Post score: 1
|