I am trying to install the object_manipulation package for ROS Hydro (which I installed from source) on Ubuntu 13.10. To do that I downloaded the package source from the GitHub link and ran the following command in the downloaded folder: sudo /opt/ros/hydro/bin/catkin_make_isolated --install --install-space /opt/ros/hydro However, I am getting the following error: Traceback (most recent call last): File "/opt/ros/hydro/bin/catkin_make_isolated", line 12, in from catkin.builder import build_workspace_isolated ImportError: No module named catkin.builder Originally posted by Alice on ROS Answers with karma: 16 on 2014-03-06 Post score: 0
Hi all, I'm currently working on a camera driver and I was wondering if any of the ideas I list here are even possible: Group some dynamic_reconfigure parameters into groups or "boxes" so that the end user can see that those parameters are related. Ex: Acquisition params, Trigger params, White balance params... Add tooltips or a info box whose content changes when the mouse hovers a particular parameter in the dynamic reconfigure gui. And I've seen in the tutorials that it's possible to have more than one config file in one ROS package. If the package contains more than one node, then the relationship is easy: one cfg file for each, but could it be possible to have more than one cfg file in the same node? Therefore, could I "group" the parameters I want in different cfg files and show them as a tree in the dynamic reconfigure gui? I tried groups (even with prosilica_camera example) in dynamic_reconfigure but whenever I open the GUI to see the params, reconfigure_gui crashes: $ rosrun rqt_reconfigure rqt_reconfigure [INFO] [WallTime: 1394202876.542782] reconf loading #1/4 0.0 / 0.0sec node=/camera/image_raw/compressed [INFO] [WallTime: 1394202876.546365] reconf loading #2/4 0.0 / 0.0sec node=/camera/image_raw/compressedDepth [INFO] [WallTime: 1394202876.547479] reconf loading #3/4 0.0 / 0.0sec node=/camera/image_raw/theora [INFO] [WallTime: 1394202876.551956] reconf loading #4/4 0.0 / 0.01sec node=/prosilica_driver Traceback (most recent call last): File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_reconfigure/node_selector_widget.py", line 248, in _selection_changed_slot self._selection_selected(index_current, rosnode_name_selected) File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_reconfigure/node_selector_widget.py", line 198, in _selection_selected item_widget = item_child.get_dynreconf_widget() File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_reconfigure/treenode_qstditem.py", line 148, in get_dynreconf_widget self._param_name_raw) File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_reconfigure/dynreconf_client_widget.py", line 57, in __init__ group_desc, node_name) File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_reconfigure/param_groups.py", line 152, in __init__ self._create_node_widgets(config) File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_reconfigure/param_groups.py", line 198, in _create_node_widgets widget = eval(_GROUP_TYPES[group['type']])(self.updater, group) File "/opt/ros/hydro/lib/python2.7/dist-packages/rqt_reconfigure/param_groups.py", line 247, in __init__ super(BoxGroup, self).__init__(updater, config) TypeError: __init__() takes exactly 4 arguments (3 given) Originally posted by Miquel Massot on ROS Answers with karma: 1471 on 2014-03-06 Post score: 0 Original comments Comment by demmeln on 2014-03-07: the tooltip / info box idea seems useful enough. Maybe you could open an issue on github for future reference. Comment by Miquel Massot on 2014-03-10: Done. https://github.com/ros-visualization/rqt_common_plugins/issues/216
Hi, I have been using the individualMarkers node from the ar_track_alvar package successfully. With the individualMarkersNoKinect however, the detected markers are about 5 cm 'below' the right location. Apparently this depth-data-based improvement is important: if I comment out the int ret = PlaneFitPoseImprovement(i, m->ros_corners_3D, selected_points, cloud, m->pose); line in the individualMarkers code, I get a similarly large error. Is that really the best accuracy you can get without using the depth data? I am using a close-range PrimeSense camera with markers not more than 1 meter away from the camera. The marker size is 7 cm. When using the individualMarkersNoKinect node, I use the following topics: /camera/rgb/image_raw /camera/rgb/camera_info Thanks for any feedback, Bert Originally posted by bwillaert on ROS Answers with karma: 11 on 2014-03-06 Post score: 1 Original comments Comment by bwillaert on 2014-03-11: In the meantime, I did a calibration of the RGB camera (using http://wiki.ros.org/camera_calibration) and that improved the 2D-based tracking accuracy. The errors are now < 2 cm in the direction from the camera to the marker location. Depending on the camera quality and the calibration quality, I assume now that's the best possible...
I am using hector_slam with laser data from a Kinect, but when I move the Kinect to create a map, the map jumps and does not produce a complete map. Thanks for your suggestions. Originally posted by parhamso on ROS Answers with karma: 1 on 2014-03-06 Post score: 0
Hello, I'm trying to build ROS from source. The system I am using is: Ubuntu 13.10 ROS groovy desktop-full I used this guide to build the system: [I have no karma to post the link] Ubuntu is a fresh install from disk. Compiling went well, except that the dependency on the libyaml parser was not solved, I installed those packages and it worked, however when trying to build the collada_parser packages, I get the following error: kempenaarjj@ce011:~/ros/catkin_ws$ cd /home/kempenaarjj/ros/catkin_ws/build_isolated/collada_parser && /home/kempenaarjj/ros/catkin_ws/install_isolated/env.sh make -j2 -l2 [100%] Building CXX object CMakeFiles/collada_parser.dir/src/collada_parser.cpp.o /home/kempenaarjj/ros/catkin_ws/src/robot_model/collada_parser/src/collada_parser.cpp:45:17: fatal error: dae.h: No such file or directory #include <dae.h> ^ compilation terminated. make[2]: *** [CMakeFiles/collada_parser.dir/src/collada_parser.cpp.o] Error 1 make[1]: *** [CMakeFiles/collada_parser.dir/all] Error 2 make: *** [all] Error 2 The collada packages I got installed on the system(either via the rosdep dependency command or by manually using apt-get: kempenaarjj@ce011:~/ros/catkin_ws/build_isolated/collada_parser$ dpkg --get-selections | grep -v deinstall | grep coll collada-dom-dev install collada-dom2.4-dp install collada-dom2.4-dp-base install collada-dom2.4-dp-dev install Also a dae.h is present on the system: kempenaarjj@ce011:/$ find -name dae.h ./usr/include/collada-dom2.4/dae.h I already found that there were tickets regarding this in the bug tracker, however they do not state a fix. Which dependency do I need to install here in order to make this package work? Regards, Jan Jaap Originally posted by JanJaap on ROS Answers with karma: 11 on 2014-03-06 Post score: 1 Original comments Comment by po1 on 2014-04-28: I have a similar problem on OSX 10.8. dae.h is installed by homebrew in /usr/local/include/collada-dom2.4 [EDIT] never mind, it looks like I just add to re-run CMake for it to find the collada-dom-config.cmake
Hi all, I am digging deep in this forum, but I'm getting lost. I am using ROS Groovy and receiving a sensor_msgs::PointCloud2 from a depth_image_proc nodelet, and I want to process it using PCL 1.7. None of the solutions found in this forum are working for me, i.e., I'm missing the function prototypes for toPCL, fromROSMsg, etc. I would like to do something like: void CloudViewerPlugin::pointcloudCallback(const sensor_msgs::PointCloud2::ConstPtr& msg) { pcl::PointCloud<pcl::PointXYZ> cloud; pcl::PCLPointCloud2 pcl_pc; pcl_conversions::toPCL(*msg, pcl_pc); pcl::fromPCLPointCloud2(pcl_pc, cloud); or void CloudViewerPlugin::pointcloudCallback(const sensor_msgs::PointCloud2::ConstPtr& msg) { pcl::PointCloud<pcl::PointXYZ> cloud; pcl::fromROSMsg(*msg, cloud); but I found no function prototype to do this. Originally posted by madmage on ROS Answers with karma: 293 on 2014-03-07 Post score: 10 Original comments Comment by dornhege on 2014-03-07: Can you try to directly subscribe to a pcl pointcloud? There might be some pcl_ros magic going on that allows you to do so. Comment by madmage on 2014-03-07: Hi Christian, I already tried that solution, but: /opt/ros/groovy/include/ros/message_traits.h:121: error: ‘__s_getMD5Sum’ is not a member of ‘pcl::PointCloud<pcl::PointXYZ>’ on the subscribe line (I'm following http://wiki.ros.org/pcl_ros) Comment by madmage on 2014-03-07: Sorry, I hadn't #included the right files; however, now there are two errors: /opt/ros/groovy/include/pcl_ros/point_cloud.h:176: error: no matching function for call to ‘createMapping(std::vector<sensor_msgs::PointField_std::allocator<void > >&, pcl::MsgFieldMap&)’ and: /opt/ros/groovy/include/ros/serialization.h:134: error: ‘struct pcl::PCLHeader’ has no member named ‘deserialize’ Comment by Rufus on 2020-04-12: I believe your second option fromROSMsg is only available from Kinetic onwards
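For reference, a minimal sketch of the first approach mentioned in the question (pcl_conversions::toPCL followed by pcl::fromPCLPointCloud2), assuming the pcl_conversions package from perception_pcl is installed and declared as a build dependency; the node name and the "points" topic are placeholders:

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl_conversions/pcl_conversions.h>  // pcl_conversions::toPCL
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/conversions.h>                  // pcl::fromPCLPointCloud2

    // Convert an incoming ROS cloud into a typed PCL 1.7 cloud in two steps:
    // sensor_msgs::PointCloud2 -> pcl::PCLPointCloud2 -> pcl::PointCloud<PointXYZ>.
    void pointcloudCallback(const sensor_msgs::PointCloud2::ConstPtr& msg)
    {
      pcl::PCLPointCloud2 pcl_pc;
      pcl_conversions::toPCL(*msg, pcl_pc);

      pcl::PointCloud<pcl::PointXYZ> cloud;
      pcl::fromPCLPointCloud2(pcl_pc, cloud);

      ROS_INFO("Received a cloud with %u points", (unsigned int)cloud.points.size());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "cloud_listener");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("points", 1, pointcloudCallback);
      ros::spin();
      return 0;
    }

If I recall correctly, the same pcl_conversions header also provides pcl::fromROSMsg(*msg, cloud) as a one-step wrapper, but the two-step version above matches the snippet in the question.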
Can we choose which packages get tested? Currently catkin_make test runs all test code under the src directory, but I'd like to test only specific packages in order to save time. Originally posted by Kei Okada on ROS Answers with karma: 1186 on 2014-03-07 Post score: 2
Greetings! I am soon starting up my new hobby project: a SLAM robot with no clear goal yet (apart from navigation). All the parts have arrived but I need some advice regarding what stack to start with, as I do not want to assemble all software components from scratch. I will start off by presenting the list of hardware. CPU board: Hardkernel ODROID-U3 Controller board: Arduino plug-in board for ODROID-U3 Motors: Two pretty fat stepper motors Stepper motor drivers: Two L298N based stepper motor driver boards Sensor: Microsoft Kinect The robot will be assembled on a frame built of sheet aluminium together with a 360º turning nosewheel and airplane wheels for the stepper motors. The Kinect sensor will be mounted on top of the aluminium frame. Now there are several issues that I need to address and I hope that the ROS community can help me with some of them or at least provide some useful suggestions. The ODROID officially supports releases of Ubuntu 13.10 and up. As I understand it, the latest Ubuntu officially supported by ROS is 13.04. Obviously there is a gap here and I have the options of installing a non-supported OS on the ODROID, or compiling ROS from source - I am not very impressed by either of those two solutions. Another option is to run Android, but I have a bad feeling about that since it seems very experimental. I do not want to build my application from scratch as this is a waste of time imho. What I need to figure out is which pre-assembled stack to use in order to get as close as possible to my particular setup. I'd be happy if the only thing I'd actually need to implement in code is the stepper motor interface on the Arduino. What is the recommended workflow for developing the robot application? SSH in and have the build environment located on the ODROID? Cross-compile on a PC? Finally, I am interested in some cool applications for the robot, e.g. vacuum cleaner, spy, etc. Let's get this discussion started! /Simon Originally posted by aerkenemesis on ROS Answers with karma: 21 on 2014-03-07 Post score: 2 Original comments Comment by demmeln on 2014-03-07: Please don't open duplicates.
Hello everyone, I am working on Groovy and have all the libraries installed. I have created a custom .msg file with following entries : Header header pcl_msgs/PolygonMesh mesh sensor_msgs/PointCloud2 normals I have included pcl_msgs in my package.xml and CMakelists.txt also. But unfortunately on compilation I get following errors -- checking for module 'openni-dev' -- package 'openni-dev' not found -- Could NOT find openni (missing: OPENNI_INCLUDE_DIRS) ** WARNING ** io features related to openni will be disabled -- checking for module 'openni-dev' -- package 'openni-dev' not found -- Could NOT find openni (missing: OPENNI_INCLUDE_DIRS) ** WARNING ** visualization features related to openni will be disabled -- looking for PCL_COMMON -- looking for PCL_KDTREE -- looking for PCL_OCTREE -- looking for PCL_SEARCH -- looking for PCL_SAMPLE_CONSENSUS -- looking for PCL_IO -- looking for PCL_GEOMETRY -- looking for PCL_FEATURES -- looking for PCL_FILTERS -- looking for PCL_KEYPOINTS -- looking for PCL_SURFACE -- looking for PCL_REGISTRATION -- looking for PCL_SEGMENTATION -- looking for PCL_VISUALIZATION -- looking for PCL_TRACKING CMake Error at /opt/ros/groovy/share/genmsg/cmake/genmsg-extras.cmake:252 (message): Could not find 'share/pcl_msgs/cmake/pcl_msgs-msg-paths.cmake' (searched in Call Stack (most recent call first): modsrc/CMakeLists.txt:31 (generate_messages) -- Configuring incomplete, errors occurred! Invoking "cmake" failed CMakelist: cmake_minimum_required(VERSION 2.8.3) project(mod_msgs) find_package(catkin REQUIRED COMPONENTS nav_msgs roscpp rospy sensor_msgs std_msgs message_generation pcl pcl_ros pcl_msgs ) add_message_files( FILES TriangleMesh.msg ) generate_messages( DEPENDENCIES nav_msgs sensor_msgs std_msgs pcl pcl_ros pcl_msgs ) catkin_package( CATKIN_DEPENDS nav_msgs roscpp rospy sensor_msgs std_msgs message_runtime pcl_msgs pcl pcl_ros ) include_directories( ${PCL_INCLUDE_DIRS} ${catkin_INCLUDE_DIRS} ) link_directories(${PCL_LIBRARY_DIRS}) add_definitions(${PCL_DEFINITIONS}) I have looked through pcl forums but not able to resolve this problem. Any kind of guidance would be much appreciated. Thanks Originally posted by rosAS on ROS Answers with karma: 21 on 2014-03-07 Post score: 0
Is there a way to get the list of packages a catkin package depends on, as rospack depends used to do? k-okada@kokada-t430s:/opt/ros/hydro/share/roslang$ rospack depends roslanag [rospack] Error: no such package roslanag k-okada@kokada-t430s:/opt/ros/hydro/share/roslang$ rospack depends-on roslanag [rospack] Warning: no such package roslanag UPDATE: The original question is invalid; rospack depends does work. $ rospack depends roslang catkin genmsg Originally posted by Kei Okada on ROS Answers with karma: 1186 on 2014-03-07 Post score: 1
What are implications of catkin_add_env_hooks and Windows? Do they just get ignored, with the intent that they're mainly there to provide window-dressing stuff like tab completion? Originally posted by mikepurvis on ROS Answers with karma: 1153 on 2014-03-07 Post score: 1
Hi everyone! When I was setting up my robot for navigation, I followed the navigation tutorial like this. But when I roslaunch move_base.launch, I run into an error. It says: You must specify at least three points for the robot footprint, reverting to previous footprint. I know something is probably wrong with the costmap_common_params.yaml setting "footprint: [[x0,y0],[x1,y1]...[xn,yn]]", but I do not know how to solve the problem. Can anyone help me? I have added the costmap_common_params.yaml: obstacle_range: 2.5 raytrace_range: 3.0 footprint: [[x0, y0], [x1, y1], ... [xn, yn]] inflation_radius: 0.55 observation_sources: laser_scan_sensor laser_scan_sensor: {sensor_frame: base_laser, data_type: LaserScan, topic: scan, marking: true, clearing: true} Originally posted by Yuichi Chu on ROS Answers with karma: 148 on 2014-03-07 Post score: 4 Original comments Comment by ahendrix on 2014-03-07: Can you add your costmap_common_params.yaml to the question? Comment by Yuichi Chu on 2014-03-10: OK, I have added my costmap_common_params.yaml to the question. Maybe I should fill in x0, y0, x1 ... xn with actual values. But I do not know what footprint and inflation_radius mean in this file and how much they matter to the algorithm. I have read the tutorial but cannot catch the point. Can you help me?
Is Latching handled by the Master or Node? Thanks, Aaron Originally posted by unknown_entity1 on ROS Answers with karma: 104 on 2014-03-07 Post score: 0
Is it possible to install an overlay in the same install space as an underlay? For example, if I have a base install from source in /opt/ros/hydro, it might be convenient to extend it with additional packages in an overlayed workspace instead of augmenting the original workspace (which for the desktop-full install has about 250 packages on hydro). If those additional packages are merely dependencies (i.e. I don't intend to modify them) and not specific to one project, installing them on top of the underlay in /opt/ros/hydro would be convenient. Originally posted by demmeln on ROS Answers with karma: 4306 on 2014-03-07 Post score: 0
Hello everyone, I am receiving a pointCloud2 type messages by subscribing to a topic, in my code. Now I need to build an octomap from this. I am currently using Groovy and Octomap libraries for ROS Groovy. I have gone through the documentation of octomap, but couldn't retrieve much from that. The function octomap::insertPointCloud() is somewhat close to my requirement. Is there a way to create an octomap using this API for the callback being received ? And can I eventually write an octomap-file to the disk using some function ? I have gone through few threads like this also but not able to get a concrete idea. http:// answers.ros.org/question/89906/how-to-generate-an-octomap-from-a-point-cloud-with-hydro/ I'd be grateful if someone can point me to an example code for the same or the order we need to follow to achieve this ? Thanks. Originally posted by rosAS on ROS Answers with karma: 21 on 2014-03-07 Post score: 1
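To make the question more concrete, here is a rough sketch of the pipeline being asked about, assuming the octomap C++ library and pcl_conversions are available: convert the ROS message to a PCL cloud, copy the points into an octomap::Pointcloud, call insertPointCloud(), and write the tree to disk. The 5 cm resolution, the fixed sensor origin at the frame origin, and the "points" topic are simplifying assumptions; in a real setup the sensor origin should come from TF.

    #include <vector>
    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/conversions.h>
    #include <pcl/filters/filter.h>   // pcl::removeNaNFromPointCloud
    #include <octomap/octomap.h>

    octomap::OcTree tree(0.05);  // occupancy tree with 5 cm resolution (assumed)

    void cloudCallback(const sensor_msgs::PointCloud2::ConstPtr& msg)
    {
      // ROS message -> typed PCL cloud, dropping NaN points from the depth sensor.
      pcl::PCLPointCloud2 pcl_pc;
      pcl_conversions::toPCL(*msg, pcl_pc);
      pcl::PointCloud<pcl::PointXYZ> cloud;
      pcl::fromPCLPointCloud2(pcl_pc, cloud);
      std::vector<int> indices;
      pcl::removeNaNFromPointCloud(cloud, cloud, indices);

      // Copy the points into octomap's own container and insert them as one scan.
      octomap::Pointcloud scan;
      for (size_t i = 0; i < cloud.points.size(); ++i)
        scan.push_back(cloud.points[i].x, cloud.points[i].y, cloud.points[i].z);
      tree.insertPointCloud(scan, octomap::point3d(0.0f, 0.0f, 0.0f));
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "cloud_to_octomap");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("points", 1, cloudCallback);
      ros::spin();

      // Persist the accumulated map in octomap's binary .bt format on shutdown.
      tree.writeBinary("map.bt");
      return 0;
    }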
As far as I understand, ros is the metapackage that contains several packages, https://github.com/ros/ros/blob/indigo-devel/ros/package.xml, but how do we know which packages ros contains? k-okada@kokada-t430s:/opt/ros/hydro/share$ rospack depends ros [rospack] Error: no such package ros k-okada@kokada-t430s:/opt/ros/hydro/share$ rosstack depends ros [rosstack] Error: no such package ros k-okada@kokada-t430s:/opt/ros/hydro/share$ rospack depends roslib catkin rospack Originally posted by Kei Okada on ROS Answers with karma: 1186 on 2014-03-07 Post score: 3 Original comments Comment by tfoote on 2014-03-07: As a side note: metapackages are only valid dependencies of other metapackages. They are mostly a convenience for users when installing.
Hi All, I'm sure I've seen that rviz has a virtual joystick plug-in. Any idea where I might find it (Publishing twist messages)? Looked around rviz/these answers and google and it doesn't seem obvious. (Unless I'm imagining things of course! :) ) Many Thanks Mark Originally posted by MarkyMark2012 on ROS Answers with karma: 1834 on 2014-03-07 Post score: 0
I have created a simple but functional/complete node for frontier exploration using the hydro costmap_2d layers, but I'm unsure how 'mature' something should be before I should be releasing it. I thought it was a huge pain that 'explore' was unmaintained, and all the other exploration packages were coupled to larger stacks. I've ad-hoc tested it pretty thoroughly, and would like to squeeze out some time in a month or so to implement some proper tests and documentation, but for now I'd just like to get the code out there and find out if anyone else is interested in using, contributing, pointing out glaring mistakes, or suggesting functionality. Is this ready to be released into the wild via bloom? https://github.com/paulbovbel/frontier_exploration Originally posted by paulbovbel on ROS Answers with karma: 4518 on 2014-03-07 Post score: 0 Original comments Comment by ahendrix on 2014-03-07: This looks awesome!
Hi there! I have been trying to solve these build errors the whole day, but in vain. I followed the tutorial on the PCL tutorial page step by step, but when I use the "catkin_make" command to build the package, numerous errors appear! I don't know whether the tutorial has some defects or my program has some faults. Please give me a hand, thank you! The program is as follows: #include <ros/ros.h> #include <sensor_msgs/PointCloud2.h> #include <pcl_conversions/pcl_conversions.h> #include <pcl/point_cloud.h> #include <pcl/point_types.h> #include <pcl/filters/voxel_grid.h> #include <pcl/conversions.h> #include <pcl/PCLPointCloud2.h> ros::Publisher pub; void cloud_cb(const sensor_msgs::PointCloud2ConstPtr& cloud) { // pcl::PCLPointCloud2 pcl_pc; //pcl::PointCloud<pcl::PointXYZ> cloud_out; sensor_msgs::PointCloud2 cloud_filtered; //pcl_conversions::toPCL(*cloud,pcl_pc); // pcl::fromROSMsg (pcl_pc,cloud_in); pcl::VoxelGrid<sensor_msgs::PointCloud2> sor; sor.setInputCloud(*cloud); sor.setLeafSize(0.01,0.01,0.01); sor.filter(cloud_filtered); // pcl::fromPCLPointCloud2(pcl_pc,cloud_out); pub.publish(cloud_filtered); } int main(int argc,char**argv) { ros::init(argc,argv,"my_pcl_tutorial"); ros::NodeHandle nh; ros::Subscriber sub=nh.subscribe("cloud2",1,cloud_cb); pub=nh.advertise<sensor_msgs::PointCloud2> ("cloud_filtered",1); ros::spin(); } Originally posted by keygeorge on ROS Answers with karma: 18 on 2014-03-07 Post score: 0 Original comments Comment by Martin Peris on 2014-03-09: We could better help you if you attach the errors that you encounter :)
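For comparison, a sketch of the same node written against the PCL 1.7 API that ships with Hydro, where VoxelGrid is instantiated on pcl::PCLPointCloud2 instead of sensor_msgs::PointCloud2 and setInputCloud expects a shared pointer rather than a dereferenced message. Topic names follow the snippet above; this only illustrates that API difference and is not necessarily the whole story behind the build errors:

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <pcl_conversions/pcl_conversions.h>
    #include <pcl/PCLPointCloud2.h>
    #include <pcl/filters/voxel_grid.h>

    ros::Publisher pub;

    void cloud_cb(const sensor_msgs::PointCloud2ConstPtr& cloud)
    {
      // Copy the ROS message into PCL's own blob type; the filter wants a ConstPtr.
      pcl::PCLPointCloud2::Ptr cloud_in(new pcl::PCLPointCloud2());
      pcl_conversions::toPCL(*cloud, *cloud_in);

      // Downsample with a 1 cm voxel grid.
      pcl::PCLPointCloud2 cloud_filtered_pcl;
      pcl::VoxelGrid<pcl::PCLPointCloud2> sor;
      sor.setInputCloud(cloud_in);
      sor.setLeafSize(0.01f, 0.01f, 0.01f);
      sor.filter(cloud_filtered_pcl);

      // Convert back to a ROS message and publish.
      sensor_msgs::PointCloud2 cloud_filtered;
      pcl_conversions::fromPCL(cloud_filtered_pcl, cloud_filtered);
      pub.publish(cloud_filtered);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "my_pcl_tutorial");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("cloud2", 1, cloud_cb);
      pub = nh.advertise<sensor_msgs::PointCloud2>("cloud_filtered", 1);
      ros::spin();
      return 0;
    }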
Hi, I just wanted to know if there is some example on getting sonar and bumper data from a pioneer3dx using p2os, I can't find anything. I thought it was like getting pose data but now I think it's nothing like that. Can you help me? It seems like p2os does not offer access to bumper data, but is it possible in any way? Thanks in advance. Originally posted by NullX4 on ROS Answers with karma: 25 on 2014-03-08 Post score: 1
Hi all, I have a question regarding the usage of ar_pose with kinect. So, what I did so far. I rosmake the ar_pose node and to make everything run I execute the following commands: roslaunch openni_launch openni.launch (for kinect) and roslaunch ar_pose ar_pose_single.launch In the list of topics I have /ar_marker, /world and topics related to the camera. I tried choosing different topics for fixed and target frames as well as for camera topic. I am getting an image from the camera, but the target (I use standard pattHiro.pdf mentioned in ros wiki for ar_pose) is not recognized. I also tried changing options for my kinect in launch file, but without success. Here is the content of the launch file for ar_pose: <launch> <arg name="kinect" default="false"/> <param name="use_sim_time" value="false"/> <node pkg="rviz" type="rviz" name="rviz" args="-d $(find ar_pose)/launch/live_single.vcg"/> <node pkg="tf" type="static_transform_publisher" name="world_to_cam" args="1 1 0.3 0 0 0 world ar_marker 10" /> <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" respawn="false" output="log"> <param name="video_device" type="string" value="/dev/video1"/> <param name="camera_frame_id" type="string" value="usb_cam"/> <param name="io_method" type="string" value="mmap"/> <param name="image_width" type="int" value="640"/> <param name="image_height" type="int" value="480"/> <param name="pixel_format" type="string" value="mjpeg"/> <rosparam param="D">[0.025751483065329935, -0.10530741936574876,-0.0024821434601277623, -0.0031632353637182972, 0.0000]</rosparam> <rosparam param="K">[558.70655574536931, 0.0, 316.68428342491319, 0.0, 553.44501004322387, 238.23867473419315, 0.0, 0.0, 1.0]</rosparam> <rosparam param="R">[1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]</rosparam> <rosparam param="P">[558.70655574536931, 0.0, 316.68428342491319, 0.0, 0.0, 553.44501004322387, 238.23867473419315, 0.0, 0.0, 0.0, 1.0, 0.0]</rosparam> </node> <node name="ar_pose" pkg="ar_pose" type="ar_single" respawn="false" output="screen"> <param name="marker_pattern" type="string" value="data/patt.hiro"/> <param name="marker_width" type="double" value="80.0"/> <param name="marker_center_x" type="double" value="0.0"/> <param name="marker_center_y" type="double" value="0.0"/> <param name="threshold" type="int" value="100"/> <param name="use_history" type="bool" value="true"/> <remap if="$(arg kinect)" from="/usb_cam/image_raw" to="/camera/rgb/image_raw"/> <remap if="$(arg kinect)" from="/usb_cam/camera_info" to="/camera/rgb/camera_info"/> <remap unless="$(arg kinect)" from="/usb_cam/image_raw" to="/wide_stereo/left/image_rect_color"/> <remap unless="$(arg kinect)" from="/usb_cam/camera_info" to="/wide_stereo/left/camera_info"/> </node> </launch> Could anyone please help me with settings needed for kinect to recognize ar target in ar_pose and topics I should subscribe to? Thank you Originally posted by Andrii Matviienko on ROS Answers with karma: 56 on 2014-03-09 Post score: 0
I want to integrate a state machine using the smach library into my robot, but when I tried running the first tutorial I got this error message: ImportError: No module named smach. On the documentation page I found no installation instructions. Does smach come standard with Groovy and I am importing it wrong, or does it need to be installed separately when using Groovy? Originally posted by Massbuilder on ROS Answers with karma: 71 on 2014-03-09 Post score: 0
I followed the TF tutorial here => wiki.ros.org/navigation/Tutorials/RobotSetup/TF (sorry, my karma doesn't allow me to post a link) and I added these lines at the end of the CMakeLists.txt: add_executable(tf_broadcaster src/tf_broadcaster.cpp) add_executable(tf_listener src/tf_listener.cpp) target_link_libraries(tf_broadcaster ${catkin_LIBRARIES}) target_link_libraries(tf_listener ${catkin_LIBRARIES}) Then I ran catkin_make, which compiled without a problem, but when I use rosrun robot_setup_tf tf_broadcaster I get this in my terminal: [rosrun] Couldn't find executable named tf_broadcaster below /home/ros/catkin_ws/src/robot_setup_tf After some searching it appears that the file is actually located here: /home/ros/catkin_ws/devel/lib/robot_setup_tf I can launch it manually and it works perfectly; it's just that I can't understand why it's there and not in the src folder of robot_setup_tf. Could someone explain this to me? Originally posted by Maya on ROS Answers with karma: 1172 on 2014-03-09 Post score: 1
Recently, I discovered a process on my ubuntu that was eating away at both processor time and memory, 18% and 10% respectively. Avahi-daemon and I got rid of it along with pulseaudio. It seems to me that there is no need for avahi-daemon which is a zeroconf derivative in itself. If this process caused issues, isn't well maintained, and eats up precious resources then why is it being utilized in ROS? All the way to Hydro, I see zeroconf. I read the discription provided by the ROS wiki, and checked Archlinux for zeroconf and avahi. I still don't see what purpose it serves because I don't see it being a dependency for anything other than ros-hydro-turtlebot-bringup. What purpose does zeroconf and avahi serve when the amount of resources it consumes is detrimental in a resource scarce scenario such as most robot systems, e.g. 7-DOF manipulators, vision-based path planning/navigation, etc? I am trying to understand what, why, and how of this package/stack. I feel like it should be deprecated. Originally posted by paresh471 on ROS Answers with karma: 86 on 2014-03-09 Post score: 2
Hello all! I can't compile the katana_description package. I have this error: [ rosmake ] Last 40 linestana_description: 3.5 sec ] [ 1 Active 42/43 Complete ] {------------------------------------------------------------------------------- -- Using CMAKE_PREFIX_PATH: /home/max/hydro_ws/devel;/opt/ros/hydro -- This workspace overlays: /home/max/hydro_ws/devel;/opt/ros/hydro -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using CATKIN_ENABLE_TESTING: ON -- Skip enable_testing() for dry packages -- Using CATKIN_TEST_RESULTS_DIR: /home/max/hydro_ros_ws/katana_driver/katana_description/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- catkin 0.5.81 [rosbuild] Including /opt/ros/hydro/share/roslisp/rosbuild/roslisp.cmake [rosbuild] Including /opt/ros/hydro/share/roscpp/rosbuild/roscpp.cmake [rosbuild] Including /opt/ros/hydro/share/rospy/rosbuild/rospy.cmake -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: CMAKE_TOOLCHAIN_FILE -- Build files have been written to: /home/max/hydro_ros_ws/katana_driver/katana_description/build cd build && make -j2 -l2 make[1]: Entering directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' make[2]: Entering directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' make[3]: Entering directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' make[3]: Leaving directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' make[3]: Entering directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' [ 3%] [ 3%] Generating ../meshes/katana/convex/katana_motor2_lift_link.obj Generating ../meshes/katana/convex/katana_gripper_l_finger.obj make[3]: ivcon: Command not found make[3]: ivcon: Command not found make[3]: *** [../meshes/katana/convex/katana_gripper_l_finger.obj] Error 127 make[3]: *** Waiting for unfinished jobs.... make[3]: *** [../meshes/katana/convex/katana_motor2_lift_link.obj] Error 127 make[3]: Leaving directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' make[2]: *** [CMakeFiles/media_files.dir/all] Error 2 make[2]: Leaving directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' make[1]: *** [all] Error 2 make[1]: Leaving directory `/home/max/hydro_ros_ws/katana_driver/katana_description/build' I have already installed the ivcon package from https://github.com/ros/ivcon, but it didn't help. (Ubuntu 12.04, Hydro) How can I fix it? Originally posted by seredin on ROS Answers with karma: 45 on 2014-03-09 Post score: 0
Hello, I have some Javascript code that uses rosbridge_suite (Hydro debian install under Ubuntu 12.04) and roslibjs (latest version) to read in a parameter value from the rosbridge server and then set its value in a form field. It seems I don't understand the timing of when the parameter is read or the sequence in which Javascript functions are executed (or both) because I'm not getting the result I expected. My code (included below) does the following: sets a default value for the parameter reads the parameter from the rosbridge server and, if not null, overrides the default value sets the value of a form field to the parameter value What I see on my form is that I always get the default value, not the value from the parameter server. If I put some log statements in my script, I see that the form value is getting updated before the parameter value is read via roslib even though the roslib code comes first. I am also verifying that the value is being read from the parameter server correctly. Here now is the relevant snippet: var maxLinearSpeed = 0.2; // Create a Param object for the max linear speed var maxLinearSpeedParam = new ROSLIB.Param({ ros : ros, name : '/maxLinearSpeed' }); // Get the value of the max linear speed paramater maxLinearSpeedParam.get(function(value) { if (value != null) { maxLinearSpeed = value; console.log(maxLinearSpeed); } }); var formElement = document.getElementById('maxLinearSpeedDisplay'); formElement.innerHTML = maxLinearSpeed; So if I set the value of the maxLinearSpeed parameter to 0.5 on the rosbridge server, then run my script, I always see the default value of 0.2 in the form field labeled 'maxLinearSpeedDisplay' instead of 0.5 as I expected. Yet the statement console.log(value) above displays the correct value of 0.5. Any idea what I am doing wrong? Thanks, patrick Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2014-03-09 Post score: 1
Hello all, I'm trying to read values from the base_link frame in order to get my robot's coordinates with the following code: tf::TransformListener listener; tf::StampedTransform transform; try { listener.lookupTransform("/base_link", "/odom", ros::Time(0), transform); } catch (tf::TransformException ex) { ROS_ERROR("%s", ex.what()); } However, all I get is an error. [ERROR] [1394401426.907398394, 36.516000000]: Frame id /base_link does not exist! Frames (1) I can visualize base_link in rviz or read its values in the terminal using rostopic echo, so I'm pretty sure it exists. Any help is much appreciated. Originally posted by SpiderRico on ROS Answers with karma: 35 on 2014-03-09 Post score: 0
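As a point of reference, the usual pattern looks roughly like the sketch below: keep the listener alive, give it time to fill its buffer (for example with waitForTransform), and note that lookupTransform("/odom", "/base_link", ...) returns the pose of base_link expressed in the odom frame. The frame names simply mirror the question; whether the target/source order needs swapping depends on which direction is wanted.

    #include <ros/ros.h>
    #include <tf/transform_listener.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "odom_listener");
      ros::NodeHandle nh;

      // The listener must exist long enough to receive transforms before
      // lookupTransform can succeed; "Frames (1)" usually means its buffer is empty.
      tf::TransformListener listener;

      ros::Rate rate(10.0);
      while (nh.ok())
      {
        tf::StampedTransform transform;
        try
        {
          // Block briefly until the transform is available, then look it up.
          listener.waitForTransform("/odom", "/base_link", ros::Time(0), ros::Duration(1.0));
          listener.lookupTransform("/odom", "/base_link", ros::Time(0), transform);
          ROS_INFO("Robot at x=%.2f y=%.2f in the odom frame",
                   transform.getOrigin().x(), transform.getOrigin().y());
        }
        catch (tf::TransformException& ex)
        {
          ROS_WARN("%s", ex.what());
        }
        rate.sleep();
      }
      return 0;
    }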
Hi, Which would be the recommended quadrotor for using hector_slam. I am currently trying the nicely documented tutorial : hector_quadrotor ( link ) . I was wondering on which all quadrotor everyone is using the hector_slam package. While looking for this answer, I came across these links: http://answers.ros.org/question/9101/which-quadrotor-has-the-best-ros-support/ http://answers.ros.org/question/12412/autonomously-flying-quadrotor/ http://wiki.ros.org/MikroKopter/Tutorials/shared_control http://wiki.ros.org/Robots#UAVs http://www.ros.org/news/robots/uavs/ I could not figure out which quadrotor was used for development and testing of Hector SLAM. So with my finding, my best option (might be only) would be Ascending Technologies Pelican. Has anyone tried using Hector SLAM on Pelican ? -Suraj Originally posted by ss_robotics on ROS Answers with karma: 75 on 2014-03-09 Post score: 0 Original comments Comment by ahendrix on 2014-03-09: According to their paper, hector_slam was developed on a ground vehicle.
Greetings! I am soon starting up my new hobby project: a SLAM robot with no clear goal yet (apart from navigation). All the parts have arrived but I need some advice regarding what stack to start with, as I do not want to assemble all software components from scratch. I will start off by presenting the list of hardware. CPU board: Hardkernel ODROID-U3 Controller board: Arduino plug-in board for ODROID-U3 Motors: Two pretty fat stepper motors Stepper motor drivers: Two L298N based stepper motor driver boards Sensor: Microsoft Kinect The robot will be assembled on a frame built of sheet aluminium together with a 360º turning nosewheel and airplane wheels for the stepper motors. The Kinect sensor will be mounted on top of the aluminium frame. Now there are several issues that I need to address and I hope that the ROS community can help me with some of them or at least provide some useful suggestions. The ODROID officially supports releases of Ubuntu 13.10 and up. As I understand it, the latest Ubuntu officially supported by ROS is 13.04. Obviously there is a gap here and I have the options of installing a non-supported OS on the ODROID, or compiling ROS from source - I am not very impressed by either of those two solutions. Another option is to run Android, but I have a bad feeling about that since it seems very experimental. I do not want to build my application from scratch as this is a waste of time imho. What I need to figure out is which pre-assembled stack to use in order to get as close as possible to my particular setup. I'd be happy if the only thing I'd actually need to implement in code is the stepper motor interface on the Arduino. What is the recommended workflow for developing the robot application? SSH in and have the build environment located on the ODROID? Cross-compile on a PC? Finally, I am interested in some cool applications for the robot, e.g. vacuum cleaner, spy, etc. Let's get this discussion started! /Simon Originally posted by aerkenemesis on ROS Answers with karma: 21 on 2014-03-09 Post score: 0
What I tried: $ sudo sh -c 'echo "deb packages.ros.org/ros/ubuntu precise main" > /etc/apt/sources.list.d/ros-latest.list' $ sudo apt-get install ros-hydro-knowrob Error: E: Unable to locate package ros-hydro-knowrob Note: sudo apt-get install ros-groovy-knowrob works fine, but I am on Hydro. Originally posted by ratneshmadaan on ROS Answers with karma: 71 on 2014-03-10 Post score: 0
I am following this tutorial wiki.ros.org/rosbag/Tutorials/Exporting%20image%20and%20video%20data, but "rosmake image_view --rosdep-install" gives an error: "rosmake: error: no such option: --rosdep-install". Any help will be greatly appreciated. Originally posted by akash on ROS Answers with karma: 1 on 2014-03-10 Post score: 0
I'm using ROS and I'm trying to follow the tutorials on how to export image and video data from a bag file. I keep getting an error when running "rosmake image_view --rosdep-install". What am I missing? Originally posted by Leyonce on ROS Answers with karma: 97 on 2014-03-10 Post score: 0
I want to use laser data in my program (in C++). I can see the laser scanner's range data with rostopic echo /scan/ranges, but how can I use it in my code? Originally posted by programmer on ROS Answers with karma: 61 on 2014-03-10 Post score: 0
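A minimal sketch of the usual C++ pattern: subscribe to the scan topic and read the ranges array inside the callback. The closest-obstacle computation is only an illustration, and the topic name assumes the same /scan topic mentioned in the question.

    #include <ros/ros.h>
    #include <sensor_msgs/LaserScan.h>

    // Called every time a new scan arrives; scan->ranges is the same array
    // that "rostopic echo /scan/ranges" prints.
    void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
    {
      // Example use: find the closest valid reading in this scan.
      float closest = scan->range_max;
      for (size_t i = 0; i < scan->ranges.size(); ++i)
      {
        float r = scan->ranges[i];
        if (r >= scan->range_min && r <= scan->range_max && r < closest)
          closest = r;
      }
      ROS_INFO("Closest obstacle: %.2f m", closest);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "laser_reader");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("scan", 1, scanCallback);
      ros::spin();
      return 0;
    }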
I'm trying to install ROS (ros-hydro-desktop-full), and every time it gets stuck at the same specific place: Err http: //packages.ros.org/ros/ubuntu/ precise/main ros-hydro-opencv2 amd64 2.4.6-3precise-20140130-1852-+0000 Connection failed [IP: 64.50.236.52 80] I try pinging the reported IP and it works fine. I have followed the exact same steps as mentioned in the Installation Guide- and the most of the other packages get downloaded fine. This one, and a few more don't: Err http: //archive.ubuntu.com/ubuntu/ precise/main libboost1.46-dev amd64 1.46.1-7ubuntu3 Connection failed [IP: 91.189.92.200 80] Err http: //archive.ubuntu.com/ubuntu/ precise/main texlive-binaries amd64 2009-11ubuntu2 Connection failed [IP: 91.189.91.14 80] Err http: //archive.ubuntu.com/ubuntu/ precise-updates/main libicu-dev amd64 4.8.1.1-3ubuntu0.1 Connection failed [IP: 91.189.91.13 80] Err http: //archive.ubuntu.com/ubuntu/ precise-security/main libicu-dev amd64 4.8.1.1-3ubuntu0.1 Connection failed [IP: 91.189.91.15 80] It exits with "Unable to correct missing packages." Same story if I try installing these separately. Could someone please help? I'm using Ubuntu 12.04.4 (LTS), 64-bit, installed on a Windows partition using Wubi. I have updated and upgraded (many times, actually). Originally posted by a-Jays on ROS Answers with karma: 1 on 2014-03-10 Post score: 0
Hi everyone, what do I have to do to add custom message types for use with rosmatlab by TU Darmstadt? I tried to add a new directory called msg in workspace/src/rosmatlab/rosmatlab and placed a file called Num.msg inside of it. But examining the appropriate CMakeLists.txt file, I noticed all message being included with the function add_mex_messages followed by the message name in paranthesis. So does anybody know how to solve this? [Edit 2014-03-12] Okay I now created a new package outside of the rosmatlab workspace and defined a new message type there. Then I compiled it to make it accessible by the ROS system itself. Then I added a add_mex_messages() statement to the CMakeLists.txt in the rosmatlab workspace and a <build_depend /> tag in the package.xml to refer to that package containing the new message type. When I compiled the rosmatlab workspace with these settings everything went well, until cmake claimed a non-existent header file referred to by the C-MEX file of my message type. In fact the header file doesn't exist at all and I'm wondering why it hasn't been generated by rosmatlab or whatever if everything else has been generated. I'm sorry, if it is obvious to all of you, but it is neither documented nor obvious to me, so if anyone has anything to say about this topic, please let me know! Thanks in advance. Greets, Roberto Originally posted by bnm-rc on ROS Answers with karma: 22 on 2014-03-10 Post score: 0
Newbie warning. If this is a stupid question, please be patient. :-) I have installed ROS Hydro. I have run through all the beginners tutorials ... so I have created publisher nodes, subscriber nodes, message types, services, and clients. I have also worked through most of the Gazebo tutorials, so I feel like I understand how to write software on my robot and how to test that software using the simulator. But I have not come across anything yet about human operator graphical interfaces. What is the robot development community typically using to create their operator GUIs and how do those GUIs typically communicate with the robot? I have not found anything on this topic, so I'm assuming that there are no tools bundled with ROS to help me with this part of my robot software suite. Any help is greatly appreciated. Thanks in advance. Originally posted by Kurt Leucht on ROS Answers with karma: 486 on 2014-03-10 Post score: 0
When specifying the machine tag for a ROS node in a launch file, one is in principle able to run a ROS node on any reachable machine in the network that is configured for ROS. Do I get this right? If yes, I am wondering about the following questions: How does this work when I developed the code for my node on machine A and I execute it on machine B? Somehow the binary must be transmitted, mustn't it? What happens if machine B contains a conflicting older version of the same node? Which version will be executed? Given the case that a team of programmers develops ROS nodes on different machines and all nodes are finally executed using a launch file on another machine - let's name it the master machine - that runs the roscore: although they merge their changes using a VCS and this could theoretically be checked out and built on the master machine in advance of every execution of the launch file, is there a possibility to simply update the code on the master machine (e.g. using roscp)? (the wiki told me to include this: http://wiki.ros.org/ROS/Tutorials/MultipleMachines) Originally posted by rilke on ROS Answers with karma: 35 on 2014-03-10 Post score: 3
Is there a way to have catkin_make print all warnings? I am using Jenkins to test the builds of packages under Ubuntu 12.04 and ROS Hydro. I would like to track all the warnings from the packages and need to have those printed out. The best solution for me would be a command line argument rather than editing existing CMakeLists.txt files to add a definition for -Wall. I've found that the first time I build packages the warnings are printed but after the first build the warnings are suppressed. I have looked at the output of catkin_make --help and none of the --cmake-args or --make-args seem to do what I want. Originally posted by Thomas D on ROS Answers with karma: 4347 on 2014-03-10 Post score: 2
How do I publish roll/pitch data for gmapping? Thank you for your advice. Originally posted by Yuichi Chu on ROS Answers with karma: 148 on 2014-03-10 Post score: 0 Original comments Comment by dornhege on 2014-03-10: Do you want gmapping to use pitch/roll, or do you want to get pitch/roll estimates out of gmapping? In either case: gmapping is set up to be a 2D algorithm. Comment by Yuichi Chu on 2014-03-11: I use gmapping to create a 2D map, but the ground is not flat. So I think maybe I need to publish roll and pitch and have gmapping use the data. I am confused about whether I should use a node to publish roll/pitch in some message format, or whether I only need to add pitch/roll to the TF frames. Comment by Yuichi Chu on 2014-03-11: In other words, how can I let gmapping know that there is roll/pitch data to use? Thanks dornhege for your attention. Could you give me some advice?
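For what it's worth, gmapping itself only estimates x, y and yaw, so roll/pitch is normally made available to the system through the TF tree rather than through a dedicated message. A hedged sketch of such a broadcaster, assuming an IMU topic supplies the attitude and that a base_footprint -> base_link link is the right place for the tilt in your TF layout (both names are assumptions to adapt):

    #include <ros/ros.h>
    #include <sensor_msgs/Imu.h>
    #include <tf/transform_broadcaster.h>
    #include <tf/transform_datatypes.h>

    // Re-publish the IMU attitude as a TF transform so that anything projecting
    // the laser through TF sees the roll/pitch of the robot.
    void imuCallback(const sensor_msgs::Imu::ConstPtr& imu)
    {
      static tf::TransformBroadcaster br;

      double roll, pitch, yaw;
      tf::Quaternion q;
      tf::quaternionMsgToTF(imu->orientation, q);
      tf::Matrix3x3(q).getRPY(roll, pitch, yaw);

      // Keep only roll and pitch here; yaw is left to odometry/gmapping.
      tf::Transform t;
      t.setOrigin(tf::Vector3(0.0, 0.0, 0.0));
      tf::Quaternion tilt;
      tilt.setRPY(roll, pitch, 0.0);
      t.setRotation(tilt);

      br.sendTransform(tf::StampedTransform(t, imu->header.stamp,
                                            "base_footprint", "base_link"));
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "roll_pitch_tf_broadcaster");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("imu/data", 10, imuCallback);
      ros::spin();
      return 0;
    }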
Hi, I want to use dynamic_reconfigure in one of my nodes, however, one of my parameters is actually a variably-sized array of doubles. Can dynamic_reconfigure support variably-sized arrays, or does it only handle name-value pairs and structures? Would a service with custom message type be a better solution for setting the array? Thank you for your help. Originally posted by trianta2 on ROS Answers with karma: 293 on 2014-03-10 Post score: 2
Hi everyone. I'm following this guide (youbot ros-hydro-wrapper-for-kuka-youbot) trying to get the youBot by KUKA working, and there's an error on the last "$ catkin_make". $ cd ~/catkin_ws/src $ git clone github.com /youbot/youbot_driver_ros_interface.git -b hydro-devel $ cd .. $ catkin_make $ sudo setcap cap_net_raw+ep devel/lib/youbot_driver_ros_interface/youbot_driver_ros_interface It's my first time with Ubuntu and ROS, so I don't really know what went wrong. CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a package configuration file provided by "brics_actuator" with any of the following names: brics_actuatorConfig.cmake brics_actuator-config.cmake Add the installation prefix of "brics_actuator" to CMAKE_PREFIX_PATH or set "brics_actuator_DIR" to a directory containing one of the above files. If "brics_actuator" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): youbot_driver_ros_interface/CMakeLists.txt:5 (find_package) -- Configuring incomplete, errors occurred! make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed Originally posted by runout on ROS Answers with karma: 3 on 2014-03-10 Post score: 0
It seems that my turtlebot is not working right with amcl. I don't know what it is. Rviz shows my map, and turtlebot, and even what I think is feedback from the laser. (There are colorful clouds of light around what I suspect is the solid objects in the lasers path). I can also use my '/initalpose' publishing node to publish a starting position for the robot. This seems to work. I can watch rviz and see the robot move to a new position (...if I publish to /initialpose repeatedly). I cannot get any response when I send the robot goal coordinates. I have tried '/move_base_simple/goal' and I have written code that sends coordinates to '/move_base/goal', that code using the Move Base Action Subscribed Topic. For me nothing works. My thoughts are that maybe the robot isn't localizing for the first time when I tell it where it is. Maybe there's a setting in amcl that adjusts the level to which the room map must match the laser scan in order for the initial localisation to take. Could this be? Does anyone have any clues about this?? EDIT: Here's more info. this is my rapp interface file. Maybe I've messed that up. publishers: - scan - /tf - tf_changes - /tf_changes - map - amcl_pose - move_base/TrajectoryPlannerROS/global_plan subscribers: - mobile_base/commands/velocity - initialpose - /initialpose - move_base_simple/goal - /move_base_simple/goal - move_base/goal - /move_base/goal - /move_base/goal/goal services: - save_map - rename_map - delete_map - publish_map - list_maps I can also post rapp files and launch files. Looking at the interface file, I don't know when to include a leading '/' slash when identifying topics. Originally posted by david.c.liebman on ROS Answers with karma: 125 on 2014-03-10 Post score: 0 Original comments Comment by jihoonl on 2014-03-10: Can you post the error message from the node? Comment by david.c.liebman on 2014-03-11: there is no error message on the laptop, and there's none on the computer on the robot, at least not a visible one. I looked in the .ros/log/ folder and under the topic 'app_manager-application-amcl' there was nothing. Is there somewhere else I should look? Comment by Ken_in_JAPAN on 2014-04-03: Your problem seems to be same as one I have experienced. In my case, the problem is to forget to write a IP_Address of workstation in /etc/hosts on my turtlebot2 PC. So, I could move my turtlebot2 with Rviz. That means turtlebot2 couldn't receive goal message without IP_Address of workstation. Comment by jihoonl on 2014-04-03: can you start app manager with --screen option? it should log out app manager output via terminal.
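One way to narrow down where the goal is getting lost is to send it through the actionlib client instead of publishing to /move_base/goal by hand, since waitForServer() and the returned goal state make failures visible. A sketch along the lines of the standard navigation tutorial; the map frame and the coordinates are placeholders:

    #include <ros/ros.h>
    #include <move_base_msgs/MoveBaseAction.h>
    #include <actionlib/client/simple_action_client.h>

    typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "send_goal");

      // "move_base" is the usual action namespace; if the server never comes up,
      // this loop keeps printing, which is itself a useful diagnostic.
      MoveBaseClient ac("move_base", true);
      while (!ac.waitForServer(ros::Duration(5.0)))
        ROS_INFO("Waiting for the move_base action server to come up");

      move_base_msgs::MoveBaseGoal goal;
      goal.target_pose.header.frame_id = "map";
      goal.target_pose.header.stamp = ros::Time::now();
      goal.target_pose.pose.position.x = 1.0;
      goal.target_pose.pose.orientation.w = 1.0;  // valid (identity) orientation

      ac.sendGoal(goal);
      ac.waitForResult();
      if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("The base reached the goal");
      else
        ROS_WARN("The base failed to reach the goal: %s", ac.getState().toString().c_str());
      return 0;
    }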
This question is related to http://answers.ros.org/question/64702/overlaying-multiple-catkin-devel-spaces-at-the-same-time/ but it seemed better to ask separately than to hijack that thread. I'm using catkin as a build tool from source. Basically, I want to have a base workspace that just contains catkin so that I can chain workspaces from there by using the setup.bash file. However, every catkin package (including catkin itself) depends on catkin_pkg. Since we are using catkin_pkg from source as well, I was hoping to be able to manually point to the catkin_pkg python files while configuring the catkin workspace, and have those files available in chained workspaces as well. I tried the following to "build" catkin: export PYTHONPATH=/path_to_custom_install_dir_for/catkin_pkg/src:$PYTHONPATH cmake .. (with appropriate build and install dirs set) make But, when I source the setup.bash file that is created, the PYTHONPATH only has catkin, not my custom path to catkin_pkg. Am I misunderstanding what about "the environment in which it was created" is pulled in when sourcing the setup file? Or, is there a smarter way to set up catkin + catkin_pkg from source in user-space? (I don't want to install it via apt-get or pip). EDIT based on comments below: Say I have workspaces A, B, C. Now I: add catkin_pkg to my PYTHONPATH source ws_A/devel/setup.bash (PYTHONPATH now has both catkin_pkg and the ws_A stuff) build ws_B. In a new terminal, I source ws_B/devel/setup.bash, and then try to build ws_C. Should I expect to see catkin_pkg on the PYTHONPATH in the new terminal? I think that's what I'm doing now, but the catkin_pkg that was on the PYTHONPATH when building B doesn't get passed along through the setup.bash. It's fine if the answer is "that doesn't work", I just want to be sure before I pick a different design. Originally posted by aleeper on ROS Answers with karma: 573 on 2014-03-10 Post score: 0 Original comments Comment by William on 2014-03-10: That is the case, the PYTHONPATH will not be preserved by the setup file. The setup file will add any underlaid workspaces in your environment to your PYTHONPATH, but if catkin_pkg is not installed into one of these "underlays" then it will not get set by the generated setup files.
Hi everyone! I'm in the middle of building Hydro from Source on an ARM board. I got to building the 'tf2_bullet' catkin package and ran into the following error: ==> Processing catkin package: 'tf2_bullet' ==> Creating build directory: 'build_isolated/tf2_bullet' ==> Building with env: '/opt/ros/hydro/env.sh' ==> cmake /opt/ros/hydro/ros_catkin_ws/src/geometry_experimental/tf2_bullet -DCATKIN_DEVEL_PREFIX=/opt/ros/hydro/ros_catkin_ws/devel_isolated/tf2_bullet -DCMAKE_INSTALL_PREFIX=/opt/ros/hydro in '/opt/ros/hydro/ros_catkin_ws/build_isolated/tf2_bullet' -- The C compiler identification is GNU -- The CXX compiler identification is GNU -- Check for working C compiler: /usr/bin/gcc -- Check for working C compiler: /usr/bin/gcc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- checking for module 'bullet' -- package 'bullet' not found CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:266 (message): A required package was not found Call Stack (most recent call first): /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:320 (_pkg_check_modules_internal) CMakeLists.txt:7 (pkg_check_modules) -- Using CATKIN_DEVEL_PREFIX: /opt/ros/hydro/ros_catkin_ws/devel_isolated/tf2_bullet -- Using CMAKE_PREFIX_PATH: /opt/ros/hydro -- This workspace overlays: /opt/ros/hydro -- Found PythonInterp: /usr/bin/python (found version "2.7.3") -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Python version: 2.7 -- Using Debian Python package layout -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /opt/ros/hydro/ros_catkin_ws/build_isolated/tf2_bullet/test_results -- Looking for include files CMAKE_HAVE_PTHREAD_H -- Looking for include files CMAKE_HAVE_PTHREAD_H - found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found gtest sources under '/usr/src/gtest': gtests will be built -- catkin 0.5.86 -- Using these message generators: gencpp;genlisp;genpy -- Configuring incomplete, errors occurred! <== Failed to process package 'tf2_bullet': Command '/opt/ros/hydro/env.sh cmake /opt/ros/hydro/ros_catkin_ws/src/geometry_experimental/tf2_bullet -DCATKIN_DEVEL_PREFIX=/opt/ros/hydro/ros_catkin_ws/devel_isolated/tf2_bullet -DCMAKE_INSTALL_PREFIX=/opt/ros/hydro' returned non-zero exit status 1 Reproduce this error by running: ==> cd /opt/ros/hydro/ros_catkin_ws/build_isolated/tf2_bullet && /opt/ros/hydro/env.sh cmake /opt/ros/hydro/ros_catkin_ws/src/geometry_experimental/tf2_bullet -DCATKIN_DEVEL_PREFIX=/opt/ros/hydro/ros_catkin_ws/devel_isolated/tf2_bullet -DCMAKE_INSTALL_PREFIX=/opt/ros/hydro Command failed, exiting. Anyone know a solution to this? Can I install 'bullet' from source somehow? Thanks! Originally posted by AndrewLawson on ROS Answers with karma: 11 on 2014-03-10 Post score: 0
I 'm using ROS and I'm trying to follow the tutorials on how to export image and video data from a bag file.I keep having this error When I execute roslaunch export.launch I got this error > [FATAL] [1394490530.406789364]: Error opening file: play [rosbag-1] process has died [pid 14368, exit code 1, cmd /opt/ros/hydro/lib/rosbag/play play -d 2 /home/khmarehman/test.bag __name:=rosbag __log:=/home/khmarehman/.ros/log/7b66fb3a-a896-11e3-b8eb-f4b7e2c88a9b/rosbag-1.log]. log file: /home/khmarehman/.ros/log/7b66fb3a-a896-11e3-b8eb-f4b7e2c88a9b/rosbag-1*.log My code <launch> <node pkg="rosbag" type="play" name="rosbag" args="play -d 2 /home/khmarehman/test.bag"/> <node name="extract" pkg="image_view" type="extract_images" respawn="false" output="screen" cwd="node"> <remap from="image" to="/camera/image_raw"/> </node> </launch> UPDATE: I change my code to this <launch> <node pkg="rosbag" type="play" name="rosbag" args="-d 2 /home/khmarehman/test.bag"/> <node name="extract" pkg="image_view" type="extract_images" respawn="false" output="screen" cwd="ROS_HOME"> <remap from="image" to="/camera/image_raw"/> </node> </launch> Now it got stuck at core service [/rosout] found process[rosbag-1]: started with pid [15230] process[extract-2]: started with pid [15242] [ INFO] [1394492914.265667303]: Initialized sec per frame to 0.100000 UPDATE: I change the code <launch> <node pkg="rosbag" type="play" name="rosbag" args="play -d 2 /home/khmarehman/test.bag"> <node name="extract" pkg="image_view" type="extract_images" respawn="false" output="screen" cwd="node"> <remap from="image" to="/front/camera/image_rect_color/compressed"> </remap> </node> </node> </launch> still having this error WARNING: WARN: unrecognized 'node' tag in <node> tag. Node xml is <node args="play -d 2 /home/khmarehman/test.bag" name="rosbag" pkg="rosbag" type="play"> <node cwd="node" name="extract" output="screen" pkg="image_view" respawn="false" type="extract_images"> <remap from="image" to="/front/camera/image_rect_color/compressed"> </remap></node></node> [FATAL] [1394533222.650188690]: Error opening file: play [rosbag-1] process has died [pid 20032, exit code 1, cmd /opt/ros/hydro/lib/rosbag/play play -d 2 /home/khmarehman/test.bag __name:=rosbag __log:=/home/khmarehman/.ros/log/2b6a1e2e-a903-11e3-aca7-f4b7e2c88a9b/rosbag-1.log]. log file: /home/khmarehman/.ros/log/2b6a1e2e-a903-11e3-aca7-f4b7e2c88a9b/rosbag-1*.log BAG info path: test.bag version: 2.0 duration: 2:10s (130s) start: Feb 03 2014 07:21:42.82 (1391394102.82) end: Feb 03 2014 07:23:53.55 (1391394233.55) size: 47.3 MB messages: 1056 compression: none [61/61 chunks] types: sensor_msgs/CompressedImage [8f7a12909da2c9d3332d540a0977563f] topics: /front_camera/camera/image_rect_color/compressed 1056 msgs : sensor_msgs/CompressedImage Originally posted by Mechatronics on ROS Answers with karma: 11 on 2014-03-10 Post score: 1 Original comments Comment by ahendrix on 2014-03-10: Is there anything in the logs that it mentions? Have you tried running the same command by hand? Comment by Mechatronics on 2014-03-10: Sorry, I couldn't understand, what do you mean by hand? Comment by Mechatronics on 2014-03-10: My updated code worked, But I couldn't find images, they are not in .ros Comment by ahendrix on 2014-03-10: Rather than running those commands as part of a launch file, try running each command via rosrun and examining the output. 
Comment by Mechatronics on 2014-03-10: I am following this tutorial http://wiki.ros.org/rosbag/Tutorials/Exporting%20image%20and%20video%20data Comment by ahendrix on 2014-03-10: I suspect you're not seeing any images because there aren't any images in your bag file on the topic you've specified. You should be able to see which topics are in your bag file, what their types are, and how many messages were recorded on each topic by running rosbag info on your bag file. Comment by Mechatronics on 2014-03-11: I have edited and posted bag info
Hi all, I am fairly new to ROS and Gazebo and I apologize if this question seems easy in advance. So, I basically want to implement a path planning algorithm by simulating a scenario in "ROS-Gazebo" and I am using "ROS Hydro-Gazebo 1.9" distributions. My algorithm assumes that the robot is omni-directional i.e., it can move in both x and y coordinates, independently. As I have found out, "pr2" and "care-o-bot" are two robots that offer omni-directional motion. The only feature that I need from them is the omni-directional wheels and other features such as laser, arm, etc. are not needed. I would also need to get the odometry data. Problem: I am not aware of any control plugin or library to control and move the "pr2" in "ROS Hydro-Gazebo 1.9". I am able to bring up the "pr2" in "Gazebo" using "roslaunch" command. But I cannot control it and move it around. To give an idea of what I want, in another case I am able to control a pioneer2dx using a differential drive controller plugin (libgazebo_ros_diff_drive.so). I wonder if such a thing exists for "pr2". If not, is there any other solutions to my problem? I would greatly appreciate any helps towards finding a solution for my problem. Thanks, Ali Originally posted by alimohandes on ROS Answers with karma: 1 on 2014-03-10 Post score: 0
Hello all, I followed the Stage tutorial for simulating one robot, but in the stage stack there is no such file as stage.rviz; there is only a stage.vcg, which according to the error prompted is the old file extension. Where can I find stage.rviz? I can't seem to find it anywhere. Second, I have a URDF file for my robot. How can I load it in Stage using Rviz? I know that Rviz will use it if I launch joint_state_publisher and the robot_state_publisher node with my URDF as argument. But will it work if I use this command?

roscd stage
rosrun rviz rviz -d `rospack find stage_ros`/rviz/stage.rviz

I would try it myself, but I just can't find a stage.rviz to load and try for now. I don't quite see how to combine everything here. What I want to achieve is to use a Stage world for the simulation in Rviz, as I want to design a path-finding robot; I just want to give it some sort of "world to play with". Thanks a lot. Originally posted by Maya on ROS Answers with karma: 1172 on 2014-03-10 Post score: 0 Original comments Comment by Jackel Fox on 2014-06-22: Did you figure this out?
Hello to all! A few months ago, I connected an Arduino to ROS fuerte using rosserial without problems. However, now I'm trying to reproduce it in ROS groovy and I always get the same error. I am working with Ubuntu 12.04, ROS Groovy, Arduino 1.0.5, rosserial-groovy-devel and both an Arduino Uno and an Arduino Mega. I run these commands:

Window 1:
sudo ./Arduino/arduino
(open the Hello World example, select the controller Arduino Uno/Mega, select serial port /dev/ttyACM0, load the code)

Window 2:
roscore

Window 3:
sudo chmod a+rw /dev/ttyACM0
rosrun rosserial_python serial_node.py /dev/ttyACM0

And I get this error:

[INFO] [WallTime: 1394533587.406466] ROS Serial Python Node
[INFO] [WallTime: 1394533587.419060] Connecting to /dev/ttyACM0 at 57600 baud
[ERROR] [WallTime: 1394533590.332723] Creation of publisher failed: unpack requires a string argument of length 4
[ERROR] [WallTime: 1394533591.328157] Tried to publish before configured, topic id 125

Can anyone help me? Thanks in advance! Originally posted by Jota on ROS Answers with karma: 13 on 2014-03-11 Post score: 0
REP 103 defines the standard for rotation representation as quaternions, but the covariance matrix associated to it is given in terms of fixed axis rotations. In my work, I use quaternions directly for attitude estimation and obtain covariances in terms of the quaternion parameters. How can I convert these covariances to the convention used by ROS, so they can be published appropriately? PS: I have searched ROS Answers and found a few discussions briefly addressing this topic, but usually referring to external documents or additional packages. I think it would be useful to have a single definitive reference for this here, so that implementations don't depend on the reader's interpretation of what is expected. Originally posted by georgebrindeiro on ROS Answers with karma: 1264 on 2014-03-11 Post score: 3 Original comments Comment by georgebrindeiro on 2014-04-27: @tfoote @William, any chance you guys could help me out with this one?
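For reference, one first-order way to do this conversion is to propagate the quaternion-parameter covariance through the quaternion-to-fixed-axis-RPY map with a Jacobian, Sigma_rpy ~= J * Sigma_q * J^T. The sketch below is not an official ROS utility: the function name, the assumed (qx, qy, qz, qw) ordering of the input covariance, and the numerical Jacobian are all choices made here for illustration.

#include <tf/transform_datatypes.h>
#include <Eigen/Dense>

// Sigma_rpy ~= J * Sigma_q * J^T, with J = d(roll,pitch,yaw)/d(qx,qy,qz,qw)
// evaluated numerically at the mean quaternion. Valid only to first order and
// away from the pitch = +-pi/2 singularity of the fixed-axis representation.
Eigen::Matrix3d quatCovToFixedAxisCov(const tf::Quaternion& q_mean,
                                      const Eigen::Matrix4d& cov_q)  // order: qx, qy, qz, qw
{
  const double eps = 1e-6;
  const double q[4] = {q_mean.x(), q_mean.y(), q_mean.z(), q_mean.w()};

  double rpy0[3];
  tf::Matrix3x3(q_mean).getRPY(rpy0[0], rpy0[1], rpy0[2]);

  Eigen::Matrix<double, 3, 4> J;
  for (int i = 0; i < 4; ++i)
  {
    double qp[4] = {q[0], q[1], q[2], q[3]};
    qp[i] += eps;                                   // perturb one quaternion parameter
    double rpy[3];
    tf::Matrix3x3(tf::Quaternion(qp[0], qp[1], qp[2], qp[3])).getRPY(rpy[0], rpy[1], rpy[2]);
    for (int r = 0; r < 3; ++r)
      J(r, i) = (rpy[r] - rpy0[r]) / eps;           // finite-difference column of the Jacobian
  }
  return J * cov_q * J.transpose();                 // first-order covariance propagation
}

The resulting 3x3 block would go into the rotational part of the 6x6 pose covariance; an analytical Jacobian would be cleaner, but the numerical one is easier to verify against a specific filter implementation.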
Hi, I am using ROS hydro on Ubuntu 12.04. After using the command "rosrun image_view extract_images _sec_per_frame:=0.01 image:={IMAGE_TOPIC_IN_BAGFILE}", I get an error when executing "ffmpeg -r -b -i frame%04d.jpg .avi". The thing is, I am not seeing the sequence of images after I play the bag file. I ran "rosbag info" to obtain the topic in the bag file, but when I run "rosrun rqt_graph rqt_graph" the /extract_image node is there all by itself. When I follow the tutorial at http://wiki.ros.org/rosbag/Tutorials/Exporting%20image%20and%20video%20data I get the error "[FATAL] [1394583654.335465840]: Error opening file: play". After replacing "play" with "--clock", or just removing "play", everything seemed to go just fine, but there are no frames*.jpg in /home/.ros. I feel like I'm missing something. Can anyone help? Originally posted by Leyonce on ROS Answers with karma: 97 on 2014-03-11 Post score: 1
Hi all, I am simulating gmapping in Gazebo with my Turtlebot. I have attached a Hokuyo laser to my (virtual) turtlebot and want to use this for gmapping instead of the virtual kinect laser. The problem I have is that the virtual Hokuyo returns measurements at its maximum range. E.g. it would return a circle of measurement points if it were placed in an empty world. Gmapping uses these measurements as if obstacles are detected there. This completely messes up the scan matching process, as it tries to match everything with the non-existent circular wall. If I use the Hokuyo on my real robot, it only returns true measurements, i.e. if it cannot detect something at a certain angle, it will not return any data point (as if the distance were infinite). How should I adapt my virtual Hokuyo so that it does not return data points when it has not detected anything within its max range? Or alternatively, is there any way to adapt gmapping to cope with such measurements (which is less elegant, I think, as the difference between simulation and real experiments persists in other fields of measurement usage)? Please see the screenshots attached: rviz clearly shows the 'circular non-existent' walls. I have only turned around in circles for this screenshot. If I also moved around with the robot, the scan-matching trouble would become clear. I also have these issues (but to a lesser extent) when using it in a Gazebo indoor building world (willowgarage world). Gazebo world (with laser visualized): rviz result: Any help would be very much appreciated. Regards, Koen UPDATE: A slight addition to my comment on dornhege's answer: the maxRange can probably better be omitted, as skipped/missing measurements lead to the conclusion that certain space is empty, although the robot cannot actually see this. See for example the explored space outside of the room in the screenshot... Clearly, the robot couldn't know this: it is based on the assumption that no measurement equals free space. Does anybody know a nice way to still use this 'space is free if nothing is measured' feature, without running into this glitch (i.e. make it robust against skipped/missing/erroneous measurements)? Originally posted by koenlek on ROS Answers with karma: 432 on 2014-03-11 Post score: 0
I'm trying to contribute a package to ROS. This is the error that jenkins.ros.org gave back: /usr/bin/ld: cannot find -llapack collect2: ld returned 1 exit status I'm assuming that this is a dependency that needs to be added with a rosdep rule in the manifest. So, I've added the following line to package.xml: Is this correct? Originally posted by atp on ROS Answers with karma: 529 on 2014-03-11 Post score: 0
Hi, I keep getting this error when running catkin_make /opt/ros/hydro/share/catkin/cmake/em/order_packages.cmake.em:23: error: <type 'exceptions.AttributeError'>: 'str' object has no attribute 'name' Traceback (most recent call last): File "/usr/bin/empy", line 3288, in <module> if __name__ == '__main__': main() File "/usr/bin/empy", line 3286, in main invoke(sys.argv[1:]) File "/usr/bin/empy", line 3269, in invoke interpreter.wrap(interpreter.file, (file, name)) File "/usr/bin/empy", line 2273, in wrap self.fail(e) File "/usr/bin/empy", line 2264, in wrap apply(callable, args) File "/usr/bin/empy", line 2337, in file self.safe(scanner, done, locals) File "/usr/bin/empy", line 2379, in safe self.parse(scanner, locals) File "/usr/bin/empy", line 2399, in parse token.run(self, locals) File "/usr/bin/empy", line 1410, in run interpreter.execute(self.code, locals) File "/usr/bin/empy", line 2576, in execute exec statements in self.globals File "<string>", line 17, in <module> File "/usr/lib/pymodules/python2.7/catkin_pkg/topological_order.py", line 109, in topological_order return topological_order_packages(packages, whitelisted=whitelisted, blacklisted=blacklisted, underlay_packages=dict(underlay_packages.values())) File "/usr/lib/pymodules/python2.7/catkin_pkg/topological_order.py", line 154, in topological_order_packages return [(path, package) for path, package in tuples if package.name not in underlay_decorators_by_name] AttributeError: 'str' object has no attribute 'name' CMake Error at /opt/ros/hydro/share/catkin/cmake/safe_execute_process.cmake:11 (message): execute_process(/home/incubed/migrate_ws/build/catkin_generated/env_cached.sh "/usr/bin/empy" "--raw-errors" "-F" "/home/incubed/migrate_ws/build/catkin_generated/order_packages.py" "-o" "/home/incubed/migrate_ws/build/catkin_generated/order_packages.cmake" "/opt/ros/hydro/share/catkin/cmake/em/order_packages.cmake.em") returned error code 1 Call Stack (most recent call first): /opt/ros/hydro/share/catkin/cmake/em_expand.cmake:23 (safe_execute_process) /opt/ros/hydro/share/catkin/cmake/catkin_workspace.cmake:28 (em_expand) CMakeLists.txt:63 (catkin_workspace) I'm on Ubuntu 12.04 and ROS Hydro. Any ideas how I could fix this? Originally posted by Mate Wolfram on ROS Answers with karma: 76 on 2014-03-11 Post score: 5
Hi all, I wonder in which frame the interactive_marker.pose field is expressed? It seems relative to interactive_marker.header.frame_id, but even considering that hypothesis, I get a strange behaviour. Let me explain... I'm writing an application where several parts of furniture (leg, back of a chair...) are represented in RViz as interactive markers. Each part is relative either to the world or to another part, and all of them publish their transform with a TransformBroadcaster. view_frames gives: view_frames result http://cjoint.com/14ma/DClneaiqh6K_view_frames.png It seems that InteractiveMarker.pose designates the coordinates of the IM with respect to the frame InteractiveMarker.header.frame_id, so when I set a new frame_id for an existing object I recompute the InteractiveMarker.pose with respect to the new frame_id, so that the pose doesn't move in the world frame. This works well: the new coordinates are computed with respect to the new frame thanks to a tfListener and they are correct. When I move the parent frame with RViz, the child frame also moves because the pose of the child doesn't change with respect to the parent, and this is exactly what I want. Except that, when I click on the child object, it is suddenly teleported elsewhere. I have disabled all callbacks for the Interactive Marker, so the server does that on its own, but I don't know why. The transformation applied to the object is not random: it is exactly the transformation from its parent to the world. Here is an example. I start with the interactive marker "leg1" at x=0 y=0 z=5 with respect to /world. I set it relative to /sitting, which is at 0 0 -1 with respect to /world. The coordinates of "leg1" with respect to the world are computed by a tfListener and are 0 0 6, so I give these coordinates to leg1.pose.position = (0, 0, 6). Then when I click on "leg1" without moving it, it is teleported to 0 0 5 relative to /sitting, and if I click again 0 0 4, 0 0 3, and so on.

[ INFO] [1394553885.833971573]: leg1 is at 0 0 5 relative to /world
[ INFO] [1394553885.834646868]: [callback] leg1 set relative to sitting which is at 0 0 -1 relative to /world
[ INFO] [1394553885.848262357]: leg1 0 0 6 relative to /sitting
[ INFO] [1394553885.845737564]: [callback] leg1 is clicked (but not moved and no code is executed)
[ INFO] [1394553886.719846935]: leg1 0.10338 0.06199 6.96599 relative to /sitting (so almost 0 0 7)

It sounds like I forgot to set a "frame_id" somewhere to prevent the server from applying this automatic pose transformation. But I don't know where, because if I server->get("leg1") the frame_id is actually set to "sitting". I have been tearing my hair out over this issue for one week, but it's still unsolved and I don't know what to do about it :( I hope my explanations are understandable. All clues are welcome, many thanks in advance for your help. Originally posted by courrier on ROS Answers with karma: 454 on 2014-03-11 Post score: 0
Hi I am following this tutorial to use numpy with rospy http://wiki.ros.org/rospy_tutorials/Tutorials/numpy when I am running rosrun numpy_tutorial numpy_listener.py I am getting the following error terminate called after throwing an instance of 'rospack::Exception' what(): error parsing manifest of package numpy_tutorial at /home/tanmay/catkin_ws/numpy_tutorial/package.xml any suggestions what I might be doing incorrectly? Thanks, TM Originally posted by nicobari on ROS Answers with karma: 86 on 2014-03-11 Post score: 0 Original comments Comment by s1 on 2015-09-22: Im getting an error: [rospack] Error: package 'numpy_tutorial' not found
I'm in the process of running catkin_make, but right before make is successful, I am receiving the following error with a gradle plugin: :gradle_plugins:generatePomFileForMavenJavaPublication :gradle_plugins:compileJava UP-TO-DATE :gradle_plugins:compileGroovy UP-TO-DATE :gradle_plugins:processResources UP-TO-DATE :gradle_plugins:classes UP-TO-DATE :gradle_plugins:jar UP-TO-DATE :gradle_plugins:publishMavenJavaPublicationToMavenRepository Uploading: org/ros/rosjava_bootstrap/gradle_plugins/0.1.18/gradle_plugins-0.1.18.jar to repository remote at file:/opt /ros/hydro/share/maven/ Transferring 91K from remote /opt/ros/hydro/share/maven/org/ros/rosjava_bootstrap/gradle_plugins/0.1.18/gradle_plugins-0.1.18.jar (Permission denied) :gradle_plugins:publishMavenJavaPublicationToMavenRepository FAILED FAILURE: Build failed with an exception. What went wrong: Execution failed for task ':gradle_plugins:publishMavenJavaPublicationToMavenRepository'. Failed to publish publication 'mavenJava' to repository 'maven' Error deploying artifact 'org.ros.rosjava_bootstrap:gradle_plugins:jar': Error deploying artifact: PUT request to: org/ros /rosjava_bootstrap/gradle_plugins/0.1.18/gradle_plugins-0.1.18.jar in remote failed Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. BUILD FAILED Total time: 2.368 secs make[2]: *** [rosjava_bootstrap/CMakeFiles/gradle-rosjava_bootstrap] Error 1 make[1]: *** [rosjava_bootstrap/CMakeFiles/gradle-rosjava_bootstrap.dir/all] Error 2 make: *** [all] Error 2 Invoking "make" failed Since the error mentioned permission being denied to the folder, I tried changing the permissions. I even used "chmod -R 777 filename(s)" but when I used catkin_make, I still received the same error. Are there any possible workarounds or fixes for this issue? Any guidance would be greatly appreciated. Originally posted by musik on ROS Answers with karma: 19 on 2014-03-11 Post score: 0
$ sudo apt-get install ros-hydro-cram-core Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package ros-hydro-cram-core Originally posted by ratneshmadaan on ROS Answers with karma: 71 on 2014-03-11 Post score: 0
I am following the tutorial at knowrob.org/doc/reasoning_about_objects:

ratneshmadaan@ratneshmadaan-Inspiron-N5010:/$ roscd knowrob_basics_tutorial
ratneshmadaan@ratneshmadaan-Inspiron-N5010:~/rosbuild_ws/stacks/knowrob_tutorials/knowrob_basics_tutorial$

The third command gives an error:

?- owl_parse('owl/ccrl2_map_objects.owl', false, false, true).
ERROR: toplevel: Undefined procedure: owl_parse/4 (DWIM could not correct goal)

Originally posted by ratneshmadaan on ROS Answers with karma: 71 on 2014-03-11 Post score: 0
Is there a way to clean a certain package with catkin_make? Originally posted by Stopfer on ROS Answers with karma: 112 on 2014-03-11 Post score: 1
For our robot we had a Gazebo model built in SDF. The real robot has two unpowered omniwheels. In the SDF model, these omniwheels were modelled as a ball joint: a sphere with the radius of the omniwheel and two axes of rotation around the X and the Y axis. This worked perfectly and allowed the robot to move freely in simulation as it does on the real robot. Since we are now trying to integrate with MoveIt, the need has arisen to use URDF instead of SDF. However, the revolute2 joint type is not supported by URDF, for reasons unclear to me. The alternative that is usually suggested is to use two chained revolute joints connected by a virtual link. First of all, the virtual link needs physical properties: if it does not have an <inertial> element, it will just be completely omitted by gzsdf. What physical properties can I attribute to a non-existent virtual link? Then, if I just make up some value for mass and set up the joint to have 2 chained axes of rotation, it kind-of works. But not completely, and not equal to how the revolute2 joint behaved. The problem is that whenever a movement is initiated in a direction that does not align with one of the two chained axes of the omniwheels, it will behave like a caster wheel: it will move sideways a bit until it is in the appropriate orientation. This means unpredictable movement that does not occur on the real robot. This also did not occur when using the revolute2 joint types in Gazebo. So now I'm stuck. I really do not feel like maintaining an SDF and a URDF version of the model. However, URDF seems to have discarded revolute2 as a valid joint type (even though it is directly supported by the SDF format to which it converts; all that matters here is avoiding the friction or imbalance caused by fixing or leaving out the omniwheels). One possible, but very complex, method would be to make an accurate model of the omniwheels and add a separate joint for each roller. However, this would result in an additional 40 joints (2 2-layered wheels with 10 rollers) and I doubt this will be beneficial for the performance of Gazebo. Any insights on how to fix this would be greatly appreciated. Originally posted by MadEgg on ROS Answers with karma: 43 on 2014-03-11 Post score: 0
Hello everyone, I'm using ROS with a non-Ubuntu linux distribution and hit unexpected bugs quite regularly. This is my newest discovery and it would be great to see this fixed one way or the other: rosbuild automatically links all libraries of modules listed in manifest.xml to rosbuild targets. This is obviously bloated, but it's fine in ubuntu, because they decided to make their gcc linker default to --as-needed instead of the official --no-as-needed and only reachable libraries are actually linked. Normally, this is not even a severe bug, but only adds more unwanted links.. However, gazebo decided to add a /usr/lib/gazebo-X.Y/plugins/ folder for some of their libs and this folder is only added to LD_LIBRARY_PATH when gazebo is actually used. In my case, we have a launch-file-only dependency to gazebo. This adds a lot of gazebo libs to all binaries of the module, but doesn't add the gazebo environment at build time. Without --as-needed this produces a linking error because the gazebo libs require libs in gazebo's plugin folder. Long story short(er): It seems like at least rosbuild assumes that gcc uses --as-needed which is a non-official default. Although I don't think these severe linking errors would appear with catkin, I'd say catkin assumes this as well (and it does make sense to do that!). Otherwise people should probably be more careful when using the catkin_LIBRARIES variable. Is it possible to officially add this LDFLAG to the standard cmake framework for rosbuild/catkin? -- EDIT -- I ask this question here instead of talking to a ROS package manager directly, because I got absolutely no idea in which package this would have to be added. Any ideas? Originally posted by v4hn on ROS Answers with karma: 2950 on 2014-03-11 Post score: 2 Original comments Comment by ahendrix on 2014-03-11: I'm confused; are you using rosbuild or catkin? Comment by v4hn on 2014-03-11: The module which produces the gazebo error I explained is based on rosbuild. But the underlying problem of heavily overlinking things built within ROS also applies to catkin. There, one uses catkin_LIBRARIES, which usually gives you more than you need for a single target in a larger module.
Dear ROS Supporters, Is there developer documentation for TCPROS connection header probe attribute for discovery? Thanks, Aaron I believe probe was used by rossrv for introspecting service type information that is not available via the XMLRPC API. This made me wonder what else is not documented. I've been working on a Java client for over a year, and this sort of information would of been like gold. My client did discovery of service types via an auto-generated shell script that is not platform independent, and required the user to manually run the script, then import the discovered data to the client. This data trace was performed in February 2014 on the latest version of ROS. The changelist at the following URL mentioned the probe... http://wiki.ros.org/ROS/ChangeList/1.2 The following is the Hex of a Wireshark data trace: 410000000e000000736572766963653d2f737061776e0700000070726f62653d311400000063616c6c657269643d2f726f7373657276696365080000006d643573756d3d2a The following is printable text of the above hex string: service=/spawn probe=1 callerid=/rosservice md5sum=* Originally posted by unknown_entity1 on ROS Answers with karma: 104 on 2014-03-11 Post score: 1
Hi! I'm trying to to use threading in RosAria.cpp, but I need to compile the cpp file using: "g++ thread.cpp -pthread -std=c++11 -Wl,--no-as-needed". How can I force rosmake to include the parameter "-pthread -std=c++11 -Wl,--no-as-needed" when compiling? I need to thread this way because I'm using cin to ask for user input, and the other thread needs to check sensor inputs and control the robot based on this. Cin is blocking/waiting for user input and can therefore not run on the same thread AFAIK. Originally posted by Mokona on ROS Answers with karma: 1 on 2014-03-11 Post score: 0 Original comments Comment by ReedHedges on 2014-03-14: You could also create a client node that gets the user input and perform the control, rather than modifying the rosaria driver node.
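Not an answer to the rosmake flags themselves, but one way to sidestep the C++11 requirement entirely is boost::thread, which roscpp already links against, so no extra compile flags are needed. A minimal sketch under those assumptions; the node name and the command handling are made up for illustration:

#include <ros/ros.h>
#include <boost/thread.hpp>
#include <iostream>
#include <string>

// Reads commands from std::cin on its own thread so the control loop never blocks.
void inputLoop(std::string* last_command, boost::mutex* mtx)
{
  std::string line;
  while (ros::ok() && std::getline(std::cin, line))
  {
    boost::mutex::scoped_lock lock(*mtx);
    *last_command = line;
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "rosaria_user_input");   // hypothetical node name
  ros::NodeHandle nh;

  std::string last_command;
  boost::mutex mtx;
  boost::thread input_thread(inputLoop, &last_command, &mtx);

  ros::Rate rate(10);
  while (ros::ok())
  {
    {
      boost::mutex::scoped_lock lock(mtx);
      // ... check sensor input and act on last_command here ...
    }
    ros::spinOnce();
    rate.sleep();
  }
  input_thread.join();   // note: getline may still block until a final Enter is pressed
  return 0;
}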
Hi, I'm using turtlebot on ROS electric on ubuntu 11.04. The openni-dev package was recently updated from version 1.1.0.41~natty to 1.3.2.1-4~natty2. After this update, the openni.launch process gives the error "No device connected.. Waiting for devices to be connected". Its not able to initialize. I tried to downgrade the openni-dev version using the command: "sudo apt-get install openni-dev=1.1.0.41~natty", but it says that the version was not found. I have also tried replacing the entries in the source.list files for apt to "old-releases.ubuntu.com", but it still didn't work. Any help would be appreciated. Thanks. Originally posted by Brijendra Singh on ROS Answers with karma: 68 on 2014-03-12 Post score: 0 Original comments Comment by Athoesen on 2014-05-14: You should be aware that ROS electric is quite old and a lot of the packages deprecated
Hello all! I am studying SLAM in a real indoor environment using a Pioneer robot and a Hokuyo laser. After getting a map I can see whether the data association is correct or not, but I would like to compare the whole process analytically. Gmapping has a publisher with the entropy over the robot pose. Does anyone do anything with the entropy? I'm trying to study the values of the entropy, but it tends to stay constant. In case I don't use the entropy, is there an easy way to get values for the uncertainty of the position? Thanks in advance Regards Originally posted by Josejgarcia on ROS Answers with karma: 11 on 2014-03-12 Post score: 1
Hello all, I have 2 erratic robots in Gazebo and I want them to have approximately the same orientation. I am trying the following piece of code, but it doesn't seem to have any effect on my robot's rotation. tf::TransformListener listener; tf::StampedTransform transform; try { listener.waitForTransform("/odom", "/base_link", ros::Time(0), ros::Duration(3.0)); listener.lookupTransform("/odom", "/base_link", ros::Time(0), transform); } catch (tf::TransformException ex) { ROS_ERROR("%s", ex.what()); } tf::Quaternion rotation; rotation.setValue(robot2.orientation.x, robot2.orientation.y, robot2.orientation.z); transform.setRotation(rotation); Originally posted by SpiderRico on ROS Answers with karma: 35 on 2014-03-12 Post score: 0 Original comments Comment by jbinney on 2014-03-12: What do you mean "doesn't seem to be working"? Does it give a compilation error? Does the pose end up in the wrong place? How are you sending these transforms to gazebo? Comment by SpiderRico on 2014-03-12: Updated my question. Hope it's more clear now. Comment by atp on 2014-03-12: I think that you have to broadcast the transform again. Have a look at the "Writing a tf broadcaster" tutorial. Comment by SpiderRico on 2014-03-12: @atp How did I miss that! It helped me a lot. If you add this as an answer, I'll accept it mate. Thank you!
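Following up on the comment thread: calling setRotation() on the locally stored transform changes nothing outside the node; the modified transform has to be broadcast again (see the "Writing a tf broadcaster" tutorial). A minimal sketch under that assumption; the frame names are taken from the question, the quaternion is built from all four components (the three-argument setValue() above leaves w unset), and note that tf alone does not move a Gazebo model (that would need e.g. the /gazebo/set_model_state interface).

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <geometry_msgs/Quaternion.h>

// 'transform' is the StampedTransform obtained from lookupTransform in the question,
// 'target' would be robot2.orientation.
void broadcastAlignedTransform(tf::TransformBroadcaster& br,
                               tf::StampedTransform transform,
                               const geometry_msgs::Quaternion& target)
{
  tf::Quaternion rotation(target.x, target.y, target.z, target.w);
  rotation.normalize();
  transform.setRotation(rotation);
  transform.stamp_ = ros::Time::now();   // re-stamp before sending
  br.sendTransform(transform);           // without this, no other node sees the change
}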
Hello. I'm now working on the navigation stack in hydro. I want to use a global costmap to implement a new algorithm for path planning. So I created a node and subscribed to /move_base/global_costmap/costmap, which is of type nav_msgs::OccupancyGrid. The problem is that this node receives the costmap only once, when it comes up, and it is never updated. In rviz everything is fine. Any idea? Originally posted by noizpgt on ROS Answers with karma: 21 on 2014-03-12 Post score: 0
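A likely explanation, offered as an assumption to verify: in Hydro the costmap publisher sends the full nav_msgs/OccupancyGrid once (latched) and afterwards publishes only patches on a companion *_updates topic of type map_msgs/OccupancyGridUpdate, so a node that wants a continuously current copy has to subscribe to both and apply the patches itself. The update topic name below follows the default Costmap2DPublisher naming and should be confirmed with rostopic list. A minimal sketch:

#include <ros/ros.h>
#include <nav_msgs/OccupancyGrid.h>
#include <map_msgs/OccupancyGridUpdate.h>

nav_msgs::OccupancyGrid g_costmap;   // latest full grid plus applied patches

void mapCb(const nav_msgs::OccupancyGrid::ConstPtr& msg)
{
  g_costmap = *msg;                  // full (latched) costmap, usually received once
}

void updateCb(const map_msgs::OccupancyGridUpdate::ConstPtr& msg)
{
  if (g_costmap.data.empty())
    return;                          // full map not received yet
  // copy the patch into the stored grid
  for (unsigned int y = 0; y < msg->height; ++y)
    for (unsigned int x = 0; x < msg->width; ++x)
      g_costmap.data[(msg->y + y) * g_costmap.info.width + (msg->x + x)] =
          msg->data[y * msg->width + x];
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "costmap_listener");
  ros::NodeHandle nh;
  ros::Subscriber full = nh.subscribe("/move_base/global_costmap/costmap", 1, mapCb);
  ros::Subscriber upd  = nh.subscribe("/move_base/global_costmap/costmap_updates", 10, updateCb);  // assumed name
  ros::spin();
  return 0;
}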
I'm trying to understand the whole velodyne_driver package and I have several doubts. I configure the network, connect the Velodyne, check the data and I see everything is well connected and configured (using rviz):

roslaunch velodyne_pointcloud 32e_points.launch calibration:=/home/user/32db.yaml
rostopic echo /Velodyne_points

But if I run velodyne_node, I see it is publishing on /velodyne_packets but I cannot see the data because I get an error:

rosrun velodyne_driver velodyne_node _model:=32E
rostopic echo /velodyne_packets
Traceback (most recent call last):
  File "/opt/ros/groovy/bin/rostopic", line 35, in <module>
    rostopic.rostopicmain()
  ...
IOError: [Errno 13] Permission denied: '/home/vplaza/.ros/rosdep/sources.cache/index'

1. Is it normal not to see this data with rostopic echo? Is it because /velodyne_packets is a structure instead of integer values like /velodyne_points? Is it because /home/vplaza/.ros/ does not exist?

Some rostopic commands work fine:

rostopic bw /velodyne_packets
rostopic hz /diagnostic

But this one returns the same error as before:

rostopic hz /velodyne_packets

2. Is this error normal for /velodyne_packets?

If I try to change the speed of the Velodyne, I don't see the device changing. I have tried with 300 rpm and 10 rpm:

rosrun velodyne_driver velodyne_node _model:=32E _rpm:=300

3. What can I do to change the velocity?

Then I try to execute a velodyne test but I get an error:

rostest velodyne_driver pcap_32e_node_hertz.test
... logging to /home/vplaza/.ros/log/rostest-vplaza-PClinux-6466.log
Traceback (most recent call last):
  File "/opt/ros/groovy/bin/rostest", line 35, in <module>
    rostestmain()
  File "/opt/ros/groovy/lib/python2.7/dist-packages/rostest/__init__.py", line 268, in rostestmain
    _main()
  File "/opt/ros/groovy/lib/python2.7/dist-packages/rostest/rostest_main.py", line 150, in rostestmain
    results_file = xmlResultsFile(pkg, outname, is_rostest)
  File "/opt/ros/groovy/lib/python2.7/dist-packages/rosunit/core.py", line 102, in xml_results_file
    raise IOError("cannot create test results directory [%s]. Please check permissions."%(test_dir))
IOError: cannot create test results directory [/home/vplaza/.ros/test_results/velodyne_driver]. Please check permissions.

4. How should I run the test? Is it because /home/vplaza/.ros/ does not exist?

Also, the velodyne_driver manual says the DriverNodelet does the same thing as the velodyne_node node.

5. What's the difference between the node and the nodelet? The published topic is the same, /velodyne_packets.

6. If the nodelet is executed with roslaunch using the file nodelet_manager.launch, why does nodelet_velodyne.xml (linked in manifest.xml) point to the file lib > libdriver_nodelet.so instead of the launch file? How does it work?

Originally posted by marilia15 on ROS Answers with karma: 104 on 2014-03-12 Post score: 0
Hi, I would like to ask if anyone has got to get the PrimeSense RD1.09 RGBD camera working with ROS when connected to a USB 3.0 port under Ubuntu. Right now I am using ROS Fuerte, Ubuntu 12.04.3 (Kernel 3.8.0-31) but cannot get the camera working on the USB 3.0 port (it does work when plugged into a USB 2.0 port). The camera is correctly recognised if I do lsusb. However, it shows the following error message when executing roslaunch openni_launch openni.launch: [ INFO] [1394633879.104789250]: Number devices connected: 1 [ INFO] [1394633879.105068439]: 1. device on bus 003:02 is a PrimeSense Device (601) from PrimeSense (1d27) with serial id '' [ INFO] [1394633879.106096703]: Searching for device with index = 1 [ INFO] [1394633879.521013406]: No matching device found.... waiting for devices. Reason: openni_wrapper::OpenNIDevice::OpenNIDevice(xn::Context&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&, const xn::NodeInfo&) @ /tmp/buildd/ros-fuerte-openni-camera-1.8.6/debian/ros-fuerte-openni-camera/opt/ros/fuerte/stacks/openni_camera/src/openni_device.cpp @ 61 : creating depth generator failed. Reason: Failed to set USB interface! [ERROR] [1394633880.027343854]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressedDepth/set_parameters] [ERROR] [1394633880.047435912]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressed/set_parameters] [ERROR] [1394633880.069100524]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/theora/set_parameters] terminate called after throwing an instance of 'openni_wrapper::OpenNIException' what(): unsigned int openni_wrapper::OpenNIDriver::updateDeviceList() @ /tmp/buildd/ros-fuerte-openni-camera-1.8.6/debian/ros-fuerte-openni-camera/opt/ros/fuerte/stacks/openni_camera/src/openni_driver.cpp @ 125 : enumerating image nodes failed. Reason: One or more of the following nodes could not be enumerated: Image: PrimeSense/SensorV2/5.1.2.1: Failed to set USB interface! Any hint on what I could do? Thanks! Edit: Sorry for the post duplicates, there seemed to be an error when I created it. Originally posted by Tinrik on ROS Answers with karma: 51 on 2014-03-12 Post score: 0 Original comments Comment by jbinney on 2014-03-12: I've got almost the same setup, but using openni2_camera: https://github.com/ros-drivers/openni2_camera It works mostly like openni_camera, so try that and see if it does the right thing. Comment by Tinrik on 2014-03-12: I'm absolutely new to ROS and I only know basically how to use openni_launch, nothing else. I have also installed hydro which comes with openni2-camera. Could you please tell me the commands to use openni2_camera? Thanks :)
?- visualisation_canvas(_).
lights() is not available with this renderer.
true.

This has got something to do with Processing, that much I understand. Originally posted by ratneshmadaan on ROS Answers with karma: 71 on 2014-03-12 Post score: 0
How can I improve the odometry of a robot using a laser scan, and what package can I use? I don't have a static, global map. I found laser_scan_matcher, but I don't know if it's still supported. Maybe I should play with a dynamic costmap and amcl? Or maybe there is something better? Thanks in advance Originally posted by BP on ROS Answers with karma: 176 on 2014-03-12 Post score: 1 Original comments Comment by jbinney on 2014-03-12: AMCL needs a static map to match against. I've not used laser_scan_matcher but it does look promising, and there's a hydro branch in the source repo. Is using a laser scanner your only option for improving odometry? Comment by BP on 2014-03-12: I think this is the only option, but maybe I am wrong. What kinds of improvement exist? (I know about IMU and this laser scan matching.) Comment by jbinney on 2014-03-13: If you have a camera, visual odometry is pretty common (track visual features using cameras). Here's a ROS package that does visual odometry: http://wiki.ros.org/viso2_ros
I've been trying to build ROS Hydro from source on my laptop, running Ubuntu 13.10. However, I keep running into a compile error with openGL. ==> Processing plain cmake package: 'stage' ==> Building with env: '/home/alec/ros_catkin_ws/install_isolated/env.sh' Makefile exists, skipping explicit cmake invocation... ==> make cmake_check_build_system in '/home/alec/ros_catkin_ws/build_isolated/stage/install' ==> make -j8 -l8 in '/home/alec/ros_catkin_ws/build_isolated/stage/install' make[2]: *** No rule to make target `/usr/lib/x86_64-linux-gnu/libGLU.so', needed by `libstage/libstage.so.4.1.1'. Stop. make[1]: *** [libstage/CMakeFiles/stage.dir/all] Error 2 make: *** [all] Error 2 <== Failed to process package 'stage': Command '/home/alec/ros_catkin_ws/install_isolated/env.sh make -j8 -l8' returned non-zero exit status 2 Reproduce this error by running: ==> cd /home/alec/ros_catkin_ws/build_isolated/stage && /home/alec/ros_catkin_ws/install_isolated/env.sh make -j8 -l8 Command failed, exiting. Trying the commands given to reproduce the error gives the following output: alec@Ares:~/ros_catkin_ws$ cd /home/alec/ros_catkin_ws/build_isolated/stage && sudo /home/alec/ros_catkin_ws/install_isolated/env.sh make -j8 -l8 make: *** No targets specified and no makefile found. Stop. I've looked up the issue in a more general scope, not focused on ROS, and most of the solutions involved adding a missing symlink (though the most recent posts about a failure to compile with libGLU.so are dealing with Ubuntu 11, so it's not exactly up-to-date), but checking the file locations themselves showed that the symlinks were all there. I also tried purging and reinstalling the openGL drivers, which resulted in the same error. Thanks! Originally posted by Alec Thompson on ROS Answers with karma: 1 on 2014-03-12 Post score: 0
I'm compiling camera_info_manager on Ubuntu 13.10 (armhf) and receiving undefined reference to camera_calibration_parsers::readCalibration. See details below: $ catkin_make_isolated --pkg camera_info_manager --install .... ==> Processing catkin package: 'camera_info_manager' ==> Building with env: '/home/ilagi/ros_catkin_ws/install_isolated/env.sh' Makefile exists, skipping explicit cmake invocation... ==> make cmake_check_build_system in '/home/ilagi/ros_catkin_ws/build_isolated/camera_info_manager' ==> make -j4 -l4 in '/home/ilagi/ros_catkin_ws/build_isolated/camera_info_manager' [ 33%] Built target gtest [ 66%] Built target camera_info_manager Linking CXX executable /home/ilagi/ros_catkin_ws/devel_isolated/camera_info_manager/lib/camera_info_manager/unit_test /home/ilagi/ros_catkin_ws/devel_isolated/camera_info_manager/lib/libcamera_info_manager.so: undefined reference to `camera_calibration_parsers::readCalibration(std::string const&, std::string&, sensor_msgs::CameraInfo_<std::allocator<void> >&)' /home/ilagi/ros_catkin_ws/devel_isolated/camera_info_manager/lib/libcamera_info_manager.so: undefined reference to `camera_calibration_parsers::writeCalibration(std::string const&, std::string const&, sensor_msgs::CameraInfo_<std::allocator<void> > const&)' collect2: error: ld returned 1 exit status make[2]: *** [/home/ilagi/ros_catkin_ws/devel_isolated/camera_info_manager/lib/camera_info_manager/unit_test] Error 1 make[1]: *** [CMakeFiles/unit_test.dir/all] Error 2 make: *** [all] Error 2 <== Failed to process package 'camera_info_manager': Command '/home/ilagi/ros_catkin_ws/install_isolated/env.sh make -j4 -l4' returned non-zero exit status 2 What is funny that I remember that I had the same issue when I was compiling for Raspberry Pi and resolved it somehow, but it seems I can't remember how. Any idea is appreciated. Edit: camera_calibration_parses package is installed, that was the first thing I checked Edit2: just realized that last time I raised a ticket in github against camera_info_manager, but then the issue got resolved on its own, so I just reopen the ticket. See make verbose output on the link: https://github.com/ros-perception/image_common/issues/23 Originally posted by evk02 on ROS Answers with karma: 218 on 2014-03-12 Post score: 1 Original comments Comment by jbinney on 2014-03-12: It looks like camera_info_manager tries to link against camera_calibration parsers: https://github.com/ros-perception/image_common/blob/hydro-devel/camera_info_manager/CMakeLists.txt#L23 Can you add a "message()" to the cmake lists to check the value of ${camera_calibration_parsers_LIBRARIES}? Comment by evk02 on 2014-03-12: Thanks, that helped - see my own answer below.
I downloaded libuvc_ros from here: uvc_cam.git. I go into the folder and run:

$ make

I get the following error message:

build/rostoolchain.cmake ..
CMake Error: The source directory "/home/ying/Downloads/NASA_object_identification/libuvc_ros" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
make: *** [all] Error 1

Another question is how to use the camera driver to automatically adjust the image when the light / exposure changes, for example in sunlight and in shade: in sunlight the gain should be decreased and in the shade the gain should be increased. Originally posted by JennyLu on ROS Answers with karma: 1 on 2014-03-12 Post score: 0 Original comments Comment by evk02 on 2014-03-12: You should download it from git https://github.com/ktossell/libuvc_ros.git into your src folder and then from the catkin folder run catkin_make_isolated --pkg libuvc_camera --install Comment by jbinney on 2014-03-12: What ROS distro are you on? I think the last time I tried that package was on fuerte; I don't see branches in the repo for groovy or hydro Comment by JennyLu on 2014-03-12: @evk02 Thanks for the reply, that's what I used to download the source code from git. I am using the groovy distro right now Comment by JennyLu on 2014-03-12: @jbinney I am using Groovy. I learned from here: http://wiki.ros.org/libuvc_camera and downloaded the source from https://github.com/ktossell/libuvc_ros.git , which is said to be in the master branch Comment by JennyLu on 2014-03-12: @evk02 Thanks for the update, I have no problem with uvc_camera at all, but I want to get access to the file where I can adjust the exposure parameters. Because right now, when working outside with other software to take videos, my Logitech HD Pro Webcam C910 works perfectly, but in ROS it is too bright
I just installed ros-hydro-stage-ros, but I can't seem to find roscd. Originally posted by SL Remy on ROS Answers with karma: 2022 on 2014-03-12 Post score: 0
My Ubuntu version is 12.04 and I want to install ROS, but when I followed the instructions on ros.org, at the step "sudo apt-get install ros-groovy-desktop-full" I got the error "Unable to locate package ros-groovy-desktop-full". I also tried other versions of ROS such as hydro and diamondback, and another version of Ubuntu (12.10), but the problem remains. Originally posted by rosmichael on ROS Answers with karma: 1 on 2014-03-12 Post score: 0
Hi dear ROS experts, I have a problem with ethzasl_ptam. I installed rqt but I received this message; how can I fix it? ethzasl_ptam/rqt_ptam/src/rqt_ptam/remote_ptam.cpp:326:65: error: expected constructor, destructor, or type conversion at end of input Thank you. Originally posted by sungmok on ROS Answers with karma: 1 on 2014-03-13 Post score: 0
Does anyone know where I can get a copy of the bcf2000_driver? This was originally mentioned in the question here "answers.ros.org/question/33234/looking-for-a-usb-mixer-ros-interface/?answer=33304#post-id-33304", that points to the now deprecated "kforge.ros.org/sandbox/bcf2000_driver". Thanks, ioannis Originally posted by i.havoutis on ROS Answers with karma: 36 on 2014-03-13 Post score: 0
In Hydro, I create a launch file with the following to get data from my attached Kinect: It publishes a lot of stuff but not "/camera/depth_registered/image_rect". This worked fine with Groovy. Dave Originally posted by davevh on ROS Answers with karma: 36 on 2014-03-13 Post score: 0
Hello, Could somebody tell me how I can install the smach-viewer package in ROS Hydro, please? I thought the package would already be in the latest distribution, but I have just installed Hydro desktop-full and it isn't there. I also tried apt-get install ros-hydro-smach-viewer and nothing happened. Thanks in advance, Roberto Originally posted by rober on ROS Answers with karma: 16 on 2014-03-13 Post score: 0
Hello, I know I have seen an answered question where someone was able to make two different versions of ROS communicate with each other, but I can't seem to find the question again. My computer runs Groovy and I have a Raspberry Pi with Fuerte on it. I'm wondering to what extent I can make them work together, as I would like to use my computer for simulation and the Pi on the robot. In case you ask: I already tried to install Groovy on Raspbian and it fails because of a libboost problem during the ROS install. I'm willing to try installing Groovy again, or to install Hydro as well if the communication between Groovy and Hydro is better. Thanks Originally posted by Maya on ROS Answers with karma: 1172 on 2014-03-13 Post score: 0
I launched amcl with a turtlebot2, but the turtlebot2 doesn't move. I have installed ROS hydro on my PC. When I launched amcl on groovy, my turtlebot2 moved correctly. I checked some topics with the rostopic echo command: /navigation_velocity_smoother/raw_cmd_vel and /cmd_vel_mux/input/navi don't respond, /move_base_simple/goal outputs values, and /mobile_base/commands/velocity outputs values when I touch the bumper sensor. I launched the following commands:

roslaunch turtlebot_bringup minimal.launch (on turtlebot PC)
ssh turtlebot@turtlebot (on workstation)
roslaunch turtlebot_navigation amcl_demo.launch map_file:=/tmp/my_map.yaml (on workstation, set to point to turtlebot PC)
roslaunch turtlebot_rviz_launchers view_navigation.launch (on workstation PC)

It gives a warning:

[Warn] Waiting on transform from base_footprint to map to become available before running costmap, tf error:

If anybody has a turtlebot moving correctly on hydro, please teach me how you did it; I would appreciate any clue to solving this problem. Originally posted by Ken_in_JAPAN on ROS Answers with karma: 894 on 2014-03-13 Post score: 0 Original comments Comment by Ken_in_JAPAN on 2014-03-13: I can move a turtlebot2 with keyboard_teleop and a ps3 joystick. Comment by Ken_in_JAPAN on 2014-03-13: If I type the next message in a terminal in ssh turtlebot@turtlebot, my turtlebot goes straight: $rostopic pub -1 /cmd_vel_mux/input/navi geometry_msgs/Twist -- '[0.3, 0.0, 0.0]' '[0.1, 0.0, 0.0]' Comment by Ken_in_JAPAN on 2014-03-18: As I guessed this problem was a parameter, I referred to the following URL to repair my yaml: http://answers.ros.org/question/78196/sensor-raytrace-error-when-maps-set-to-voxel/ After that, a warning said "you must specify at least three points for the robot footprint, reverting to previous footprint". Comment by Ken_in_JAPAN on 2014-03-18: This alarm was described in costmap_2d_ros.cpp. So, I added a footprint to costmap_common_param.yaml. All warnings and errors were gone from the terminal, but my turtlebot2 doesn't work!! I drag the mouse after pushing 2D Nav Goal in Rviz. Comment by Ken_in_JAPAN on 2014-03-18: I typed the next command on my workstation with ssh turtlebot@turtlebot: $rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped '{header: {frame_id:"map"}, pose:{position: {x: 0.5, y: 0.0, z: 0.0}, orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}}}' My Turtlebot moved, but my problem is not solved yet. Comment by Ken_in_JAPAN on 2014-03-18: I guessed that this problem was the network configuration. If I run rostopic pub -r10 /hello std_msgs/String "hello" on the workstation and rostopic echo /hello on the Turtlebot, the message "hello" is not printed. Does anybody know how to fix it? Comment by bit-pirate on 2014-03-23: @Ken_in_JAPAN For the next time, better post additional information by editing your initial question instead of multiple comments (hard to read).
Hi, we would like to simplify the dependencies installation process using rosdep. It deals easily with debian packages, but we would like to use source code from svn too. Answer 74404 says that "rosdep is not designed to pull packages from source", but we found ROSAria's libaria.rdmanifest and how they do it; even though it's a bit tricky, we have found a way to use svn. Anyway, is there a way to add svn dependencies a bit more cleanly? We have found out that a tar.gz file is needed in "uri:"; how can we get rid of it? Thanks in advance! Originally posted by martimorta on ROS Answers with karma: 843 on 2014-03-13 Post score: 0
Hello, I have a package that I'm developing on my computer using a catkin workspace. To test some network issues, I need to run a node from this package on other machines. I read the tutorial "Running ROS across multiple machines" but I don't know how to deploy or copy my package to the remote machine. The tutorial uses a pre-installed package (rospy_tutorials). I think it's a common task and should be simple to do. Thanks Originally posted by ricardoej on ROS Answers with karma: 85 on 2014-03-13 Post score: 0
Let's assume I have a class cA, which has a node that subscribes to a topic and publishes at a "constant" frequency. Class cA is a member of class cB. Where should I put "while(ros::ok())" for publishing purposes? Because if I put it in the constructor, it will block the constructor of the "higher" class (of course I can make a function cA::Publish() and run everything further in cB::Publish(), or split the classes into more nodes and classes, but I am looking for a better solution). Should I use threads? I have never used them, so I don't know if I should. I found this: http://wiki.ros.org/roscpp/Overview/Callbacks%20and%20Spinning Is there an easy example? Thanks in advance Originally posted by BP on ROS Answers with karma: 176 on 2014-03-13 Post score: 1
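One common pattern that avoids a blocking while(ros::ok()) loop inside the class is to give cA a ros::Timer for the periodic publishing and let a single ros::spin() in main() drive both the subscription callback and the timer, so the constructor of cA (and therefore of cB) returns immediately. A minimal sketch; the message type, topic names and the 10 Hz rate are only placeholders:

#include <ros/ros.h>
#include <std_msgs/String.h>

class cA
{
public:
  explicit cA(ros::NodeHandle& nh)
  {
    pub_   = nh.advertise<std_msgs::String>("out", 10);
    sub_   = nh.subscribe("in", 10, &cA::inputCb, this);
    timer_ = nh.createTimer(ros::Duration(0.1), &cA::publishCb, this);  // fires at 10 Hz
  }

private:
  void inputCb(const std_msgs::String::ConstPtr& msg) { last_ = *msg; }  // store latest input
  void publishCb(const ros::TimerEvent&)              { pub_.publish(last_); }

  ros::Publisher  pub_;
  ros::Subscriber sub_;
  ros::Timer      timer_;
  std_msgs::String last_;
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ca_node");
  ros::NodeHandle nh;
  cA a(nh);        // constructor returns immediately, so cB could hold a cA member the same way
  ros::spin();     // services both the subscriber and the timer callbacks
  return 0;
}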
I'm trying to launch the navigation stack for the pr2. Here's what I run: rosrun map_server map_server basement_map.yaml roslaunch pr2_navigation_global rviz_move_base.launch (on the robot) roslaunch pr2_2dnav pr2_2dnav.launch After I launch the 2dnav on the robot, the robot model appears in rviz. After about 10 seconds the red markers for obstacles appear and rviz crashes immediately. Usually it doesn't give an error message but sometimes there's also this message: *** glibc detected *** /opt/ros/groovy/lib/rviz/rviz: free(): corrupted unsorted chunks: 0x000000000280e750 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x7eb96)[0x7fd273539b96] /opt/ros/groovy (there's a lot more text, here's the full error message: pastebin.com/ArwKkirr) Sometimes the message looks similar, but starts with "corrupted double linked list" or "malloc error". The same thing happens if I launch pr2_interactive_manipulation with navigation enabled (without navigation everything is fine). I'm using Ubuntu 12.04, Nvidia driver 331 (I also tried it with 304 and with a generic driver, same thing happens). Update: seems that the problem is caused by GridCells visualization. If I start with a fresh rviz ("rosrun rviz rviz") and add navigation-related topics one by one, rviz crashes after GridCells are added with a non-empty topic (for example, "/move_base_node/local_costmap/obstacles") Update 2: Another way for rviz to crash turned out to be adding DepthCloud with a non-empty DepthMap topic (for example, "/head_mount_kinect/depth_registered/image"). Originally posted by sonyaa on ROS Answers with karma: 23 on 2014-03-13 Post score: 1 Original comments Comment by demmeln on 2014-04-04: Just a longshot: are you possibly mixing different ROS versions? Or different versions of boost? Otherwise, could be a bug in the GridCells display and it might be worth filing a bug report. Comment by sonyaa on 2014-04-04: Thanks for the suggestion! The versions of ROS and boost are the same on the robot and the machine I'm running rviz from. I've also just discovered one more way to make rviz crash (see update).
I am trying to run turtlesim_node with hydro. I am running Ubuntu 12.10 with roscore running. Every time I try to run the node I get: /opt/ros/hydro/lib/turtlesim/turtlesim_node: symbol lookup error: /opt/ros/hydro/lib/turtlesim/turtlesim_node: undefined symbol: _ZN3ros7console5printEPNS0_10FilterBaseEPvNS0_6levels5LevelEPKciS7_S7_z Any help would be appreciated. Thanks, Morpheus Originally posted by Morpheus on ROS Answers with karma: 111 on 2014-03-14 Post score: 0
I would like to try running a CNC mill using ROS. More specifically I want to use RVIZ to help us visualize a CNC task prior to running it live on the CNC. My understanding is that one of the first step in this process is exporting my robot URDF using the SolidWorks URDF exporter. The exporter seems geared towards exporting revolute joints but doesn't seem to have any functionality to setup linear actuators / prismatic joints. Is this possible? If so what are the steps for specifying a linear actuator? If someone can point me the way I would love to write a tutorial about this. Here are a couple of example 3 axis gantries that use linear actuators similar to the one I am working with: GrabCAD: http://grabcad.com/library/damki-cnc-router-1 ShapeOKO: http://3dwarehouse.sketchup.com/model.html?id=f6ed9052b5de9a00dbd8f267ad6945df Originally posted by kscottz on ROS Answers with karma: 209 on 2014-03-14 Post score: 0 Original comments Comment by gvdhoorn on 2014-03-26: Not really an answer, but AFAIK the exporter should support setting up prismatic joints. Not sure about the quality / correctness of the exported URDF then though.
Hi there, I have installed ros-fuerte-desktop-full. I am trying to build a package that contains OpenCV code, but I am getting the following error: Cannot specify link libraries for target "MatObjects" which is not built by this project. I have added the following to CMakeLists.txt:

FIND_PACKAGE( OpenCV REQUIRED )
target_link_libraries(MatObjects ${OpenCV_LIBS})

Here is the output of some flags:

#echo $PKG_CONFIG_PATH
/opt/ros/fuerte/lib/pkgconfig

This path contains opencv.pc.

#pkg-config --cflags opencv
-I/opt/ros/fuerte/include/opencv -I/opt/ros/fuerte/include

#pkg-config --libs opencv
-L/opt/ros/fuerte/lib -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab

I have also tried to run the code in Geany using g++ with pkg-config --cflags opencv, but it says "Package opencv was not found in the pkg-config search path" even though it is there. Please help. Originally posted by Latif Anjum on ROS Answers with karma: 79 on 2014-03-14 Post score: 1
I have written a code to subscribe to the /camera/depth_registered/points topic to receive point cloud data from the kinect and then display it on a pcl::visualization::PCLVisualizer viewer. The problem is that, while successfully subscribed to the topic (as proven using the rostopic info command), the callback function is not actually being executed. I know this because, at the very beginning of the function, I've added a simple print statement confirming, "Entering callback." This statement is never printed. In fact, the while loop (see below) is never exited i.e. the boolean new_cloud_available_flag is never set to true. Here is the relevant section of the code: //Including all possible libraries #include <iostream> #include <cstdio> #include <cstdlib> #include <ros/ros.h> #include <pcl/console/parse.h> #include <pcl/point_types.h> #include <pcl/visualization/pcl_visualizer.h> #include <pcl/io/openni_grabber.h> #include <pcl/sample_consensus/sac_model_plane.h> #include <pcl/people/ground_based_people_detection_app.h> #include <pcl/common/time.h> #include <sensor_msgs/PointCloud2.h> #include <pcl_conversions/pcl_conversions.h> typedef pcl::PointXYZRGBA PointT; typedef pcl::PointCloud<PointT> PointCloudT; // PCL viewer // pcl::visualization::PCLVisualizer viewer("PCL Viewer"); // Mutex: // boost::mutex cloud_mutex; bool new_cloud_available_flag; enum { COLS = 640, ROWS = 480 }; PointCloudT::Ptr cloud(new PointCloudT); void callback(const sensor_msgs::PointCloud2ConstPtr& msg){ std::cout << "Entering callback"; cloud_mutex.lock(); sensor_msgs::PointCloud2 msg0 = *msg; //Changing pointer to actual cloud object PointCloudT cloud0; pcl::fromROSMsg(msg0, cloud0); //Converting sensor_msgs::PointCloud2 to PointCloudT *cloud = cloud0; new_cloud_available_flag = true; std::cout << "Converted" << endl; cloud_mutex.unlock(); } int main (int argc, char** argv) { ros::init(argc, argv, "subscriber_node"); ros::NodeHandle n; new_cloud_available_flag = false; ros::Subscriber sub = n.subscribe<sensor_msgs::PointCloud2>("/camera/depth_registered/points", 1000, callback); // Subscribe to kinect: std::cout<<"initiate reading\n"; ros::Subscriber sub = n.subscribe<sensor_msgs::PointCloud2>("/camera/depth_registered/points", 50, callback); // Wait for the first frame: while(!new_cloud_available_flag) { ros::Time t0 = ros::Time::now(); while(ros::Time::now()-t0 < ros::Duration(0.001)); std::cout<<"waiting for first frame\n"; } std::cout<<"got first frame\n"; //More code later... Originally posted by oswinium on ROS Answers with karma: 105 on 2014-03-14 Post score: 0
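For what it's worth, the most likely culprit in the snippet above is that nothing ever spins: roscpp only delivers subscription callbacks from ros::spin()/ros::spinOnce() (or an AsyncSpinner), so the busy-wait loop blocks forever without servicing the queue. A hedged sketch of just the waiting part, reusing the question's new_cloud_available_flag:

// Replace the busy-wait with a loop that actually services callbacks.
ros::Rate poll_rate(100);
while (ros::ok() && !new_cloud_available_flag)
{
  ros::spinOnce();              // lets callback() run when a cloud arrives
  poll_rate.sleep();
}
std::cout << "got first frame\n";

The duplicate ros::Subscriber sub declaration in the snippet would also need to be removed for the file to compile.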
Hi all, I am trying to install ROS Hydro through Homebrew on my laptop with Mac OS X 10.9.2. Resolving the dependencies does not seem to be easy, and I am not able to complete this phase due to this error:

$ rosdep install --from-paths src --ignore-src --rosdistro groovy -y
WARNING: Metapackage "diagnostics" must buildtool_depend on catkin.
Error: No available formula for uuid
Error: No available formula for tango-icon-theme
executing command [brew install uuid]
Error: No available formula for uuid
Searching taps...
ERROR: the following rosdeps failed to install
homebrew: command [brew install uuid] failed

What is uuid? Can someone help me please? Thanks in advance, Andrea Originally posted by -Andrew- on ROS Answers with karma: 3 on 2014-03-14 Post score: 0
I have two classes, an image publisher and an image processor. Both classes start internal boost::threads for camera and sensor drivers and image processing worker threads. Both classes take long in their constructor: initializing cameras and sensors (RIIA -> image publisher), self-calibration, initializing the GPU (image processor). Both run nice and smooth if they are in separate nodes, where main just calls ros::init, creates a boost::shared_ptr with the class instance, and then ros::spin()s: int main( int argc, char** argv ) { ros::init( argc, argv, "name" ); boost::shared_ptr<ImageProcessor> lp_proc = boost::shared_ptr<ImageProcessor>( new ImageProcessor() ); ros::spin(); return 0; } Again, everything runs nice and smooth as nodes. But both classes interact heavily (exchanging images at 100 Hz), so I'd like to put them in nodelets. I just added a boost::shared_ptr to the inheriting class and create the instance in onInit(), which looks like: class MyNodeletClass : public nodelet::Nodelet { private: boost::shared_ptr<ImageProcessor> lp_proc; public: virtual void onInit() { lp_proc = boost::shared_ptr<ImageProcessor>( new ImageProcessor() ); } }; However, when run as nodelets they occasionally crash with memory corruptions, failed service calls, etc. Is there something to be very aware of when putting nodes into nodelets? Or is there maybe a timeout for onInit() within which a nodelet has to become responsive after loading? That might be a problem, as my constructors occasionally take quite long. Originally posted by Wolf on ROS Answers with karma: 7555 on 2014-03-14 Post score: 0
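A pattern that sometimes helps here, offered as a sketch rather than a known fix for the crashes: onInit() runs on one of the nodelet manager's callback threads, so constructors that block there for a long time can interfere with the other nodelets sharing the manager. Deferring the heavy construction to a worker thread keeps onInit() short. ImageProcessor is the class from the question; everything else (names, the plain boost::thread member) is illustrative, and the usual PLUGINLIB export macro is omitted.

#include <nodelet/nodelet.h>
#include <boost/thread.hpp>
#include <boost/shared_ptr.hpp>

// ImageProcessor is assumed to be declared/defined as in the question.

class MyNodeletClass : public nodelet::Nodelet
{
public:
  virtual void onInit()
  {
    // return quickly; build the heavy object in the background
    init_thread_ = boost::thread(&MyNodeletClass::initInBackground, this);
  }

  virtual ~MyNodeletClass()
  {
    if (init_thread_.joinable())
      init_thread_.join();           // make sure construction finished before unloading
  }

private:
  void initInBackground()
  {
    lp_proc_.reset(new ImageProcessor());   // slow: cameras, calibration, GPU setup
  }

  boost::thread init_thread_;
  boost::shared_ptr<ImageProcessor> lp_proc_;
};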
Hi everyone. I sincerely apologize if my question sounds stupid or has been answered before (I searched the forum and have read the tutorials but didn't find my answer). I'm currently building a URDF file for our robot using xacro. What I've noticed is that the origin of the frame of reference of every link is in the center of that link. For example, when modeling the hands, the tf will point to the center of the hand. I want to change this so the tf frames represent the joints of the hands and other parts of the robot, and not the middle of the link. How can I do that? By the way, the current robot model is:

    <?xml version="1.0"?>
    <robot xmlns:xacro="----KARMA INSUFFICIENT" name="my_robot">

      <!-- base cylinder properties -->
      <xacro:property name="cyl_radius" value=".5"/>
      <xacro:property name="cyl_height" value=".3"/>
      <xacro:property name="cyl_buttom" value=".2"/> <!-- NOTE: not used currently -->

      <!-- wheel properties -->
      <xacro:property name="wheel_height" value=".2"/>
      <xacro:property name="wheel_dist_to_center" value=".2"/>

      <!-- laserscanner properties -->
      <xacro:property name="laserscanner_dist_to_center" value=".2"/>
      <xacro:property name="laserscanner_height" value=".2"/>

      <!-- body properties -->
      <xacro:property name="body_dist_to_center" value=".35"/>
      <xacro:property name="body_height" value="1.5"/>
      <xacro:property name="body_width" value="0.07"/>

      <!-- shoulder cylinder properties -->
      <xacro:property name="shoulder_to_body_distance_x" value=".15"/>
      <xacro:property name="shoulder_rec_size" value=".07"/>
      <xacro:property name="shoulder_length" value=".5"/>
      <xacro:property name="shoulder_min_height" value=".8"/>
      <xacro:property name="shoulder_max_height" value=".2"/>
      <xacro:property name="max_shoulder_leveler_velocity" value=".2"/>
      <xacro:property name="max_shoulder_leveler_effort" value=".2"/>

      <!-- hands properties -->
      <xacro:property name="upper_arm_length" value=".5"/>
      <xacro:property name="forearm_length" value=".2"/>
      <xacro:property name="upper_arm_min_angle" value=".2"/>
      <xacro:property name="upper_arm_max_angle" value=".2"/>
      <xacro:property name="forearm_min_angle" value="-4"/>
      <xacro:property name="forearm_max_angle" value="4"/>
      <xacro:property name="arm_max_effort" value=".2"/>
      <xacro:property name="arm_max_velocity" value=".2"/>

      <!-- Starting the robot definition -->
      <link name="base_link">
        <visual>
          <geometry>
            <cylinder radius="${cyl_radius}" length="${cyl_height}"/>
          </geometry>
          <material name="blue">
            <color rgba="0 0 .8 1"/>
          </material>
        </visual>
        <collision>
          <geometry>
            <cylinder radius="${cyl_radius}" length="${cyl_height}"/>
          </geometry>
          <origin xyz="0 0 0" rpy="0 0 0"/>
        </collision>
      </link>

      <joint name="base_to_body_joint" type="fixed">
        <parent link="base_link"/>
        <child link="body"/>
        <origin xyz="${body_dist_to_center} 0 ${(cyl_height + body_height)/2}"/>
      </joint>

      <link name="body">
        <visual>
          <geometry>
            <box size="${body_width} ${body_width} ${body_height}"/>
          </geometry>
          <material name="red">
            <color rgba="1 0 0 1"/>
          </material>
        </visual>
        <collision>
          <geometry>
            <box size="${body_width} ${body_width} ${body_height}"/>
          </geometry>
        </collision>
      </link>

      <joint name="shoulder_leveler" type="prismatic">
        <parent link="body"/>
        <child link="shoulder"/>
        <origin xyz="${shoulder_to_body_distance_x} 0 ${(shoulder_min_height + shoulder_max_height - body_height)/2}" rpy="0 0 0"/>
        <axis xyz="0 0 1"/>
        <limit lower="${shoulder_min_height}" upper="${shoulder_max_height}" effort="${max_shoulder_leveler_effort}" velocity="${max_shoulder_leveler_velocity}" />
      </joint>

      <link name="shoulder">
        <visual>
          <geometry>
            <box size="${shoulder_rec_size} ${shoulder_length} ${shoulder_rec_size}"/>
          </geometry>
          <material name="red_1">
            <color rgba="0.5 0 0 1"/>
          </material>
        </visual>
        <collision>
          <geometry>
            <box size="${shoulder_rec_size} ${shoulder_length} ${shoulder_rec_size}"/>
          </geometry>
        </collision>
      </link>

      <xacro:macro name="hand" params="prefix reflect">
        <joint name="${prefix}_shoulder_joint" type="revolute">
          <parent link="shoulder"/>
          <child link="${prefix}_upper_arm"/>
          <axis xyz="0 1 0"/>
          <origin xyz="0 ${reflect * shoulder_length/2} ${-upper_arm_length/2}"/>
          <limit lower="${upper_arm_min_angle}" upper="${upper_arm_max_angle}" effort="${arm_max_effort}" velocity="${arm_max_velocity}" />
        </joint>
        <link name="${prefix}_upper_arm">
          <visual>
            <geometry>
              <box size="0.07 0.07 ${upper_arm_length}"/>
            </geometry>
            <material name="green">
              <color rgba="0 1 0 1"/>
            </material>
          </visual>
        </link>
      </xacro:macro>

      <xacro:hand prefix="left" reflect="-1"/>
      <xacro:hand prefix="right" reflect="1"/>

I apologize for the long post.

Originally posted by Kourosh on ROS Answers with karma: 23 on 2014-03-14 Post score: 2
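A sketch of the usual way to handle this, adapted to the hand macro above: in URDF, a link's frame is placed by its parent joint's <origin>, so put the joint origin where you want the tf frame (e.g. at the shoulder) and then shift the geometry itself with an <origin> inside <visual>/<collision>. The link frame, and therefore the tf frame, then sits at the joint rather than in the middle of the box. The values below are illustrative only.

    <joint name="${prefix}_shoulder_joint" type="revolute">
      <parent link="shoulder"/>
      <child link="${prefix}_upper_arm"/>
      <axis xyz="0 1 0"/>
      <!-- the frame of ${prefix}_upper_arm (its tf frame) is placed here, at the shoulder -->
      <origin xyz="0 ${reflect * shoulder_length/2} 0"/>
      <limit lower="${upper_arm_min_angle}" upper="${upper_arm_max_angle}"
             effort="${arm_max_effort}" velocity="${arm_max_velocity}"/>
    </joint>

    <link name="${prefix}_upper_arm">
      <visual>
        <!-- shift the box relative to the link frame, instead of moving the frame -->
        <origin xyz="0 0 ${-upper_arm_length/2}"/>
        <geometry>
          <box size="0.07 0.07 ${upper_arm_length}"/>
        </geometry>
        <material name="green">
          <color rgba="0 1 0 1"/>
        </material>
      </visual>
    </link>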
I am using the app_manager and three rapps. In one of them, the navigate rapp, I get this message when I try to use a service of my own:

    [ERROR] [WallTime: 1394827317.221882] [Client 0] [id: call_service:/app_manager/application/load_map_db:7] call_service InvalidServiceException: Service /app_manager/application/load_map_db does not exist

Why would the app manager tell me that this service is non-existent? I am running Hydro on an Ubuntu 12.04 system. I'm pretty sure it's there: in the srv folder there's a MapPublish.srv file, and in CMakeLists.txt there's an entry for the service in the add_service_files section. Everything compiles fine. Other services that I've written work. What have I done wrong? I thought it was some kind of naming problem, but I think the names I've chosen are somewhat original. Any help would be appreciated. I can add more info if requested.

EDIT 1: My project works like this: I have a TurtleBot and I use an internet service to transmit direction info from one computer to another, then through the Chrome web browser, websockets and the rosbridge server to the TurtleBot. I have several services that I've authored that help me out. So far that works. I have also tried to implement amcl and gmapping functions that I start and stop remotely. I use the app_manager and three rapps: one for teleop, one for gmapping, and one for amcl. I should provide documents showing some of these functions, but the idea is that the rosbridge server tells the app_manager to start and stop different launch files, allowing for this different functionality. This is why my service 'load_map_db' is preceded by '/app_manager/application/'. What I don't do is use the rocon launcher. Everything runs on the same port and there's no need to switch (I think they call it flip) from one port to another. I do use the app_manager 'start_app' and 'stop_app'. I thought briefly that the rosbridge server didn't know about the app_manager navigation services because they weren't started when the rosbridge server was started. This didn't pan out: I restarted the rosbridge server as well as I could in JavaScript at the time the load_map_db service was used and there was no change. I don't understand why some of the services (like 'list_maps') work but this one doesn't. Long ago you could have inserted a control character into a file or file name and not known it. Maybe I have done this? I suspect today's text editors will not let you do this. Below I include some source material.
This is my launch file for the whole program:

    <launch>
      <arg name="rapp_lists" default="tele_presence/tele_presence_apps.rapps"/>
      <include file="$(find tele_presence)/launch/includes/app_manager_rocon.launch.xml">
        <arg name="rapp_lists" value="$(arg rapp_lists)" />
      </include>
      <include file="$(find rosbridge_server)/launch/rosbridge_websocket.launch"/>
    </launch>

This is my rapps file:

    apps:
      - tele_presence/mapping
      - tele_presence/navigate
      - tele_presence/teleop

This is the interface file for 'navigate':

    publishers:
      - scan
      - /tf
      - tf_changes
      - /tf_changes
      - map
      - amcl_pose
      - move_base/TrajectoryPlannerROS/global_plan
    subscribers:
      - mobile_base/commands/velocity
      - initialpose
      - /initialpose
      - move_base_simple/goal
      - /move_base_simple/goal
      - move_base/goal
      - /move_base/goal
      - /move_base/goal/goal
    services:
      - save_map
      - rename_map
      - delete_map
      - load_map_db
      - list_map

EDIT 2: Some Python:

    def init_fn():
        global client, db, collection, grid, map_pub, meta_pub
        rospy.init_node('turtlebot_db', anonymous=True)
        map_pub = rospy.Publisher('map', OccupancyGrid, latch=True)
        meta_pub = rospy.Publisher('map_metadata', MapMetaData, latch=True)
        rospy.Service('load_map_db', MapPublish, map_load)
        rospy.spin()

    def map_load(req):
        # global map_pub, meta_pub
        # whole_map = MapWithMetaData()
        whole_map = collection.find_one({ 'info.map_id' : req.map_id })
        if whole_map == None :
            return ['badmap']
        ##
        ## SOME BORING STUFF HERE...
        ##
        map_pub.publish(oldmap)
        meta_pub.publish(oldmap.info)
        #
        return ['done']

Originally posted by david.c.liebman on ROS Answers with karma: 125 on 2014-03-14 Post score: 1

Original comments

Comment by david.c.liebman on 2014-03-15: Should I add that I am using ROSBRIDGE SERVER? Does anyone follow questions about rosbridge server? I am using v.2, and the tutorial at this site: http://wiki.ros.org/roslibjs/Tutorials/BasicRosFunctionality

Comment by Daniel Stonier on 2014-03-15: That's a runtime error. It probably built the .srv quite fine, but the error message there is saying that the runtime service at /app_manager/application/load_map_db does not exist. Use rosservice list to figure out where your service is, or backtrack to figure out why your service did not start.

Comment by Daniel Stonier on 2014-03-15: You might want to add more info about how you're using the rosbridge server. Hard to say whether it is relevant or not given just that information.

Comment by david.c.liebman on 2014-03-17: I have added my Python (which may be the problem) to the original question.

Comment by Daniel Stonier on 2014-03-17: Is it a case of not having used wait_for_service before calling the service?

Comment by david.c.liebman on 2014-03-18: How does one 'wait_for_service' with rosbridge server? I googled to no avail.

Comment by jihoonl on 2014-03-18: Hm... it seems like wait_for_service is not supported by roslibjs and rosbridge yet. I have opened an issue in the roslibjs repo. We can continue to discuss there. https://github.com/RobotWebTools/roslibjs/issues/70

Comment by Daniel Stonier on 2014-03-18: You could probably verify that it is indeed the problem by just putting a sleep of some sort in your program for n seconds before calling the service. If there's no error, then you know it's just your service client calling the server before it's been constructed on the other side.

Comment by david.c.liebman on 2014-03-19: Yes, in fact a ten second wait removed the error (on two tests so far).

Comment by Daniel Stonier on 2014-03-19: aha, bingo :)
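A sketch of the workaround the comment thread converged on, written against the roslibjs API from the tutorial linked above. roslibjs had no wait_for_service at the time, so the call is simply delayed until the navigate rapp has had time to advertise its services. The service type 'tele_presence/MapPublish' and the existing ros connection object are assumptions about the questioner's setup.

    // Assumes 'ros' is an already-connected ROSLIB.Ros instance.
    var loadMapClient = new ROSLIB.Service({
      ros : ros,
      name : '/app_manager/application/load_map_db',
      serviceType : 'tele_presence/MapPublish'   // assumption: package/MapPublish
    });

    function loadMap(mapId) {
      var request = new ROSLIB.ServiceRequest({ map_id : mapId });
      loadMapClient.callService(request, function(result) {
        console.log('load_map_db returned: ' + JSON.stringify(result));
      });
    }

    // Crude stand-in for wait_for_service (see the roslibjs issue linked in the
    // comments): give the rapp a few seconds to start before the first call.
    setTimeout(function() { loadMap('my_map'); }, 10000);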
    Linking CXX executable /home/joao/catkin_ws/devel/lib/testbot_description/parser
    /opt/ros/hydro/lib/liburdf.so: undefined reference to `ros::console::print(ros::console::FilterBase*, void*, ros::console::levels::Level, char const*, int, char const*, char const*, ...)'
    /opt/ros/hydro/lib/liburdf.so: undefined reference to `ros::console::print(ros::console::FilterBase*, void*, ros::console::levels::Level, std::basic_stringstream<char, std::char_traits<char>, std::allocator<char> > const&, char const*, int, char const*)'
    collect2: ld returned 1 exit status
    make[2]: *** [/home/joao/catkin_ws/devel/lib/testbot_description/parser] Error 1
    make[1]: *** [testbot_description/CMakeFiles/parser.dir/all] Error 2
    make: *** [all] Error 2
    Invoking "make" failed

[ROS Hydro and Ubuntu 12.04] I am trying to follow the urdf parser tutorial. My testbot_description package has a /src/parser.cpp file and a /urdf/my_robot.urdf file. Both are identical to the ones suggested by the tutorial page (I already copied and pasted the code to make sure). This is my CMakeLists:

    cmake_minimum_required(VERSION 2.8.3)
    project(testbot_description)

    ## Find catkin macros and libraries
    ## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
    ## is used, also find other catkin packages
    find_package(catkin REQUIRED COMPONENTS urdf)

    ## DEPENDS: system dependencies of this project that dependent projects also need
    catkin_package(
    #  INCLUDE_DIRS include
    #  LIBRARIES testbot_description
    #  CATKIN_DEPENDS urdf
    #  DEPENDS system_lib
    )

    include_directories(
      ${catkin_INCLUDE_DIRS}
    )

    add_executable(parser src/parser.cpp)
    target_link_libraries(parser ${catkin_LIBRARIES})

And this is my package.xml:

    <package>
      <name>testbot_description</name>
      <version>0.0.1</version>
      <description>The testbot_description package</description>
      <maintainer email="[email protected]">Joao Cicero</maintainer>
      <license>BSD</license>

      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>urdf</build_depend>
      <run_depend>urdf</run_depend>

      <!-- The export tag contains other, unspecified, tags -->
      <export>
        <!-- You can specify that this package is a metapackage here: -->
        <!-- <metapackage/> -->
        <!-- Other tools can request additional information be placed here -->
      </export>
    </package>

Originally posted by Joao Ferreira on ROS Answers with karma: 81 on 2014-03-14 Post score: 0

Original comments

Comment by BennyRe on 2014-03-15: Please also post your package.xml

Comment by Joao Ferreira on 2014-03-15: Thanks, BennyRe. I just added it. I hope it will help.

Comment by Joao Ferreira on 2014-03-15: I just realized that when I comment out the code that calls initFile() I get rid of this error and I am able to build my package. In my parser.cpp, if I keep the "if (!model.initFile(urdf_file)) { ROS_ERROR("Failed to parse urdf file"); return -1; }" the error appears again.
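For reference, the missing ros::console::print symbols live in rosconsole, which the linker is not being given here; the fix usually reported for this tutorial on Hydro is to list roscpp as a catkin component so that ${catkin_LIBRARIES} pulls in roscpp and rosconsole, and to add the matching roscpp build/run dependencies to package.xml. A sketch of the relevant CMakeLists.txt lines, assuming the rest of the file stays as above:

    find_package(catkin REQUIRED COMPONENTS roscpp urdf)

    include_directories(${catkin_INCLUDE_DIRS})

    add_executable(parser src/parser.cpp)
    target_link_libraries(parser ${catkin_LIBRARIES})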
Hi, I am Sonny. I am currently working with an Arduino and I am trying to use it with ROS. When I try to build my first basic sketch with ROS (the HelloWorld example in ros_lib), I get an error like the one below:

    In file included from /opt/ros/hydro/include/ros/node_handle.h:31,
                     from /home/traininghes/arduino-1.5.2/libraries/ros_lib/ros.h:38,
                     from HelloWorld.pde:6:
    /opt/ros/hydro/include/ros/forwards.h:37: fatal error: boost/shared_ptr.hpp: No such file or directory
    compilation terminated.

I am unable to verify the sketch because I don't understand the error. I would appreciate it if someone could look at this issue and help me get past this problem. Thanks.

Originally posted by sonny on ROS Answers with karma: 33 on 2014-03-15 Post score: 0
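A note on the error above: the include chain shows the sketch pulling in the desktop roscpp headers from /opt/ros/hydro/include (which require Boost) instead of only the AVR-friendly headers generated for rosserial. One step commonly suggested in the rosserial_arduino tutorials is to remove any stale copy of ros_lib and regenerate it inside the Arduino libraries folder; the paths below are taken from the error message and may need adjusting to your setup.

    # assumes rosserial is installed and the ROS environment is sourced
    sudo apt-get install ros-hydro-rosserial-arduino ros-hydro-rosserial

    cd /home/traininghes/arduino-1.5.2/libraries
    rm -rf ros_lib
    rosrun rosserial_arduino make_libraries.py .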
Hello, for some reason I'm unable to install this package. I'm using this command:

    sudo apt-get install -y ros-hydro-visp-tracker

and this is the error I'm getting:

    executing command [sudo apt-get install -y ros-hydro-visp-tracker]
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package ros-hydro-visp-tracker
    ERROR: the following rosdeps failed to install
      apt: command [sudo apt-get install -y ros-hydro-visp-tracker] failed

Originally posted by AMiNAsFour on ROS Answers with karma: 7 on 2014-03-15 Post score: 0

Original comments

Comment by Maya on 2014-03-15: What is the error message? And just in case, check that you didn't misspell "hydro".

Comment by AMiNAsFour on 2014-03-15:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package ros-hydro-visp-tracker
    ERROR: the following rosdeps failed to install
      apt: command [sudo apt-get install -y ros-hydro-visp-tracker] failed

Comment by Maya on 2014-03-15: You should edit your question to add that kind of information. It'll be easier for everyone.

Comment by AMiNAsFour on 2014-03-15: Thanks, but do you have any idea how to solve the problem?
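For what it's worth, "E: Unable to locate package" means apt simply has no package with that name in its index. That usually comes down to the ROS apt repository not being configured or updated for this Ubuntu release, or the package never having been released for it. A few generic checks, none of them specific to visp_tracker:

    cat /etc/apt/sources.list.d/ros-latest.list   # is the ROS repository set up?
    sudo apt-get update                           # refresh the package index
    apt-cache search ros-hydro-visp               # which ViSP packages are actually available?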
It could be that I'm missing something, but does anyone know how to delete an attachment from a wiki page? Clicking the del link on the attachment page of the wiki page takes me back to the wiki page itself with an error admonition at the top stating: Please use the interactive user interface to use action AttachFile.del! Clear message I've found some posts to the moin-user list mentioning this same error message [1], but no solution / work-around I can use. Any ideas? Originally posted by gvdhoorn on ROS Answers with karma: 86574 on 2014-03-15 Post score: 2
I've encountered this error building ROS on OSX. As far as I can figure, this started happening after I upgraded my developer tools to the latest version (5.1):

    /usr/local/include/boost/atomic/detail/gcc-atomic.hpp:961:64: error: no matching constructor for initialization of 'storage_type' (aka 'boost::atomics::detail::storage128_type')
        explicit base_atomic(value_type const& v) BOOST_NOEXCEPT : v_(0)
                                                                   ^  ~
    /usr/local/include/boost/atomic/detail/gcc-atomic.hpp:932:28: note: candidate constructor (the implicit copy constructor) not viable: no known conversion from 'int' to 'const boost::atomics::detail::storage128_type' for 1st argument
    struct BOOST_ALIGNMENT(16) storage128_type
                               ^
    /usr/local/include/boost/atomic/detail/gcc-atomic.hpp:932:28: note: candidate constructor (the implicit default constructor) not viable: requires 0 arguments, but 1 was provided

It looks like this is a problem with some of the dependencies for TF or TF2, and it is causing a large number of things to fail to build properly.

Originally posted by ahendrix on ROS Answers with karma: 47576 on 2014-03-15 Post score: 1
Hi, I have installed ros-fuerte-desktop-full, and OpenCV was installed with it. OpenCV code works inside any ROS package that I create. However, I sometimes like to work outside ROS and I have Geany installed. I have set the following build command in Geany for compilation:

    g++ "%f" `pkg-config --cflags --libs opencv` -o "%e"

When I try to build, it says:

    Package opencv was not found in the pkg-config search path.
    Perhaps you should add the directory containing `opencv.pc'
    to the PKG_CONFIG_PATH environment variable.
    No package 'opencv' found.

My PKG_CONFIG_PATH already points at the directory containing opencv.pc. Here is the output of some checks:

    # echo $PKG_CONFIG_PATH
    /opt/ros/fuerte/lib/pkgconfig:    // this path contains opencv.pc

    # pkg-config --cflags --libs opencv
    -I/opt/ros/fuerte/include/opencv -I/opt/ros/fuerte/include -L/opt/ros/fuerte/lib -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab

    # pkg-config --libs opencv
    -L/opt/ros/fuerte/lib -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab

Any help would be greatly appreciated.

Originally posted by Latif Anjum on ROS Answers with karma: 79 on 2014-03-15 Post score: 0
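A likely explanation, offered as a guess rather than a confirmed diagnosis: pkg-config clearly works in the terminal above, so a failure inside Geany usually means Geany was started from the desktop menu and never inherited PKG_CONFIG_PATH, which is typically only exported in shells that source the ROS setup files. Two sketches of ways around that, reusing Geany's %f/%e placeholders from the question:

    # Option 1: start Geany from a shell that already has the ROS environment
    source /opt/ros/fuerte/setup.bash
    geany &

    # Option 2: make the build command self-contained, so it does not depend
    # on Geany's inherited environment
    PKG_CONFIG_PATH=/opt/ros/fuerte/lib/pkgconfig g++ "%f" `pkg-config --cflags --libs opencv` -o "%e"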