```
reply = self._receiveCommandReply(82)
socket.error: [Errno 104] Connection reset by peer
```
```python
class Rover:

    def __init__(self):
        ''' Creates a Rover object that you can communicate with. '''

        self.HOST = '192.168.1.100'
        self.PORT = 80

        TARGET_ID = 'AC13'
        TARGET_PASSWORD = 'AC13'

        self.TREAD_DELAY_SEC = 1.0
        self.KEEPALIVE_PERIOD_SEC = 60

        # Create command socket connection to Rover
        self.commandsock = self._newSocket()

        # Send login request with four arbitrary numbers
        self._sendCommandIntRequest(0, [0, 0, 0, 0])

        # Get login reply
        reply = self._receiveCommandReply(82)
```
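The 82-byte login reply above (and the connection-reset error it can raise) comes down to reading an exact number of bytes from a TCP socket. Below is a minimal sketch of how such a fixed-length read might loop; `receive_exactly` is a hypothetical stand-in for whatever `_receiveCommandReply` does internally, not the Rover library's actual code:

```python
import socket

def receive_exactly(sock, nbytes):
    # Keep calling recv() until exactly nbytes have arrived;
    # a short read mid-stream is normal for TCP.
    chunks = []
    remaining = nbytes
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            raise socket.error('connection closed before %d bytes arrived' % nbytes)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b''.join(chunks)
```

If the Rover resets the connection mid-reply (as in the traceback above), `recv()` raises, which is where the `[Errno 104] Connection reset by peer` surfaces.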
http.request.uri && ip.src == 192.168.1.101
(byte0 << 24) + (byte1 << 16) + (byte2 << 8) + byte3
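The shift-and-add expression above reassembles a big-endian 32-bit integer from four bytes (byte0 is the most significant). As a quick sanity check, wrapped in a hypothetical helper:

```python
def bytes_to_int(b):
    # Big-endian reassembly: byte0 is the most significant byte
    return (b[0] << 24) + (b[1] << 16) + (b[2] << 8) + b[3]

print(bytes_to_int([0, 0, 1, 2]))  # 258  (1*256 + 2)
```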
http://192.168.1.1:80/ptz_control.cgi?param=17432837&command=100&pwd=&user=admin
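The captured control URL above is easy to regenerate for other parameter values. A small sketch (the helper name and argument defaults are my own, mirroring the query-string layout seen in the capture):

```python
def ptz_control_url(host, param, command, user='admin', pwd=''):
    # Mirror the query-string layout seen in the captured request
    return ('http://%s/ptz_control.cgi?param=%d&command=%d&pwd=%s&user=%s'
            % (host, param, command, pwd, user))

print(ptz_control_url('192.168.1.1:80', 17432837, 100))
# http://192.168.1.1:80/ptz_control.cgi?param=17432837&command=100&pwd=&user=admin
```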
```
Processor       : ARM926EJ-S rev 5 (v5l)
BogoMIPS        : 95.02
Features        : swp half fastmult edsp java
CPU implementer : 0x41
CPU architecture: 5TEJ
CPU variant     : 0x0
CPU part        : 0x926
CPU revision    : 5
Hardware        : W55FA93
Revision        : 0000
Serial          : 0000000000000000
```
```
0 = 2
<4>div0 = 3
<4>div0 = 4
<4>Div1 = 0, Div0 = 3
<4>USBH IP Reset
<4>CONFIG_W55FA93_USB_HOST_LIKE_PORT1
<6>w55fa93-ohci w55fa93-ohci: Nuvoton W55FA93 OHCI Host Controller
<6>w55fa93-ohci w55fa93-ohci: new USB bus registered, assigned bus number 1
<6>w55fa93-ohci w55fa93-ohci: irq 18, io mem 0xb1009000
<4>ohci_w55fa93_start
<6>hub 1-0:1.0: USB hub found
<6>hub 1-0:1.0: 2 ports detected
<4>USB device plug in
```
```
sudo cat > /etc/apt/sources.list.d/ros-latest.list
apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net \
    --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
apt-get update
```

- I installed this ROS version: Kinetic Kame, again just like AWS RoboMaker.
- When recording bag files, always record the /tf (transform) topic as well; without transform messages, ROS cannot know how to position your recorded data. Also record any other transform topics, just in case.
- Add --clock to "rosbag play" commands to avoid having ROS ignore all replayed messages because they are too far back in time. --clock makes rosbag the source of the current time.

---

#### Measuring the speed

4 ways to measure the speed:

- Tape measure and timer _(lowest speed is about 0.45 m/s)_
- Use a **strobe app** to measure the rotational speed of the wheels, and multiply by wheel circumference _(should start at about 5.4 Hz, based off the tape measure)_
- Use an **audio spectrum analyzer app** to detect the rotational frequency of the wheels, and multiply by wheel circumference
- Use a **3d scanner** to route a realtime stream of point clouds into ROS as topic messages while the Rover travels through the scanner's field of view. ROS has easy facilities for capturing any stream of messages into a "bag" file for later examination.
  - Then, you can read the bag data into **PCL**, the widely used **Point Cloud Library**, and isolate the Rover by taking differences between the frames.
  - Finally, find the centroid of the remaining points for each frame (the remaining points are the Rover), and easily get the speed, because the frames are timestamped.

---

#### Using a strobe app: Strobily on Android

- Discovered that sending parameter values greater than ABOUT 10 _**did not**_ increase the rotational frequency!
- Rotational frequency ranges from **5.5 Hz** for the parameter value **1**, up to about **9.3 Hz** for the parameter value **10**.
- Wheel circumference is 3.25", giving speeds from **0.45 m/s** to **0.77 m/s**

*(The frame rate of the gif is not high enough to accurately show the spinning)*

---

#### Using a spectrum analyzer app: Spectrum Analyzer on Android

3 snapshots are shown above, but 10 were taken (1 for each parameter value from 1 to 10). The red graph is a rolling ~5 second max; the green is the instantaneous spectrum. Typing the "Peak" numbers into **LibreOffice Calc** gives...

---

#### Using a spectrum analyzer app: massaging the data in LibreOffice

- Apparently, judging by sound, the max rotational frequency seems to be **1.4** times the min frequency.
- But judging by strobe, the max rotational frequency seemed to be **1.7** times the min frequency.
- The parameter values close to 10 produce very similar rotational frequencies.
- Let's assume parameter value 1 causes a rotational frequency of 5.5 Hz, as per the strobe measurement.

---

#### Using a spectrum analyzer app: curve fitting in LibreOffice

Right-clicking on the LibreOffice chart to add a trend line gives the equation shown:

- Rotational frequency as a function of CGI parameter value x
  - **f(x) = 0.97 ln(x) + 5.6** (in Hz) _goes from 5.6 to 7.8 Hz_
- Speed as a function of CGI parameter value x
  - **s(x) = 0.08255 * ( 0.97 ln(x) + 5.6 )** (in m/s) _goes from 0.46 to 0.65 m/s_

---

#### Using a 3d scanner: Tango ROS Streamer App for Android v1.3.1

- Install the Tango ROS Streamer App onto a Tango-capable phone
- Configure it to talk to the roscore at 192.168.1.101 (the laptop)
- On the laptop, start **roscore** and then record the data with rosbag:
  - **rosbag record /tf /tf_static /tango/point_cloud -O rover.bag**
- Message topics /tf and /tf_static are required by rviz to orient the pointclouds

---

#### Using a 3d scanner: primitive rover scan visualization with ROS RViz

- After recording is done, play the recorded data back in a loop (-l) with rosbag:
  - **rosbag play -l --clock rover.bag** _(without --clock, rviz rejects the messages as too old)_
- Visualize it with rviz (subscribe to /tf, /tf_static, and /tango/point_cloud):
  - **rosrun rviz rviz**

---

#### Using a 3d scanner: manipulating the rover scans with Python

Using rosbag_to_ply.py, read the pointcloud messages into Python from the rosbag file and write each one out as an ASCII PLY file that many programs can read:

```python
import sys, os, rosbag
import sensor_msgs.point_cloud2 as pc2

bagfile_path = sys.argv[1]
ply_topic = sys.argv[2]
ply_dir = bagfile_path + ".ply.d"
bag = rosbag.Bag(bagfile_path)
if not os.path.isdir(ply_dir):
    os.mkdir(ply_dir)

# Loop over all messages in the bag for this topic
for msg_topic, msg, msg_time in bag.read_messages(topics=[ply_topic]):
    ply_file_path = os.path.join(ply_dir, "%s.%s.ply" % (msg_topic.replace("/", "_"), str(msg_time)))
    # Make a new PLY file for each message
    with open(ply_file_path, "w") as ply_file:
        ply_file.write("ply\nformat ascii 1.0\nelement vertex %d\n" % msg.width)
        ply_file.write("property float x\nproperty float y\nproperty float z\nproperty float c\n")
        ply_file.write("end_header\n")
        # Loop over all vertices in the message, one output line per vertex
        for vertex in pc2.read_points(msg, skip_nans=True):
            ply_file.write("%f %f %f %f\n" % vertex)
```

Run it like this:

**rosbag_to_ply.py   2018-04-01-13-35-34.bag   /tango/point_cloud**

...to make files named like this, where the number is the pointcloud timestamp:

**2018-04-01-13-35-34.bag.ply.d/_tango_point_cloud.1522026206919802769.ply**

---

#### Using a 3d scanner: lame rover scan visualization with CloudCompare

Open the PLY files in the CloudCompare application _("snap install cloudcompare")_:

It is very slow, and sadly it uses trackball style manipulators. Cumbersome.

- I opened 42 pointclouds with about 20,000 points each.

---

#### Using a 3d scanner: lame rover scan visualization with MeshLab

Open the PLY files in the MeshLab application _("apt-get install meshlab")_:

Also slow, and unfortunately with a trackball style manipulator again.
- Also 42 pointclouds with about 20,000 points each.

---

#### Using a 3d scanner: better rover scan visualization with Blender

Open the PLY files in Blender:

Much faster after loading the same files, plus it defaults to turntable navigation. Tabbing into edit mode and selecting a vertex on the rover shows it is 1.9 meters from the camera, which is correct. But there are too many other vertices besides rover vertices. Let's use the Point Cloud Library to remove them...

---

### PCL - Point Cloud Library

- Like OpenCV but for 3D; started as a part of ROS, now independent
- C++ native API, but a partial Python API is available from Strawlab
- Modules for I/O, filters, feature extraction, visualization, partitioning by octree and kdtree (k-dimensional tree)
- Octrees are great for comparing pointclouds and finding differences - just what we need!
- But the API for converting ROS PointCloud2 messages to PCL Pointclouds is not straightforward in Python

Building the python-pcl library:

- Download release v0.3.0rc1 from https://github.com/strawlab/python-pcl/releases
- tar xzvf python-pcl-0.3.0rc1.tar.gz
- cd python-pcl-0.3.0rc1
- python setup.py build_ext -i
- python setup.py install

---

#### Loading PCL from a PLY file:

```python
>>> import pcl
>>> pc_from_ply = pcl.load('_tango_point_cloud.1522604147114629819.ply')
>>> pc_from_ply.sensor_orientation , pc_from_ply.sensor_origin
(array([ 1.,  0.,  0.,  0.]), array([ 0.,  0.,  0.,  0.], dtype=float32))
>>> pc_from_ply.size , pc_from_ply.height , pc_from_ply.width , pc_from_ply.is_dense
(21040, 1, 21040, False)
>>> pc_from_ply[0]
(-0.8809239864349365, -0.9873319864273071, 2.2988979816436768)
>>> numpy_to_array = pc_from_ply.to_array()
>>> numpy_to_array
array([[-0.86157399, -0.978643  ,  2.27972507],
       ...,
       [ 0.088902  ,  0.081311  ,  0.186314  ]], dtype=float32)
>>> len(numpy_to_array)
21040
>>> numpy_to_array.shape
(21040, 3)
>>> pc_from_array = pcl.PointCloud()
>>> pc_from_array.from_array(numpy_to_array)
>>> pc_from_array.sensor_orientation ,
pc_from_array.sensor_origin
(array([ 1.,  0.,  0.,  0.]), array([ 0.,  0.,  0.,  0.], dtype=float32))
>>> pc_from_array.size , pc_from_array.height , pc_from_array.width , pc_from_array.is_dense
(21040, 1, 21040, True)
>>> pc_from_array[0]
(-0.8615739941596985, -0.978643000125885, 2.2797250747680664)
```

---

#### Loading PCL from ROS messages: C++ has pcl::fromROSMsg(), not Python!

```python
>>> import rosbag, pcl, numpy
>>> bag = rosbag.Bag('2018-04-01-13-35-34.bag') ; get = bag.read_messages( topics=['/tango/point_cloud'])
>>> msg_topic , msg , msg_time = get.next() ; msg.height, msg.width
(1, 21040)
>>> dtype_list = [ (f.name, numpy.float32) for f in msg.fields ] ; dtype_list
[('x', <type 'numpy.float32'>), ('y', <type 'numpy.float32'>), ('z', <type 'numpy.float32'>), ('c', <type 'numpy.float32'>)]
>>> numpy_arr_fromstring_data = numpy.fromstring( msg.data, dtype_list ) ; numpy_arr_fromstring_data.shape
(21040,)
>>> numpy_arr_fromstring_data  # We need to drop the color column 'c'
array([(-0.8615744709968567, -0.9786433577537537, 2.2797253131866455, 0.5714285969734192),
       ...,
       (0.08890211582183838, 0.08131082355976105, 0.18631358444690704, 1.0)],
      dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4'), ('c', '<f4')])
>>> dropped_c = numpy.lib.recfunctions.rec_drop_fields(numpy_arr_fromstring_data, ['c']) ; dropped_c
rec.array([(-0.8615744709968567, -0.9786433577537537, 2.2797253131866455),
       ...,
       (0.08890211582183838, 0.08131082355976105, 0.18631358444690704)],
      dtype=[('x', '<f4'), ('y', '<f4'), ('z', '<f4')])
>>> view_array = dropped_c.view((dropped_c.dtype[0], len(dropped_c.dtype.names))) ; view_array
rec.array([[-0.86157447, -0.97864336,  2.27972531],
       ...,
       [ 0.08890212,  0.08131082,  0.18631358]], dtype=float32)
>>> view_array.shape , view_array[0]  # (N, 3) is the right shape for PCL
((21040, 3), array([-0.86157447, -0.97864336,  2.27972531], dtype=float32))
>>> pc_from_array = pcl.PointCloud() ; pc_from_array.from_array(view_array)
>>> pc_from_array.sensor_orientation , pc_from_array.sensor_origin
(array([ 1.,  0.,  0.,  0.]), array([ 0.,  0.,  0.,  0.], dtype=float32))
>>> pc_from_array.size , pc_from_array.height , pc_from_array.width ,
pc_from_array.is_dense
(21040, 1, 21040, True)
>>> pc_from_array[0]
(-0.8615744709968567, -0.9786433577537537, 2.2797253131866455)
```

---

#### Loading PCL: Which method is better?

- **plys_to_pcls.py** loads from PLY files, and it reports:

```
64 pointclouds with an average of 21037 points per cloud
Original memory size: 62246912 bytes
Final memory size: 84439040 bytes
Memory growth: 22192128 bytes
Elapsed time: 3.30724596977 seconds (346752 bytes per pointcloud, 0.051676 seconds per pointcloud)
Total available RAM: 2625716224 bytes
2 CPUS like this: Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz
```

- **rosbag_to_pcls.py   2018-04-01-13-35-34.bag   /tango/point_cloud** loads from ROS messages, and it reports:

```
64 pointclouds with an average of 21037 points per cloud
Original memory size: 100044800 bytes
Final memory size: 124366848 bytes
Memory growth: 24322048 bytes
Elapsed time: 0.080677986145 seconds (380032 bytes per pointcloud, 0.00126059353352 seconds per pointcloud)
Total available RAM: 2625716224 bytes
2 CPUS like this: Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz
```

- Loading straight from ROS messages is about 40 times faster, and uses nearly the same RAM per pointcloud.

---

#### Using a 3d scanner: Identifying moving vertices between frames

PCL octrees can easily identify points that changed between frames:

PYTHON-PCL-SOURCE-DIR/examples/official/octree/octree_change_detection.py

But adjacent frames that contain slow motion could mis-identify overlapping points from a moving object as stationary, especially if the octree resolution is coarse enough to overlook jitter. We must give the moving object time to get out of the way.

So, compare every frame to the nearest J adjacent frames (not just the immediately adjacent frames), and keep a per-frame, per-point tally of how many times each point was identified as a moving point. At the end, points that were never considered to be moving points are truly stationary points.
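The J-neighbour voting scheme can be sketched as follows. This is a minimal sketch: `moving_indices(a, b)` is a hypothetical placeholder for one octree change detection between two frames (the real comparison would come from PCL), and frames here are plain lists of point tuples:

```python
def tally_moving_points(frames, moving_indices, j=3):
    """For each frame, count how often each point is flagged as moving
    when compared against its J nearest-in-time neighbour frames."""
    tallies = []
    for i, frame in enumerate(frames):
        counts = [0] * len(frame)
        # Compare frame i against up to J frames on each side of it
        for k in range(max(0, i - j), min(len(frames), i + j + 1)):
            if k == i:
                continue
            for idx in moving_indices(frame, frames[k]):
                counts[idx] += 1
        tallies.append(counts)
    return tallies

def cull_stationary(frame, counts):
    # Points never flagged as moving are truly stationary; keep the rest
    return [pt for pt, c in zip(frame, counts) if c > 0]

# Toy demo: three 2-point frames; (0,0,0) never moves, the second point does
frames = [[(0, 0, 0), (1, 1, 1)], [(0, 0, 0), (2, 2, 2)], [(0, 0, 0), (3, 3, 3)]]
def toy_moving_indices(a, b):
    # Stand-in comparison: a point "moved" if it is absent from the other frame
    return [i for i, p in enumerate(a) if p not in b]
print(tally_moving_points(frames, toy_moving_indices, j=2)[0])  # [0, 2]
```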
After the NxJ octree comparisons, cull stationary points from each frame to be left with only moving points in each frame, then calculate centroid deltas across frames.

---

I'm sorry, there should be more, but I got distracted by other fun stuff:

- Pandas, Parquet, AWS Athena, AWS Cloud9 custom dialogs and menus, Jupyter, JupyterHub, Cufflinks, IPython SQLMagic...
- statsmodels and matplotlib
- Facebook's fbprophet forecasting library

... and ...

---

I'm sorry, there should be more, but I got distracted by other fun stuff:

- MayaVI
- Jupyter and Cufflinks

---

I'm sorry, there should be more, but I got distracted by other fun stuff:

Plotly 3D charts of the AWS Athena PrestoDB product

---

I'm sorry, there should be more, but I got distracted by other fun stuff:

A year or so before the recent fantastic JupyterLab git plugins were available, I combined existing classic Jupyter git plugins with my own:

- Resync Git Subtree, Update Git Subtree, and REVERT FILE to Git allow full or partial updates of a user's personal git working tree, which is configured to be at the beginning of the user's PYTHONPATH, allowing users to selectively override shared code with their own test code before checking in changes to the shared code.
- DIFF two notebooks invokes nbdiff on two arbitrary notebooks that were selected with checkmarks in the Jupyter file tree.
- A 3rd history button was added to the standard nbdime buttons in the Jupyter notebook toolbar...

...which leads to the simple gitweb GUI that comes with git:

---

I'm sorry, there should be more, but I got distracted by other fun stuff:

IPyWidgets for N*N-way diffs between SQL schemas or AWS IAM policies across multiple sub-accounts:

- Using Python difflib edit distance (similar to Levenshtein distance) to color the icons on a ramp from 0.0 = green to 1.0 = red
- Clicking on a colored icon populates the diff display below the interactive 3D chart.
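The difflib-based color ramp mentioned above can be sketched like this (the helper name is mine; `SequenceMatcher.ratio()` returns a similarity in [0, 1], so one minus it gives the 0.0 = green to 1.0 = red distance):

```python
import difflib

def edit_distance_ratio(a, b):
    # 0.0 -> identical (green icon) ... 1.0 -> completely different (red icon)
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

print(edit_distance_ratio('s3:GetObject', 's3:GetObject'))  # 0.0
```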
---

I'm sorry, there should be more, but I got distracted by other fun stuff:

Running ETL through Jupyter notebooks in batch mode on AWS spot instances under Airflow:

Using the Papermill library with a custom engine to execute only cells with the tag "batch", each notebook gets executed as an Airflow Task on its own AWS EC2 spot instance.