Reinforcement learning for segmenting a robot's path to reflect the true distances

Discussion in 'Education' started by user007, Oct 8, 2018.

  1. user007

    user007 Guest

    I have a grid of rectangles acting as blocks. The robot traverses the inter-spaces between consecutive blocks. Sensor data streams in as right and left wheel speeds. From the difference between the left and right wheel speeds, I infer the robot's position and the path it has traveled, obtaining the individual distances for straight, left, and right movements.

    Each distance is a function of the robot's actual speed and the time elapsed before the end of that activity. However, these computed distances don't map well when projected onto the grid layout of the environment; they overflow the boundary limits.
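    For context, the position inference I describe is essentially standard differential-drive odometry. A minimal sketch of one update step (the `wheel_base` value and speed units here are placeholders, not my robot's actual parameters):

    ```python
    import math

    def update_pose(x, y, theta, v_left, v_right, dt, wheel_base):
        """One differential-drive odometry step: advance the pose
        (x, y, heading theta) given left/right wheel speeds over dt."""
        v = (v_left + v_right) / 2.0             # forward speed
        omega = (v_right - v_left) / wheel_base  # turn rate
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        return x, y, theta

    # Equal wheel speeds -> straight motion along the current heading.
    pose = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, dt=1.0, wheel_base=0.2)
    ```

    Errors in `wheel_base` or in the speed calibration accumulate over each step, which is one way the projected path can end up overflowing the grid boundaries.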

    I wanted to know whether I can use RL to force the calculated distances to fit the layout, given certain knowledge (or conditions, if you will): the robot's start and end positions and the inter-space distances.

    If not RL, do you know how I can solve this problem? I suspect my distance-computation function is off, and I'm wondering whether RL can help me figure out the right mapping from sensor data to the traveled path while adhering to the grid layout's dimensions.
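    If the error is a consistent over-estimation, one thing I could try before RL is a plain least-squares calibration: fit a single scale factor mapping my computed distances onto the known inter-space distances. A sketch of that idea (assuming a purely multiplicative error, which may not hold for my data):

    ```python
    def fit_scale(measured, true_dist):
        """Least-squares scale factor k minimizing sum((k*m - t)^2),
        so that k * measured ~= true_dist for paired samples."""
        num = sum(m * t for m, t in zip(measured, true_dist))
        den = sum(m * m for m in measured)
        return num / den

    # E.g. if the odometry consistently reports twice the grid distance,
    # the fitted factor is 0.5.
    k = fit_scale([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
    ```

    If straight, left, and right segments overshoot by different amounts, a separate factor per movement type could be fitted the same way.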
