(Established under Karnataka Act No. 16 of 2013) 100-ft Ring Road, Bengaluru – 560 085, Karnataka, India
Dissertation on
“Autonomous Mobile Payload Carrier Robot”
Submitted by
Akshay S Nadig (01FB15EEE002)
Deekshitharai K (01FB15EEE013)
Tejas N (01FB15EEE043)
JAN - MAY 2019
Under the Guidance of
Dr. Venkatarangan M.J Associate Professor Department of Electrical and Electronics PES University, RR campus Bengaluru
FACULTY OF ENGINEERING
DEPARTMENT OF ELECTRICAL AND ELECTRONICS
B.TECH IN EEE
Dept. of Electrical & Electronics Engineering, PES University Page II
FACULTY OF ENGINEERING
DEPARTMENT OF ELECTRICAL AND ELECTRONICS
BACHELOR OF TECHNOLOGY
CERTIFICATE
This is to certify that the Dissertation entitled
“Autonomous Mobile Payload Carrier Robot”
is a bona fide work carried out by
Akshay S Nadig (01FB15EEE002)
Deekshitharai K (01FB15EEE013)
Tejas N (01FB15EEE043)
in partial fulfillment for the completion of 8th semester course work in the Program of Study
B.Tech in Electrical and Electronics Engineering (Embedded Systems), under the rules and
regulations of PES University, Bengaluru, during the period Jan. 2019 – May 2019. It is certified
that all corrections/suggestions indicated for internal assessment have been incorporated in the
report. The dissertation has been approved as it satisfies the 8th semester academic requirements
in respect of project work.
Name of Examiner: Signature with date
(Signature with date & Seal)
Internal Guide
Dr. Venkatarangan M.J
Associate Professor, Electrical and Electronics Engineering

(Signature with date & Seal)
Dr. B. K Keshavan
Chairperson, Electrical and Electronics Engineering

(Signature with date & Seal)
Dr. B. K Keshavan
Dean of Faculty, PES University
ACKNOWLEDGMENT
We extend our deep sense of gratitude and sincere thanks to Dr. M. R. Doreswamy (Founder Chancellor, PES University), Prof. Jawahar Doreswamy (Pro-Chancellor, PES University) and Dr. K. N. Balasubramanya Murthy (Vice Chancellor, PES University) for giving us the opportunity to be students of this reputed institution. It is our privilege to thank Dr. B. K Keshavan, Dean and Chairperson of the Department of Electrical and Electronics Engineering, for his support and guidance during our project. We express our sincere gratitude to our guide, Dr. Venkatarangan M.J, for his valuable technical and logical guidance and suggestions throughout the project work. We also express our gratitude to all our faculty members, parents and fellow students who have helped us carry out this work.

AKSHAY S NADIG
DEEKSHITHARAI K
TEJAS N
ABSTRACT
Autonomous robots have been deployed in many application domains where the same tasks must be repeated continuously. Doing so reduces training and deployment times, since a robot can function 24/7 with a reduction in human resources, and brings many other such benefits.
This project work is an attempt to develop an autonomous navigation bot capable of delivering a payload from an intended source to a desired destination after it has learnt the traversal path through manual inputs. The robot learns the required fixed path in a training mode driven by manual inputs; in repeat mode, it traverses the same path. If an anomaly appears on the decided path, the robot should sense the obstacle and take the necessary action; this part is still in progress. As part of the work, a robot was built on an existing chassis, with the hardware design based on an Arduino, motors with built-in encoders, an IMU and ultrasound sensors. The robot includes a Bluetooth module interface so that it can be remotely controlled in learn mode through a mobile application. All the needed hardware and software components are integrated to achieve the targeted applications, learn-and-repeat being one of them.
The robot was tested on different use cases, such as traversing between rooms on the same floor of a building. The bot was made to repeat a single taught path continuously, and the observations were recorded and tabulated. From a series of tests we can infer that the bot repeats the path, and on each completed iteration an error of a few centimetres was observed in the positions of the right and left wheels with respect to the reference taught path.
- ABSTRACT
- 1 INTRODUCTION
- 1.1 BACKGROUND
- 1.2 MOTIVATION/PROBLEM STATEMENT
- 1.3 OBJECTIVES
- 1.4 TECHNIQUES FOR AUTONOMOUS NAVIGATION
- 1.4.1 LEARN AND REPEAT
- 1.4.2 OBSTACLE AVOIDANCE
- 1.5 REQUIREMENTS FOR PROJECT
- 2 LITERATURE SURVEY
- 2.1 AN AUTONOMOUS VISION-BASED MOBILE ROBOT
- 2.2 VISION-BASED NAVIGATION BY A MOBILE ROBOT WITH OBSTACLE AVOIDANCE USING SINGLE-CAMERA VISION AND ULTRASONIC SENSING
- 3 HARDWARE AND SOFTWARE ARCHITECTURE
- 3.1 HARDWARE ARCHITECTURE AND DESIGN
- 3.1.1 BLOCK DIAGRAM
- 3.1.2 CHASSIS
- 3.1.3 LOAD AND TORQUE
- 3.1.4 MOTORS AND MOTOR DRIVER
- 3.1.5 SENSORS
- 3.1.6 POWER CONSUMPTION
- 3.2 SOFTWARE ARCHITECTURE AND DESIGN
- 3.2.1 SOFTWARE ARCHITECTURE DIAGRAM
- 3.2.2 STATE DIAGRAM
- 3.2.3 SEQUENCE DIAGRAM
- 3.2.4 BLUETOOTH MOBILE APP
- 4 SOFTWARE EXECUTION ARCHITECTURE
- 4.1 CONTROL LOOPS – PID - FLOWCHART
- 4.2 INTERRUPTS AND TIMERS
- 4.3 LEARN AND REPEAT IMPLEMENTATION - ALGORITHM
- 4.3.1 TIME BASED LEARN AND REPEAT
- 4.3.2 ENCODER BASED LEARN AND REPEAT
- 4.3.3 ENCODER AND COMPASS BASED LEARN AND REPEAT
- 4.3.4 ENCODER AND IMU BASED LEARN AND REPEAT
- 4.4 OBSTACLE AVOIDANCE USING ULTRASONIC SENSOR
- 4.5 CAMERA BASED OBSTACLE DETECTION
- 5 RESULT
- 5.1 LEARNT VS REPEAT GRAPHS
- 5.1.1 ROUTE 1 - ANALOG TO OFFICE
- 5.1.2 ROUTE 2 - DIGITAL LAB TO AEC LAB
- 5.1.3 ROUTE 3 - CENTRE OF EXCELLENCE ROOM TO DIGITAL LAB
- 5.2 PID GRAPHS
- 5.3 TABULATION
- 6 CHALLENGES/ PROBLEMS ENCOUNTERED
- 7 CONCLUSION AND FUTURE WORK
- 7.1 CONCLUSION
- 7.2 FUTURE WORK
- 8 BIBLIOGRAPHY
TABLE OF FIGURES
- FIGURE 1-1 AMAZON WAREHOUSE
- FIGURE 2-1 THE TAUGHT-REFERENCE PATH AND REPEATED PATH AND POSITIONS AT WHICH CUES ARE PLACED
- FIGURE 2-2 FLOW OF THE SELF-LOCALIZATION PROCEDURE
- FIGURE 2-3 PROCESS OF POSITIONAL ALIGNMENT OF THE BOT
- FIGURE 2-4 PROCESS OF OBSTACLE AVOIDANCE
- FIGURE 3-1 BLOCK DIAGRAM
- FIGURE 3-2 HARDWARE COMPONENTS
- FIGURE 3-3 PICTURE OF THE ROBOT BUILT
- FIGURE 3-4 PICTORIAL REPRESENTATION OF COMPONENTS OF LOAD
- FIGURE 3-5 MOTOR WITH QUADRATURE ENCODER
- FIGURE 3-6 GRAPH OF OUTPUT
- FIGURE 3-9 CONNECTION REPRESENTATION
- FIGURE 3-10 MOTOR DRIVER
- FIGURE 3-11 HC-05
- FIGURE 3-12 MPU6050
- FIGURE 3-13 ULTRASONIC SENSOR
- FIGURE 3-14 CAMERA
- FIGURE 3-15 CURRENT USAGE OF ARDUINO + MOTOR
- FIGURE 3-16 POWER CONSUMPTION OF EACH COMPONENT
- FIGURE 3-17 SOFTWARE ARCHITECTURE
- FIGURE 3-18 DIRECTION DIAGRAM
- FIGURE 3-19 USER INTERFACE OF BLUETOOTH APP
- FIGURE 4-1 PID FLOWCHART
- FIGURE 4-2 GRAPH OF WORKING OF ULTRASONIC SENSOR
- FIGURE 4-3 ULTRASONIC SENSORS
- FIGURE 4-4 IC
- FIGURE 5-1 ROUTE-ANALOG LAB TO OFFICE
- FIGURE 5-2 ROUTE-DIGITAL LAB TO AEC LAB
- FIGURE 5-3 ROUTE-COE TO DIGITAL LAB
- FIGURE 5-4 ZERO SETTLING GRAPH
- FIGURE 5-5 PID SETTLING WHICH MADE ROBOT GO STRAIGHT
1 INTRODUCTION
1.1 BACKGROUND
Reliable transportation of goods has been a crucial practice since ancient times. Civilizations
have flourished and declined according to their ability to transport goods. Even in modern times,
the development and advancement of a nation rely greatly on its transport system.
Thus, over time, methods were developed to make this process easier. With the advent of
automation, transferring loads from one place to another with as little human interference as
possible has made this crucial task much simpler. The resulting benefits, such as shorter times,
faster handling and less manual work, have made life easier.
1.2 MOTIVATION/PROBLEM STATEMENT
In today’s world, a major challenge is to reduce the workload on humans by using automation
involving embedded systems with AI. Huge e-commerce and retail giants process hundreds of
item shipments every second during busy times, and employees have to operate with machine-like
efficiency to keep things moving out of the door.

They need the most efficient way of ensuring that the work gets done, and an automated system
ensures greater levels of objectivity.

Amazon, for example, uses robots to carry stock around its expansive warehouse floors and group
together all the individual items needed for a specific order.
The main objective of the project is to build an autonomous navigation bot capable of
delivering a payload from an intended source to its destination without any manual inputs.
The robot learns the required fixed path once, through manual intervention, and then
repeats it.
Figure 1-1 Amazon Warehouse
1.4.2 OBSTACLE AVOIDANCE
In the growing field of unmanned vehicles, obstacle avoidance has been a serious and
demanding constraint. Any algorithm addressing it is expected to be real-time and robust,
because in real-world situations obstacles are bound to appear on the initially learnt route,
and any deployable prototype is expected to handle them.

In our project, we intend to approach this issue by fusing data from various sensors, without
using machine learning. The bot senses the presence of an anomaly and takes the
necessary action.
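As a rough illustration of this sensing step, the distance-and-threshold logic can be sketched in Python. The 0.5 m stop threshold and the function names are assumptions for illustration, not values from the report; on the robot, the echo time would come from the ultrasonic sensor and the reaction would be a motor command.

```python
SPEED_OF_SOUND_M_S = 343.0  # in dry air at roughly 20 degrees C

def echo_to_distance_m(echo_time_s: float) -> float:
    """Convert a round-trip ultrasonic echo time to a one-way distance."""
    return SPEED_OF_SOUND_M_S * echo_time_s / 2.0

def obstacle_detected(echo_time_s: float, threshold_m: float = 0.5) -> bool:
    """Flag an obstacle when the measured distance falls below the stop threshold."""
    return echo_to_distance_m(echo_time_s) < threshold_m
```

A 2 ms echo, for example, corresponds to an object about 34 cm away, which such a threshold would treat as an obstacle.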
1.5 REQUIREMENTS FOR PROJECT
The robot has been built to carry a load of approximately 25 kg. The use cases for learning
are:
1) Users will be able to use the mobile application to drive the robot from source to
destination.
2) They will be able to store the routes.
3) Users will get controls for manoeuvring the robot forward, backward, left
and right.
At the end of learning, users will be able to make the robot repeat the route using the mobile
application.
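The learn/store/repeat use cases above can be sketched as a minimal route recorder in Python. The `Command` type and route names here are hypothetical; on the actual robot, these commands arrive over Bluetooth and are replayed to the motor driver.

```python
from dataclasses import dataclass

@dataclass
class Command:
    action: str       # 'forward', 'backward', 'left' or 'right'
    duration_s: float

class RouteRecorder:
    """Stores manual drive commands in learn mode and replays them in repeat mode."""

    def __init__(self):
        self.routes = {}

    def learn(self, name, commands):
        # Store the taught sequence under a user-chosen route name.
        self.routes[name] = list(commands)

    def repeat(self, name):
        # On the real robot each command would go to the motor driver;
        # here we simply yield the stored sequence in order.
        yield from self.routes[name]
```

This captures only the time-based variant of learn and repeat; the encoder- and IMU-based variants described later replace the durations with measured distances and headings.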
2 LITERATURE SURVEY
2.1 AN AUTONOMOUS VISION-BASED MOBILE ROBOT:
By E. T. Baumgartner and Steven B. Skaar, Dept. of Mechanical Engineering & Engineering
Mechanics, Michigan Technological University, Houghton, MI, USA
[1] Baumgartner, E. T., & Skaar, S. B. (1994). An autonomous vision-based mobile robot. IEEE Transactions on Automatic Control, 39(3), 493-502.
This paper describes the theoretical development and experimental implementation of a
complete navigation system for an autonomous mobile robot in structured environments.
Estimates of the vehicle’s position and orientation are based on the rapid observation of visual
cues located at discrete positions within the environment. An extended Kalman filter combines
these visual observations with sensed wheel rotations to continuously produce optimal
estimates.
Reference paths are “taught” by manually leading the vehicle through the desired path in a
manner similar to the teaching of industrial holonomic robots.
Sensors used: Optical shaft encoders and Camera
Here, optical shaft encoders output the number of times each wheel has rotated; from these
counts the robot's position is estimated, a technique called dead reckoning.
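A minimal sketch of such dead reckoning for a differential-drive robot, in Python (the geometry parameters are illustrative, not taken from the paper):

```python
import math

def dead_reckon(x, y, heading, left_ticks, right_ticks,
                ticks_per_rev, wheel_radius_m, wheel_base_m):
    """Update a differential-drive pose estimate from encoder tick counts
    accumulated since the last update."""
    circ = 2.0 * math.pi * wheel_radius_m
    d_left = left_ticks / ticks_per_rev * circ    # distance rolled by left wheel
    d_right = right_ticks / ticks_per_rev * circ  # distance rolled by right wheel
    d_centre = (d_left + d_right) / 2.0           # distance moved by the chassis centre
    d_theta = (d_right - d_left) / wheel_base_m   # change in heading (rad)
    heading += d_theta
    x += d_centre * math.cos(heading)
    y += d_centre * math.sin(heading)
    return x, y, heading
```

As the survey notes, any wheel slippage silently corrupts these estimates, which is why the paper fuses them with visual cues.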
A single camera visually detects ring-shaped cues and updates the position and orientation of
the robot, which are otherwise based on encoder values alone. The extended Kalman filter is
applied to these visual observations to obtain minimum-variance estimates of position and
orientation.
LEARNING MODE:
The bot is manually led through the desired path to make it learn. Since optical shaft
encoders are mounted on both wheels, the number of wheel rotations can be counted and used
to estimate the position of the bot. However, this does not take wheel slippage into account.
Figure 2-1 The taught-reference path and repeated path and positions at which cues are placed
Critique:
- Learn and repeat has been implemented using encoder data and an extended Kalman
filter, which is a labour-intensive process.
- No sensor such as an IMU is used to obtain feedback in path-deviation scenarios.
- Obstacle avoidance, an essential feature of a mobile robot, is not implemented.
- Cues have to be manually placed at discrete positions, which is not preferred.
2.2 VISION-BASED NAVIGATION BY A MOBILE ROBOT WITH OBSTACLE AVOIDANCE USING SINGLE-CAMERA VISION AND ULTRASONIC SENSING.
By Akihisa Ohya, Akio Kosaka, and Avinash Kak
[2] Ohya, A., Kosaka, A., & Kak, A. (1998). Vision-based navigation by a mobile robot
with obstacle avoidance using single-camera vision and ultrasonic sensing. IEEE
Transactions on Robotics and Automation, 14(6), 969-978.
This paper describes a vision-based navigation method in an indoor environment for an
autonomous mobile robot which can avoid obstacles.
In this method, the self-localization of the robot is done with a model-based vision system, and
nonstop navigation is realized by a retroactive position correction system.
Stationary obstacles are avoided with single-camera vision and moving obstacles are detected
with ultrasonic sensors.
Estimates of the vehicle’s position and orientation are vision-based; the mobile robot is
capable of simultaneously navigating and avoiding stationary obstacles using monocular
camera images.
The figure below shows the flow of the self-localization procedure. First, the robot renders an
expectation image using its current best estimate of where its present location is. Next, the
model edges extracted from the expectation image are compared and matched with the edges
extracted from the camera image through an extended Kalman filter. The Kalman filter
automatically then yields updated values for the location and the orientation of the robot.
Figure 2-2 Flow of the self-localization procedure
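In a much simplified, one-dimensional form (not the paper's actual multi-state EKF over pose and edge features), the correction step that blends the predicted location with the vision measurement can be sketched as:

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman correction step: blend a predicted state x_pred
    (variance p_pred) with a measurement z (variance r)."""
    k = p_pred / (p_pred + r)          # Kalman gain: how much to trust the measurement
    x_new = x_pred + k * (z - x_pred)  # corrected estimate
    p_new = (1.0 - k) * p_pred         # uncertainty shrinks after the update
    return x_new, p_new
```

With equal prediction and measurement variances, the update lands halfway between the two, which matches the intuition that the filter weights each source by its confidence.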
To illustrate the process of self-localization, Fig. 2-3(a) below shows a typical camera
image. Shown in Fig. 2-3(b) is an expectation image rendered from the wire-frame model of
the environment; this expectation map is overlaid on the camera image. As we can see, the
discrepancy between the various edges in the underlying camera image and the highlighted
edges in the expectation map is caused by the error between where the robot actually is and
where the robot thinks it is.
Shown in Fig. 2-3(c) are the edges extracted from the camera image. Shown in Fig. 2-3(d) is a
re-projection into the camera frame of those model edges that were successfully used for self-
localization. The fact that these re-projected edges fall exactly where they should is a testimony
to the accuracy of the result produced by the Kalman filter. Although not discernible, shown in
Fig. 2-3(e) are two small icons, in close proximity to each other, the bright one corresponding
to the updated position and orientation of the robot and the darkened one corresponding to the
old position and orientation. To help the reader discern these two icons, Fig. 2-3(f) shows an
enlarged version of the image in Fig. 2-3(e). By repeating this self-localization, the robot can
correct its position error and navigate autonomously toward its destination.
Figure 2-4 Process of obstacle avoidance
3 HARDWARE AND SOFTWARE ARCHITECTURE
3.1 HARDWARE ARCHITECTURE AND DESIGN
3.1.1 BLOCK DIAGRAM
Figure 3-1 Block Diagram
HARDWARE
1. Chassis
2. Motors with quadrature encoders
3. Motor driver
4. IMU
5. Ultrasound sensor
Figure 3-2 Hardware Components
Where,
T = torque of each motor in kg-m
m = total mass (includes chassis) in kg
θ = angle of inclination in degrees
N = number of motors
n = efficiency of motors
r = radius of the wheel in metres
We know that the torque required is T = f·r.
Balancing the forces along the direction of motion gives ∑Fx = f − m·g·sin(θ) = m·a, so f = m·(a + g·sin(θ)).
Considering ‘N’-wheel drive and an efficiency ‘n’, we get the final equation,
T = m·r·(a + g·sin(θ)) / (N·n)
Taking,
r = 0.15 m
a = 0.015 m/s²
g = 9.8 m/s²
N = 2
n = 0.
m = 35 kg (10 kg chassis + 25 kg payload)
we get the torque of each motor to be 6.7 kg-cm.
From this we can conclude that if we choose two motors with a rated torque of 6.7 kg-cm each,
our mobile robot will be able to move a payload of about 25 kg with an acceleration of 0.015 m/s².
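The sizing formula above can be checked numerically. The Python sketch below converts the force to kgf so the result comes out in kg-m, matching the report's kg-cm convention; the example values in the test are purely illustrative, since some figures in the source (the incline angle and efficiency) are truncated.

```python
import math

def motor_torque_kg_m(mass_kg, wheel_radius_m, accel_m_s2,
                      incline_deg, n_motors, efficiency, g=9.8):
    """Required torque per motor, in kg-m, from
    T = m * r * (a + g*sin(theta)) / (N * n),
    dividing the force by g to express it in kgf."""
    force_kgf = mass_kg * (accel_m_s2
                           + g * math.sin(math.radians(incline_deg))) / g
    return force_kgf * wheel_radius_m / (n_motors * efficiency)
```

Multiplying the result by 100 gives kg-cm, the unit used for the motor rating.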
3.1.4 MOTORS AND MOTOR DRIVER
3.1.4.1 MOTORS
Figure 3-5 Motor with Quadrature Encoder
3.1.4.1.1 MOTOR SPECIFICATIONS
Rated torque: 6.73 kg-cm
Rated speed: 60 rpm
Input voltage: 12 V
Rated current: 0.9 A
Rated power: 7 W
For further details on the motor, refer to item vii in the Bibliography.
3.1.4.1.2 QUADRATURE ENCODER
The above-mentioned motor comes with a quadrature encoder. It takes a supply voltage
between 3 V and 20 V and draws a supply current of about 10 mA.
The quadrature encoder works on the Hall effect: the output voltage is directly proportional
to the strength of the magnetic field through the sensor.
The sensor is an analog transducer that directly returns a proportional voltage. Knowing the
magnetic field intensity, the distance from the plate can be determined, and using a
collection of sensors the distance from the magnet can be inferred.
Figure 3-6 Graph of Output
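Decoding the two encoder channels (A and B), which step through the Gray-code sequence 00 → 01 → 11 → 10, can be sketched as below. This decoding table is the standard scheme for quadrature encoders rather than something taken from the report, and on the Arduino it would run inside pin-change interrupt handlers.

```python
# Valid clockwise transitions of the (A, B) channel pair; each step moves
# the state one position around the Gray-code cycle 00 -> 01 -> 11 -> 10.
_CW_NEXT = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}

def quadrature_step(prev, curr):
    """Return +1 for one tick forward, -1 for one tick backward, and
    0 for no change or an invalid (skipped) transition."""
    if prev == curr:
        return 0
    if _CW_NEXT[prev] == curr:
        return 1
    if _CW_NEXT[curr] == prev:
        return -1
    return 0  # both channels changed at once: illegal, so ignore it
```

Summing these steps over time yields the signed tick count from which wheel rotation, and hence dead-reckoned position, is computed.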