In this article, I will demonstrate how to create a self-driving car using image segmentation and OpenCV for navigation control in an outdoor/off-road environment. You might wonder why I chose segmentation over other methods. That's because segmentation can easily map a road, highlighting the object of interest while letting us ignore the rest.
What is a Raspberry Pi?
Raspberry Pi is a credit card-sized mini-computer that allows you to do computing on the go. It has all the features of a desktop/laptop, i.e. Wi-Fi, Bluetooth, USB ports, HDMI port, stereo audio jack, camera interface, Ethernet port, etc. The only drawback of the Raspberry Pi is that it doesn't support CUDA cores (no GPU), but for the price it comes at, I think we can ignore that one drawback.
Setting Up Raspberry Pi OS in Three Quick Steps
Things that you will need:
Installing CUDA and TensorFlow-GPU can be a very challenging task. In this article, I will show you how to install them in a few simple steps.
Before getting started with this installation, we need to make sure that your graphics card is CUDA-enabled. If it is not, using Google Colab can be a great alternative. Check if your graphics card is CUDA-enabled (link): scroll down, click on "CUDA-Enabled GeForce and TITAN Products", and search for your graphics card in the list.
If you don’t know your graphics card specs:
KNN is a supervised learning algorithm used for both regression and classification problems, though it is mostly used for classification. KNN tries to predict the correct class of a test point by calculating the distance between the test point and all the training points. It then selects the k points that are closest to the test point.
2) Now, the k-NN algorithm calculates the distance between the test point and each of the given training points.
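The idea above can be sketched in a few lines with scikit-learn's `KNeighborsClassifier` (the toy dataset here is made up for illustration):

```python
from sklearn.neighbors import KNeighborsClassifier

# Tiny illustrative dataset: two well-separated clusters in 2-D.
X_train = [[1, 1], [1, 2], [2, 1],   # class 0
           [8, 8], [8, 9], [9, 8]]   # class 1
y_train = [0, 0, 0, 1, 1, 1]

# k=3: each prediction is the majority class among the 3 nearest training points.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

print(knn.predict([[2, 2], [9, 9]]))  # -> [0 1]
```

Each test point is assigned the class held by the majority of its three nearest neighbours, which is exactly the distance-then-vote procedure described above.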
A pipeline is a container of steps, used to package a workflow and fit a model as a single object. The steps are stacked on top of one another: each block takes input from the previous block, transforms it, and sends its output to the next block. Simple as that!
Imagine having a pipeline with two steps (i)Normalizer and (ii)LinearRegression model. The data will first be passed to the Normalizer block which will transform the data and send it to the LinearRegression model which will fit the model with the data from the Normalizer Block. The Linear…
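The two-step pipeline described above can be sketched with scikit-learn's `Pipeline` (the step names and toy data are made up for illustration):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.linear_model import LinearRegression

pipe = Pipeline([
    ("normalizer", Normalizer()),        # step 1: transforms the data
    ("regressor", LinearRegression()),   # step 2: fits on the transformed data
])

X = [[1.0, 2.0], [2.0, 4.0], [3.0, 1.0], [4.0, 3.0]]
y = [3.0, 6.0, 4.0, 7.0]

# fit() runs Normalizer.fit_transform, then LinearRegression.fit;
# predict() runs Normalizer.transform, then LinearRegression.predict.
pipe.fit(X, y)
print(pipe.predict([[2.0, 2.0]]))
```

Calling `fit` or `predict` on the pipeline chains the blocks automatically, so the normalization applied at training time is guaranteed to be reapplied at prediction time.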
LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient compared to other boosting frameworks. A model that can be used for comparison is XGBoost, which is also a boosting method and performs exceptionally well against other algorithms.
However, XGBoost is a good choice for datasets with fewer than 10,000 rows; for large datasets, it is not recommended. LightGBM, by contrast, can handle large amounts of data with lower memory usage, and offers parallel and GPU learning, good accuracy, faster training speed, and better efficiency. …
We all know the famous Linear Regression algorithm; it is one of the oldest algorithms used in statistics and other fields. If you are not familiar with Linear Regression, check out this article first, as it will help you understand the concepts of Linear Regression with Gradient Descent much better.
A gradient measures how much the model's error changes as we change a property (the weights). In our case, as the gradient shrinks toward zero, the descent path levels off and we approach the minimum. …
Linear Regression is one of the oldest methods in the field of probability/statistics. It works by fitting a best-fit line between the dependent and independent variables. Let's get familiar with some common terms.
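Before the terminology, here is a minimal NumPy sketch of fitting a line `y = w*x + b` by gradient descent on mean squared error (the data, learning rate, and iteration count are illustrative choices, not prescriptions):

```python
import numpy as np

# Synthetic data from the line y = 3x + 5, plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 5.0 + rng.normal(0, 0.5, 100)

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    error = (w * x + b) - y
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Step opposite the gradient: downhill on the error surface.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges near the true values 3 and 5
```

Each iteration nudges `w` and `b` a small step against the gradient; as the fit improves, the gradients shrink and the updates naturally become smaller.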
In this article, I will demonstrate how to detect and remove outliers from your dataset. What is an outlier? In statistics, an outlier is a data point that differs significantly from other observations. Outlier detection can only be applied to continuous features; it doesn't make much sense to use it on categorical features. Before performing any outlier analysis, make sure there are no missing values in your dataset.
Matplotlib provides many methods to visualize outliers in a dataset, such as box plots and scatter plots (in one and two dimensions).
import matplotlib.pyplot as plt
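A quick sketch combining both views: a box plot to see the outliers, and the interquartile-range (IQR) rule to flag them numerically (the 1.5×IQR threshold is the conventional default, and the data here is made up for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.array([10, 12, 12, 13, 12, 11, 14, 13, 14, 102, 12, 15, 109])

# Box plot: points beyond the whiskers are drawn as outlier markers.
plt.boxplot(data)
plt.savefig("boxplot.png")  # or plt.show() in an interactive session

# IQR rule: flag anything outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]
print(outliers)  # -> [102 109]
```

The same mask can then be inverted, `data[(data >= lower) & (data <= upper)]`, to drop the outliers from the feature.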
One of the first steps in data cleaning is to check for missing values. Missing data in the training dataset can reduce the power of a model or lead to a biased model, because we fail to analyze the relationships between variables correctly. In Python, missing values are denoted as "NaN".
Why are there missing values in my dataset?
MCAR (Missing Completely at Random): In this case, the chance of a missing value occurring in a variable is the same for all observations.
MAR(Missing At Random): This is a case when the variable is missing…
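Whatever the mechanism, the first practical step is simply counting what's missing. A minimal pandas sketch (the column names and values are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [25, np.nan, 31, 45, np.nan],
    "salary": [50_000, 62_000, np.nan, 71_000, 58_000],
    "city":   ["Pune", "Delhi", "Delhi", None, "Mumbai"],
})

# Count of missing values per column -- both NaN and None are counted.
print(df.isna().sum())

# Fraction missing per column, useful when deciding whether to drop
# a column outright or impute its values.
print(df.isna().mean())
```

Columns with only a small fraction missing are usually imputed, while columns that are mostly empty are often dropped; the mechanism (MCAR, MAR, etc.) guides which choice is safe.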