{"id":871,"date":"2018-09-17T14:51:45","date_gmt":"2018-09-17T14:51:45","guid":{"rendered":"http:\/\/muthu.co\/?p=871"},"modified":"2021-05-24T03:01:17","modified_gmt":"2021-05-24T03:01:17","slug":"linear-regression-using-gradient-descent-algorithm","status":"publish","type":"post","link":"http:\/\/write.muthu.co\/linear-regression-using-gradient-descent-algorithm\/","title":{"rendered":"Linear Regression using Gradient Descent Algorithm"},"content":{"rendered":"\n

Gradient descent is an optimization method used to find the minimum value of a function by iteratively updating the parameters of the function. Parameters refer to coefficients in Linear Regression and weights in Neural Networks.

In a linear regression problem, we find a model that gives an approximate representation of our dataset. In the image below, the red dots denote our dataset, the blue line is what we find using linear regression, and the Euclidean distance between the red dots and the blue line is what we call the cost or the error. You can understand more about linear regression from my previous post.

\"\"<\/a><\/figure><\/div>\n\n\n\n

The equation of a line is given by the formula,

\"\"<\/a><\/figure><\/div>\n\n\n\n

where m and b are parameters. Using the gradient descent algorithm we will iteratively find the equation of the line that produces the least error. Take a look at the image below to understand visually how gradient descent finds the line.

\"\"<\/a><\/figure><\/div>\n\n\n\n

Let (X_i, Y_i) be our dataset, where i is the index and N is the number of data points. The equation of the line that can model our data points is given by,

\"\"<\/a><\/figure><\/div>\n\n\n\n

Our model is not a perfect fit; it has an error, which is the distance between the actual point and the predicted point. The error at each point is given by,

\"\"<\/a><\/figure><\/div>\n\n\n\n

The total error is given by,

\"\"<\/a><\/figure><\/div>\n\n\n\n

A more formal and widely used measure of the total error is the Mean Squared Error (MSE), which averages the squared differences between the actual and predicted values; squaring the differences keeps positive and negative errors from cancelling out. For our analysis we will use the Mean Squared Error.

The MSE equation is given by,

\"\"<\/a><\/figure><\/div>\n\n\n\n

which can also be written as,

\"\"<\/a><\/figure><\/div>\n\n\n\n

As mentioned earlier, the goal of Gradient Descent is to minimize this equation. So the first step in the process is to find the partial derivatives of the MSE equation with respect to m and b.

\"\"<\/a><\/figure><\/div>\n\n\n\n

Now, to find the values of m and b using the gradient descent method, we start with both m and b set to 0. At each iteration we compute the gradients over our data points and adjust m and b by stepping against the gradient, scaled by the learning rate. We keep doing this until we reach the maximum iteration threshold (also called an epoch limit) or until the cost stops decreasing. You can see the code below:

import numpy as np

# X and y are assumed to be NumPy arrays holding the data points
learning_rate = 0.01
iterations = 10000
number_of_datapoints = len(y)

m, b = 0.0, 0.0   # start with both parameters at zero
costs = []        # cost recorded at every iteration

for i in range(iterations):

    # partial derivatives of the MSE with respect to m and b
    dm = 0
    db = 0
    for j in range(len(y)):
        db += -(2 / number_of_datapoints) * (y[j] - ((m * X[j]) + b))
        dm += -(2 / number_of_datapoints) * X[j] * (y[j] - ((m * X[j]) + b))

    # adjust m and b by stepping against the gradient
    m = m - (learning_rate * dm)
    b = b - (learning_rate * db)

    # mean squared error with the updated parameters
    regressed_y = (m * X) + b
    new_cost = np.sum((y - regressed_y) ** 2) / number_of_datapoints
    costs.append(new_cost)

    # stop once the last three costs are (almost) identical, i.e. convergence
    if len(costs) >= 3 and np.isclose(sum(costs[-3:]) / costs[-1], 3):
        print("gradient descent convergence point reached at iteration: " + str(i))
        break
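The loop above assumes X and y already hold the data. One way to try it out (the dataset below is synthetic and purely illustrative, not the one used in the post) is:

import numpy as np

np.random.seed(0)
X = np.linspace(0, 10, 50)                       # 50 evenly spaced inputs
y = 3 * X + 4 + np.random.normal(0, 1, size=50)  # true line y = 3x + 4 plus noise

# running the gradient descent loop above on this data should bring
# m close to 3 and b close to 4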

You can run the entire working source code here:

Gradient descent implementation of Linear Regression

A sample toy dataset implementation gave me the results below. You can see how, around the 100th iteration, the gradient descent started converging towards its minimum.

\"\"<\/a><\/figure><\/div>\n\n\n\n

The graph below shows how the errors came down and became consistent.

\"\"<\/a><\/figure><\/div>\n","protected":false},"excerpt":{"rendered":"
