cartesian to polar program #101

Open
wants to merge 58 commits into base: hr21-patch-1

Commits
9a957ec
add sentiment analysis notebook
sonniki Oct 6, 2019
f56253b
Add files via upload
pawan-sen Oct 6, 2019
b62e344
Added some advanced python inbuild functions explanation and implemet…
alokrkmv Oct 7, 2019
bc302b8
User name included
Oct 7, 2019
379e030
How to deal with text data.
yashmalpani Oct 7, 2019
5c93ff1
Merge pull request #1 from yashmalpani/yashmalpani-patch-1
yashmalpani Oct 7, 2019
853bd0d
Added demo for basic operations on files
brilam Oct 7, 2019
ca3c986
Merge pull request #68 from hr21/hr21-patch-1
harsh2ai Oct 8, 2019
5155d29
Add an example for parsing arguments
iainbowman Oct 8, 2019
c3b350d
Contributor Update
lakshmikanthmkce Oct 9, 2019
a3d2bf4
Added reference links for ML roadmap
lakshmikanthmkce Oct 9, 2019
67edff0
Merge pull request #71 from Lakshmikanth-mkce/patch-2
harsh2ai Oct 9, 2019
69fc4cf
Merge pull request #70 from Lakshmikanth-mkce/patch-1
harsh2ai Oct 9, 2019
712c809
Merge pull request #69 from iainbowman/args
harsh2ai Oct 9, 2019
a2a1207
Merge pull request #56 from sonniki/master
harsh2ai Oct 9, 2019
d65f839
Merge pull request #62 from pawan-sen/master
harsh2ai Oct 9, 2019
4850bdc
Merge pull request #63 from alokrkmv/master
harsh2ai Oct 9, 2019
413a687
Merge pull request #64 from victox5/master
harsh2ai Oct 9, 2019
2db12e8
Update README.md
davecampbell Oct 9, 2019
e4dcc67
Merge pull request #1 from davecampbell/davecampbell-patch-1
davecampbell Oct 9, 2019
299a995
create rock, paper,scissor program
hyderr Oct 10, 2019
de200de
Merge pull request #73 from hyderr/patch-5
harsh2ai Oct 10, 2019
87808b6
Merge pull request #72 from davecampbell/master
harsh2ai Oct 10, 2019
ae47638
Merge pull request #66 from brilam/master
harsh2ai Oct 10, 2019
739311d
Adding jupyter notebook for basics of XGBoost
Pratham1807 Oct 12, 2019
2482f5d
Update README.md
Apoorva-jain Oct 15, 2019
6e28995
Merge pull request #65 from yashmalpani/master
harsh2ai Oct 16, 2019
ceda287
Merge pull request #89 from Apoorva-jain/master
harsh2ai Oct 16, 2019
f7e5a13
Update README.md
harsh2ai Oct 16, 2019
48d103e
Update README.md
harsh2ai Oct 16, 2019
e7d5193
Merge pull request #90 from hr21/hr21-patch-2
harsh2ai Oct 16, 2019
3c7f809
Ensemble Model Update
sprashanthmohan Oct 17, 2019
e95e9b9
Merge pull request #1 from sprashanthmohan/ensemble-bagging-randomfor…
sprashanthmohan Oct 17, 2019
e27fc15
Merge pull request #91 from sprashanthmohan/master
harsh2ai Nov 1, 2019
1c5c16b
Merge pull request #78 from Pratham1807/master
harsh2ai Nov 1, 2019
369ebe2
Create test.py
jssam Sep 30, 2020
45012ef
Merge pull request #103 from jssam/master
harsh2ai Sep 30, 2020
cbdaf06
Update and rename test.py to next_smallest_palindromic.py
jssam Sep 30, 2020
fc09a63
Implemented KMP algorithm in python
anubhvshrma18 Sep 30, 2020
694617d
Merge pull request #105 from anubhvshrma18/anubhvshrma18-development
harsh2ai Sep 30, 2020
80b71c4
Merge pull request #104 from jssam/master
harsh2ai Sep 30, 2020
1311c12
Create readme.md
harsh2ai Oct 22, 2020
4409227
Merge pull request #106 from harsh2ai/harsh2ai-patch-1
harsh2ai Oct 22, 2020
dcabc68
type 00
harsh2ai Oct 22, 2020
c4551de
Merge pull request #107 from harsh2ai/harsh2ai-patch-2
harsh2ai Oct 22, 2020
64f9c89
basics syntax
harsh2ai Oct 22, 2020
a52f845
Merge pull request #108 from harsh2ai/harsh2ai-patch-3
harsh2ai Oct 22, 2020
d2f3fa8
Add files via upload
harsh2ai Oct 22, 2020
357374b
Merge pull request #109 from harsh2ai/harsh2ai-patch-4
harsh2ai Oct 22, 2020
ec865d5
Add files via upload
harsh2ai Oct 22, 2020
671c606
Add files via upload
harsh2ai Oct 22, 2020
849d629
Merge pull request #110 from harsh2ai/harsh2ai-patch-5
harsh2ai Oct 22, 2020
93c5bcc
Add files via upload
harsh2ai Oct 22, 2020
0c2952c
Merge pull request #111 from harsh2ai/harsh2ai-patch-6
harsh2ai Oct 22, 2020
c6c34a9
Add files via upload
harsh2ai Oct 22, 2020
8bcf421
Merge pull request #112 from harsh2ai/harsh2ai-patch-7
harsh2ai Oct 22, 2020
7784fe2
Bump pyyaml in /sample applications/sample_oo_dashboard
dependabot[bot] Mar 25, 2021
f252b05
Merge pull request #114 from harsh2ai/dependabot/pip/sample-applicati…
harsh2ai Mar 8, 2022
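The PR title mentions a cartesian-to-polar program, although the diff on this page contains a spiral-classification notebook. As an illustration of the conversion the title refers to, a minimal stdlib sketch (the `cartesian_to_polar` helper is hypothetical, not taken from this diff) could look like:

```python
import math

def cartesian_to_polar(x, y):
    """Convert cartesian (x, y) to polar (r, theta), theta in radians."""
    r = math.hypot(x, y)        # radius: sqrt(x**2 + y**2)
    theta = math.atan2(y, x)    # quadrant-aware angle
    return r, theta

r, theta = cartesian_to_polar(3.0, 4.0)
print(r)  # 5.0
```

`math.atan2` is used instead of `math.atan(y / x)` so the angle lands in the correct quadrant and `x == 0` needs no special-casing.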
294 changes: 294 additions & 0 deletions 04-spiral_classification.ipynb
@@ -0,0 +1,294 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"import torch\n",
"from torch import nn, optim\n",
"import math\n",
"from IPython import display"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from res.plot_lib import plot_data, plot_model, set_default"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"set_default()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"seed = 12345\n",
"random.seed(seed)\n",
"torch.manual_seed(seed)\n",
"N = 1000 # num_samples_per_class\n",
"D = 2 # dimensions\n",
"C = 3 # num_classes\n",
"H = 100 # num_hidden_units"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X = torch.zeros(N * C, D).to(device)\n",
"y = torch.zeros(N * C, dtype=torch.long).to(device)\n",
"for c in range(C):\n",
" index = 0\n",
" t = torch.linspace(0, 1, N)\n",
" # When c = 0 and t = 0: start of linspace\n",
" # When c = 0 and t = 1: end of linpace\n",
" # This inner_var is for the formula inside sin() and cos() like sin(inner_var) and cos(inner_Var)\n",
" inner_var = torch.linspace(\n",
" # When t = 0\n",
" (2 * math.pi / C) * (c),\n",
" # When t = 1\n",
" (2 * math.pi / C) * (2 + c),\n",
" N\n",
" ) + torch.randn(N) * 0.2\n",
" \n",
" for ix in range(N * c, N * (c + 1)):\n",
" X[ix] = t[index] * torch.FloatTensor((\n",
" math.sin(inner_var[index]), math.cos(inner_var[index])\n",
" ))\n",
" y[ix] = c\n",
" index += 1\n",
"\n",
"print(\"Shapes:\")\n",
"print(\"X:\", tuple(X.size()))\n",
"print(\"y:\", tuple(y.size()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# visualise the data\n",
"plot_data(X, y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Linear model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learning_rate = 1e-3\n",
"lambda_l2 = 1e-5"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# nn package to create our linear model\n",
"# each Linear module has a weight and bias\n",
"model = nn.Sequential(\n",
" nn.Linear(D, H),\n",
" nn.Linear(H, C)\n",
")\n",
"model.to(device) #Convert to CUDA\n",
"\n",
"# nn package also has different loss functions.\n",
"# we use cross entropy loss for our classification task\n",
"criterion = torch.nn.CrossEntropyLoss()\n",
"\n",
"# we use the optim package to apply\n",
"# stochastic gradient descent for our parameter updates\n",
"optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=lambda_l2) # built-in L2\n",
"\n",
"# Training\n",
"for t in range(1000):\n",
" \n",
" # Feed forward to get the logits\n",
" y_pred = model(X)\n",
" \n",
" # Compute the loss and accuracy\n",
" loss = criterion(y_pred, y)\n",
" score, predicted = torch.max(y_pred, 1)\n",
" acc = (y == predicted).sum().float() / len(y)\n",
" print(\"[EPOCH]: %i, [LOSS]: %.6f, [ACCURACY]: %.3f\" % (t, loss.item(), acc))\n",
" display.clear_output(wait=True)\n",
" \n",
" # zero the gradients before running\n",
" # the backward pass.\n",
" optimizer.zero_grad()\n",
" \n",
" # Backward pass to compute the gradient\n",
" # of loss w.r.t our learnable params. \n",
" loss.backward()\n",
" \n",
" # Update params\n",
" optimizer.step()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot trained model\n",
"print(model)\n",
"plot_model(X, y, model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Two-layered network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learning_rate = 1e-3\n",
"lambda_l2 = 1e-5"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# nn package to create our linear model\n",
"# each Linear module has a weight and bias\n",
"\n",
"model = nn.Sequential(\n",
" nn.Linear(D, H),\n",
" nn.ReLU(),\n",
" nn.Linear(H, C)\n",
")\n",
"model.to(device)\n",
"\n",
"# nn package also has different loss functions.\n",
"# we use cross entropy loss for our classification task\n",
"criterion = torch.nn.CrossEntropyLoss()\n",
"\n",
"# we use the optim package to apply\n",
"# ADAM for our parameter updates\n",
"optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=lambda_l2) # built-in L2\n",
"\n",
"# e = 1. # plotting purpose\n",
"\n",
"# Training\n",
"for t in range(1000):\n",
" \n",
" # Feed forward to get the logits\n",
" y_pred = model(X)\n",
" \n",
" # Compute the loss and accuracy\n",
" loss = criterion(y_pred, y)\n",
" score, predicted = torch.max(y_pred, 1)\n",
" acc = (y == predicted).sum().float() / len(y)\n",
" print(\"[EPOCH]: %i, [LOSS]: %.6f, [ACCURACY]: %.3f\" % (t, loss.item(), acc))\n",
" display.clear_output(wait=True)\n",
" \n",
" # zero the gradients before running\n",
" # the backward pass.\n",
" optimizer.zero_grad()\n",
" \n",
" # Backward pass to compute the gradient\n",
" # of loss w.r.t our learnable params. \n",
" loss.backward()\n",
" \n",
" # Update params\n",
" optimizer.step()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot trained model\n",
"print(model)\n",
"plot_model(X, y, model)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:dl-minicourse] *",
"language": "python",
"name": "conda-env-dl-minicourse-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
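The data-generation cell above builds C interleaved spirals: for class c, the radius t runs from 0 to 1 while the angle sweeps from (2π/C)·c to (2π/C)·(c + 2), with Gaussian noise added to the angle. A dependency-free sketch of the same construction (the `make_spirals` name and plain-list output are assumptions for illustration, not part of the notebook, which uses torch tensors) is:

```python
import math
import random

def make_spirals(n=100, classes=3, noise=0.2, seed=12345):
    """Pure-Python version of the notebook's spiral generator.

    Each class c traces an arc: radius t in [0, 1], angle swept from
    (2*pi/classes)*c to (2*pi/classes)*(c + 2), plus Gaussian angle noise.
    """
    rng = random.Random(seed)
    X, y = [], []
    for c in range(classes):
        for i in range(n):
            t = i / (n - 1)                      # radius grows linearly
            theta = (2 * math.pi / classes) * (c + 2 * t) + rng.gauss(0, noise)
            X.append((t * math.sin(theta), t * math.cos(theta)))
            y.append(c)                          # class label
    return X, y

X, y = make_spirals()
print(len(X), len(y))  # 300 300
```

Because each point is (t·sin θ, t·cos θ), its distance from the origin is exactly t, so the noise perturbs only the angle; this is why a purely linear model struggles on this data while the ReLU network in the second training cell can separate the arms.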