One of the great things about learning data science at Lambda School is that after all of the sprint challenges, assessments, and code challenges, you still have to prove your acumen by working on a real-world, cross-functional project. They call this portion of the program Lambda Labs, and in my Labs experience, I got to work on a project called Citrics. The idea behind the project was to solve a problem faced by nomads (people who move frequently): the cumbersome process of comparing statistics across cities throughout the US.
Imagine if you were going to live in three different cities over the next three years: how would you choose where to go? You might want to know what rental prices looked like, or which job industry was the most prevalent, or maybe even how “walkable” a city was. The truth is, there are probably lots of things we’d like to know before moving, but we probably don’t have hours and hours to research 10 different websites for these answers. That’s where Citrics comes in. …
Potential steps to resolve the dreaded “Degraded” and “Severe” statuses
AWS offers some useful tools for API deployment, but they certainly don’t make a project immune to API-crashing bugs. This post will walk through some potential issues users could face when managing APIs and databases through Elastic Beanstalk (EB) and RDS, and hopefully provide solutions when your EB Health looks scary!
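For instance, before digging through the EB console, it can help to pull an environment’s health report programmatically. The sketch below is not from the post itself; it’s a minimal example assuming boto3 is installed, enhanced health reporting is enabled on the environment, and a hypothetical environment name of “my-api-env”:

```python
import boto3

# Hypothetical environment name -- replace with your own EB environment
ENV_NAME = "my-api-env"

eb = boto3.client("elasticbeanstalk")

# Pull the enhanced-health summary for the environment.
# Requires enhanced health reporting to be enabled.
health = eb.describe_environment_health(
    EnvironmentName=ENV_NAME,
    AttributeNames=["HealthStatus", "Status", "Causes"],
)

print(health["HealthStatus"])    # e.g. "Degraded" or "Severe"
print(health.get("Causes", []))  # human-readable reasons EB reports
```

The “Causes” field is often the fastest pointer to whatever tipped the environment into “Degraded” or “Severe” territory.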
This post will explore two implementations of the K-Nearest Neighbors algorithm in base Python (without scikit-learn), and compare classification results on the iris dataset with those of a scikit-learn implementation.
Nearest neighbors is a relatively simple but versatile algorithm that can be used for both regression and classification problems. In a classification setting, for example, the idea is to compute the distance from a new observation to each observation in a training set, and return the k “closest” neighbors of the new observation.
The “k” in k-neighbors represents the number of neighbors to retrieve, and this is where the design choices begin: nearest neighbors models vary in how they measure distance, how they generate a prediction, and more. The implementations in this post will explore basic setups and use Euclidean distance as the distance measure. …
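To make the idea concrete, here is a minimal sketch of that kind of base-Python classifier: Euclidean distance plus a majority vote over the k closest training points, compared against scikit-learn on the iris dataset. The function names (euclidean_distance, knn_predict) and the choice of k=5 are illustrative assumptions, not the post’s exact code:

```python
import math
from collections import Counter

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def euclidean_distance(a, b):
    # Straight-line distance between two equal-length feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def knn_predict(X_train, y_train, new_point, k=5):
    # Distance from the new observation to every training observation
    distances = [(euclidean_distance(row, new_point), label)
                 for row, label in zip(X_train, y_train)]
    # Keep the k closest neighbors and let them vote on the label
    k_nearest = sorted(distances, key=lambda pair: pair[0])[:k]
    votes = Counter(label for _, label in k_nearest)
    return votes.most_common(1)[0][0]


iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=42)

# Base-Python predictions
preds = [knn_predict(X_train.tolist(), y_train.tolist(), row, k=5)
         for row in X_test.tolist()]

# scikit-learn predictions for comparison
sk_model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
sk_preds = sk_model.predict(X_test)

print(sum(p == t for p, t in zip(preds, y_test)), "/", len(y_test))
print(sum(sk_preds == y_test), "/", len(y_test))
```

With a plain Euclidean metric and majority voting, the two implementations should agree on nearly every iris test point.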
There’s an old adage among fantasy football players that says “you can’t win a league in the draft, but you can certainly lose it.” For many who play, the draft is the most anticipated, exciting, and enjoyable part of the fantasy football season, but it’s far from a perfect process. Players scour the internet and absorb as much information as possible, including pre-season rankings, projections, and of course, average draft position. The problem is that none of these methods guarantees you’ll get it right on draft day, so maybe there’s a better way. …