Introduction
ABCs of data science is intended for anyone who wants to learn more about data science, regardless of skill level. It aims to give readers a high-level overview of a range of data science concepts so that they can explore each topic further on their own.
Please leave a comment on the blog posts if you think I’m missing something or if there is something you would like me to cover. You can also follow the ABCs of data science Twitter account or subscribe to the RSS feed to be notified when new posts are available.
- A is for Artificial Intelligence
- B is for Bias
- C is for Clustering
- D is for Deep Learning
- E is for Embeddings
- F is for F1 Score
- G is for Gradient Descent
- H is for HDBSCAN
- I is for Interpretability
- J is for Jaccard Metric
- K is for K-fold Cross-Validation
- L is for Labelling Data
- M is for Munging Data
- N is for Natural Language Processing
- O is for Outlier Detection
- P is for Pandas
- Q is for Q-learning
- R is for Reproducibility
- S is for Supervised Learning
- T is for Transfer Learning
- U is for UMAP
- V is for Visualization
- W is for Wasserstein GANs
- X is for XGBoost
- Y is for You Should Talk to Your Clients
- Z is for Zero to Done