Data scientists require an extensive training set to train an accurate and reliable machine learning model – the bigger and more diverse the training set, the better. However, acquiring such a vast training set can be difficult, especially when sensitive user data is involved. The General Data Protection Regulation (GDPR) and similar regulations may prohibit the gathering and processing of this sensitive data. Privacy-preserving cryptographic protocols and primitives, such as secure multi-party computation (MPC) and fully homomorphic encryption (FHE), may provide a solution to this problem. They allow computations to be performed on data that remains private and unknown to the computing parties, and can therefore be used to classify and train on GDPR-protected data sets. While still considered very inefficient compared to computation on plaintext data, privacy-preserving machine learning using MPC and FHE has been heavily researched in recent years. In this chapter, we introduce MPC and FHE, explain how they can be used and what their limitations are, and describe how state-of-the-art publications apply them to machine learning algorithms.
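To make the notion of "computing on data that remains private" concrete, the following is a minimal sketch of one building block commonly used in MPC, additive secret sharing: a value is split into random shares whose sum reconstructs it, and parties can add two shared values by adding their shares locally, without ever seeing the underlying inputs. The prime modulus and three-party setup here are illustrative choices, not a protocol taken from this chapter.

```python
import secrets

P = 2**61 - 1  # illustrative large prime modulus for the share arithmetic


def share(x: int, n: int) -> list[int]:
    """Split x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)  # last share makes the sum come out to x
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recombine shares; requires all of them, any subset reveals nothing."""
    return sum(shares) % P


a, b = 123, 456
a_shares = share(a, 3)
b_shares = share(b, 3)

# Each of the three parties adds its two shares locally; no single party
# ever learns a, b, or the result until the shares are recombined.
sum_shares = [(s + t) % P for s, t in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == (a + b) % P
```

Secure multiplication and comparisons require additional interaction between the parties, which is one source of the inefficiency mentioned above; FHE achieves a similar effect non-interactively by evaluating circuits directly on ciphertexts.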