In recent years, convolutional neural networks (CNNs) have shown promising results for applications in computer-aided diagnostics. Despite their success, CNNs are constrained by the amount of data required to train them. Deep CNNs rely on large datasets, and although data augmentation techniques can be applied when sufficiently large datasets are not available, these techniques introduce biases into data distributions that can yield suboptimal results. **Transfer learning** is a biologically inspired approach to help deep learning models achieve state-of-the-art results without needing vast amounts of training data. Transfer learning is motivated by the fact that humans can intelligently apply knowledge in…

In Introducing the pervert’s dilemma, Carl Öhman proposes a mode of philosophical inquiry to solve, at least partially, what he calls the *pervert’s dilemma*. This problem focuses on the seemingly contradictory moral intuitions we have about sexual fantasies and Deepfake pornography as a society. The dilemma is framed as follows. Consider conditions A and B, where if a person is being fantasized about, A) that person will never find out they are the subject of a fantasy, and B) it is impossible to ever share the contents of the fantasy with anyone. Herein lies the dilemma:

1. Creating pornographic Deepfake…

This commentary will focus on the debate between Hatherley and Ferrario et al., where they discuss the topic of trust in medical AI. In *Limits of trust in medical AI*, Hatherley presents his concerns about a potential deficit of trust in clinical relationships as the epistemic authority of human doctors is displaced by the introduction of AI systems. His argument builds on classical discussions of trust in which a distinction is made between reliability and trust. In these discussions, reliability designates those expectations which are formed descriptively or empirically. This contrasts with trust, which is reserved for those expectations…

This post discusses some of the issues surrounding opacity in AI. In particular, I discuss: **Is AI opaque in a way that other technologies are not? Should we aspire to or enforce some degree of transparency? What are the implications of pushing on with research and development in this space if we do not or cannot resolve this?**

While ‘opacity’ in AI systems still lacks a precise definition (Zachary Lipton argues that no proposed definition has reached working consensus), much of the discussion surrounding opacity has focused on the production of human-interpretable explanations, or *clues*, given…

One way to learn about machine learning is to simply play around with some models. Libraries such as scikit-learn are packed with pre-built models that you can instantiate, fit, and begin predicting with in a few lines of code. Tinkering with the parameters and attributes of these models is a wonderful way to get your feet wet with the code before applying them to a problem you’re trying to solve.

Generating random data from predetermined functions gives you a way to experiment with models in a structured way. For example, you could generate data from a logarithmic function to…
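Here is a minimal sketch of that workflow: generate noisy data from a logarithmic function, then instantiate, fit, and predict with a pre-built scikit-learn model. The model choice, sample size, and noise level here are arbitrary placeholders, not anything prescribed by the post.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Generate data from a predetermined function: y = log(x) plus Gaussian noise.
X = rng.uniform(1, 100, size=(200, 1))
y = np.log(X).ravel() + rng.normal(scale=0.1, size=200)

# Instantiate, fit, and predict in a few lines.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, y)
preds = model.predict([[50.0]])
```

Because you know the generating function, you can sanity-check the model's predictions against `np.log(50)` and see how well it recovered the underlying signal.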

This post builds off of material presented by Andrew Chamberlain in his article “The Linear Algebra View of Least-Squares Regression”. He does an excellent job of presenting linear regression within a linear algebraic model, and here I attempt to fill in some of the missing gaps that I encountered when first reading his post. This post alone will by no means expose you to the core ideas presented by Chamberlain, so I encourage you to read his post first and to use this as a supplementary reference. …
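As a supplementary sketch of the linear-algebra view Chamberlain presents: for an overdetermined system $A\mathbf{x} = \mathbf{b}$, the least-squares solution $\hat{\mathbf{x}}$ satisfies the normal equations $A^{\mathsf{T}}A\hat{\mathbf{x}} = A^{\mathsf{T}}\mathbf{b}$. The three data points below are invented for illustration (roughly on the line $y = 2x + 1$).

```python
import numpy as np

# Three noisy points roughly on the line y = 2x + 1 (made-up data).
xs = np.array([0.0, 1.0, 2.0])
b = np.array([1.1, 2.9, 5.2])

# Design matrix A: a column of ones (intercept) and a column of x values.
A = np.column_stack([np.ones_like(xs), xs])

# Solve the normal equations A^T A x = A^T b for [intercept, slope].
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
intercept, slope = x_hat
```

In practice `np.linalg.lstsq(A, b)` does the same job more robustly, but solving the normal equations directly makes the linear-algebra derivation concrete.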

In this tutorial, we’ll go through how to implement a simple linear regression model using a least squares approach to fit the data. After that, we’ll extend the model to a polynomial regression model in order to capture more complex signals. We’ll be using the mean squared error to measure the quality of fit for every model we generate. You can download all the resources I used to write this article from my GitHub repo 👍.

Let’s start by loading the training data into memory and plotting it as a graph to see what we’re working with. Think of…
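The pipeline described above can be sketched compactly with NumPy. Since the repo's actual training file isn't reproduced here, the snippet synthesizes a cubic signal with noise, then fits a degree-1 and a degree-3 least-squares polynomial and compares their mean squared errors.

```python
import numpy as np

# Synthetic stand-in for the training data: a cubic signal plus noise.
rng = np.random.default_rng(42)
x = np.linspace(-3, 3, 100)
y = 0.5 * x**3 - x + rng.normal(scale=0.5, size=x.size)

def mse(y_true, y_pred):
    """Mean squared error: average squared residual."""
    return np.mean((y_true - y_pred) ** 2)

errors = {}
for degree in (1, 3):
    coeffs = np.polyfit(x, y, degree)   # least-squares polynomial fit
    y_hat = np.polyval(coeffs, x)
    errors[degree] = mse(y, y_hat)
```

The degree-3 model should achieve a noticeably lower MSE here, since the generating signal is itself cubic; on real data, comparing MSEs across degrees like this is how you spot under- and over-fitting.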