Detecting Pulsar Stars in Space using Neural Networks

Malhar Bhide
Published in The Startup
11 min read · Jun 5, 2020

Image by NASA

Pulsars are a rare type of Neutron star: they pack more mass than the Sun into a sphere roughly the size of a city. Scientists are intrigued by these space ‘anomalies’ and hope to use Pulsars to study concepts such as extreme states of matter, cosmic distances, and space-time.

Pulsars produce radio emission that is used to detect their presence. As Pulsars rotate, they radiate two narrow beams of light in opposite directions, which produce a detectable pattern of broadband radio emission. Because the beams sweep past periodically, the signal repeats in a regular pattern. Each detected signal is recorded as a ‘candidate’, and each candidate could be a Pulsar; however, most candidates are the result of radio frequency interference (RFI), which makes detecting Pulsars quite a hard task.

In this article, I will show how to create a neural network to detect Pulsars.

1. Importing Libraries and Initializing the Data

Python is an extremely popular programming language for data analysis, partly because of its comprehensive, easy-to-use libraries. Pandas and NumPy help with arrays, data frames, and analyzing data, while Matplotlib and Seaborn help with data visualization. Since we will be working with these libraries, we need to import them before writing any other code.
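A minimal set of imports along these lines would cover everything used in this article:

# Data handling and numerical libraries
import numpy as np
import pandas as pd

# Visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns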

The pulsar_stars.csv file contains 17,898 Pulsar candidates, of which 16,259 are known to have been caused by RFI while the remaining 1,639 are real Pulsar examples. We will use these labels, along with the 8 feature columns, to learn a pattern for recognizing Pulsars.

The target is our label: it contains a 1 for a Pulsar and a 0 otherwise. In this article, we won’t dwell too much on what each feature/column means; the basic idea is that we will use these columns/features to build a model that, given new instances of the same features, predicts whether a candidate is a real Pulsar. The output of df.info() below shows the eight features along with the target.
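As a rough sketch (assuming pulsar_stars.csv sits in the working directory), loading the data and printing its summary could look like this:

# Load the candidate data and print a summary of its columns
df = pd.read_csv('pulsar_stars.csv')
df.info()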

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 17898 entries, 0 to 17897
Data columns (total 9 columns):
 #   Column                                         Non-Null Count   Dtype
---  ------                                         --------------   -----
 0   Mean of the integrated profile                 17898 non-null   float64
 1   Standard deviation of the integrated profile   17898 non-null   float64
 2   Excess kurtosis of the integrated profile      17898 non-null   float64
 3   Skewness of the integrated profile             17898 non-null   float64
 4   Mean of the DM-SNR curve                       17898 non-null   float64
 5   Standard deviation of the DM-SNR curve         17898 non-null   float64
 6   Excess kurtosis of the DM-SNR curve            17898 non-null   float64
 7   Skewness of the DM-SNR curve                   17898 non-null   float64
 8   target_class                                   17898 non-null   int64

2. Splitting and Preprocessing the Data

Before training a model, it is important to split the data into two segments: train and test. The train segment, as the name suggests, is used to train the model and fit its parameters, while the test segment is used to evaluate the model. We will be creating a neural network (NN); NNs update their weights based on the training data only.

train_test_split is a function from the Scikit-Learn library (a library used for Machine Learning) that splits the data into training data and testing data. The arguments to train_test_split specify the X variable (the features), the y variable (the target class), and the test size. In this case, 30% of the data will be randomly selected as test data, while the remaining 70% will serve as training data.
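A sketch of that split, using the 70/30 ratio described above (the random_state value is an illustrative assumption):

from sklearn.model_selection import train_test_split

# Separate the eight feature columns from the target label
X = df.drop('target_class', axis=1).values
y = df['target_class'].values

# Hold out 30% of the candidates as test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)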

After splitting the data, the next step is to scale it. In this case, the MinMaxScaler (from Scikit-Learn) has been used to scale the data; there are other options such as StandardScaler, too. It is good practice to scale data before training, since features on a common range make the optimization much better behaved. Scaling takes two steps: first fit the scaler object on your training set, then transform your training set, testing set, and any other data you wish to feed to the model. It is important to split the data first, as the scaler should be fit on the training data only.
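A minimal sketch of the scaling step, assuming the X_train and X_test arrays produced above:

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

# Fit the scaler on the training data only, then transform both sets
scaled_X_train = scaler.fit_transform(X_train)
scaled_X_test = scaler.transform(X_test)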

3. Creating and Fitting the Neural Network model

Now it’s time to actually create the NN. The two libraries we will use to create, train, and evaluate our NN model are TensorFlow and Keras. We will be using TensorFlow 2.0, which has Keras integrated into it.

Sequential is our NN model class. A Dense layer is a regular, densely connected layer in which each perceptron receives input from all the perceptrons in the previous layer. A Dropout layer, on the other hand, is used to prevent overfitting: it randomly shuts off the output of a set percentage of perceptrons from the previous layer. In this model, there is one input layer, nine hidden layers, and one output layer. The activation function used for the input and hidden layers is the Rectified Linear Unit (ReLU), which simply outputs max(0, x).

For the output layer, the Sigmoid activation function has been used, which outputs a value between 0 and 1; this gives us the probability of the candidate being a Pulsar. The key to a well-performing ANN model is experimentation.
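A sketch of such an architecture is shown below; the layer widths and dropout rates are assumptions for illustration, not the exact values used in the original model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential()

# Input layer: eight features in (layer width and dropout rate are assumptions)
model.add(Dense(32, activation='relu', input_shape=(8,)))
model.add(Dropout(0.2))

# Nine hidden ReLU layers (sizes are illustrative)
for _ in range(9):
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))

# Output layer: a single Sigmoid unit giving the probability of a Pulsar
model.add(Dense(1, activation='sigmoid'))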

The optimizer used here is Adam, which adapts the learning rate while performing Gradient Descent during backpropagation. As for the loss function, since this is a binary classification problem, Binary Cross-Entropy has been used. The idea of a Neural Network is that the loss is computed after the feed-forward pass, and that loss is then minimized with respect to each weight during backpropagation.
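Compiling the model with this optimizer and loss might look like:

# Adam optimizer with Binary Cross-Entropy loss for the two-class problem
model.compile(optimizer='adam', loss='binary_crossentropy')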

I have also used EarlyStopping in order to prevent overfitting. EarlyStopping monitors a certain metric (in this case the loss on the validation/test data) and stops fitting the model when that metric starts changing for the worse (e.g. when the loss starts increasing or the accuracy starts decreasing).
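A sketch of the callback; the patience value (how many epochs to wait for an improvement) is an assumption, not necessarily what was used originally:

from tensorflow.keras.callbacks import EarlyStopping

# Stop training once the validation loss stops improving
# (patience=20 is an assumed value)
early_stop = EarlyStopping(monitor='val_loss', patience=20)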

Finally, fitting the model. When using a Callback such as EarlyStopping, it is good to specify a high number of epochs (one epoch is one full forward and backward pass over the entire training set). To fit the model, we specify the training data, the labels, the number of epochs, the validation data (which we want our Callback to monitor), and finally our Callback(s).
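Putting that together, the fit call could look something like this (variable names follow the sketches above):

# Train for up to 1000 epochs; EarlyStopping will halt training sooner
model.fit(scaled_X_train, y_train,
          epochs=1000,
          validation_data=(scaled_X_test, y_test),
          callbacks=[early_stop])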

Train on 12528 samples, validate on 5370 samples
Epoch 1/1000
12528/12528 [==============================] - 2s 137us/sample - loss: 0.5806 - val_loss: 0.4118
Epoch 2/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.4070 - val_loss: 0.2503
Epoch 3/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.2952 - val_loss: 0.1446
Epoch 4/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.2473 - val_loss: 0.1195
Epoch 5/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.2267 - val_loss: 0.1084
Epoch 6/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.2090 - val_loss: 0.1027
Epoch 7/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1990 - val_loss: 0.1006
Epoch 8/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1961 - val_loss: 0.1025
Epoch 9/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1895 - val_loss: 0.1014
Epoch 10/1000
12528/12528 [==============================] - 1s 54us/sample - loss: 0.1752 - val_loss: 0.0987
Epoch 11/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1814 - val_loss: 0.1027
Epoch 12/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1680 - val_loss: 0.0973
Epoch 13/1000
12528/12528 [==============================] - 1s 54us/sample - loss: 0.1760 - val_loss: 0.1003
Epoch 14/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1658 - val_loss: 0.0990
Epoch 15/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1683 - val_loss: 0.1007
Epoch 16/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1631 - val_loss: 0.0992
Epoch 17/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1628 - val_loss: 0.0996
Epoch 18/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1684 - val_loss: 0.0967
Epoch 19/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1664 - val_loss: 0.0978
Epoch 20/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1564 - val_loss: 0.0957
Epoch 21/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1569 - val_loss: 0.0969
Epoch 22/1000
12528/12528 [==============================] - 1s 54us/sample - loss: 0.1555 - val_loss: 0.0968
Epoch 23/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1594 - val_loss: 0.1002
Epoch 24/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1597 - val_loss: 0.0949
Epoch 25/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1636 - val_loss: 0.0989
Epoch 26/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1492 - val_loss: 0.0944
Epoch 27/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1588 - val_loss: 0.0974
Epoch 28/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1535 - val_loss: 0.0970
Epoch 29/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1602 - val_loss: 0.0993
Epoch 30/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1514 - val_loss: 0.1014
Epoch 31/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1556 - val_loss: 0.0952
Epoch 32/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1541 - val_loss: 0.0940
Epoch 33/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1542 - val_loss: 0.0935
Epoch 34/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1545 - val_loss: 0.0942
Epoch 35/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1498 - val_loss: 0.0923
Epoch 36/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1474 - val_loss: 0.0924
Epoch 37/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1474 - val_loss: 0.0966
Epoch 38/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1446 - val_loss: 0.0947
Epoch 39/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1573 - val_loss: 0.0975
Epoch 40/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1485 - val_loss: 0.0948
Epoch 41/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1514 - val_loss: 0.0923
Epoch 42/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1464 - val_loss: 0.0917
Epoch 43/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1475 - val_loss: 0.0918
Epoch 44/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1432 - val_loss: 0.0944
Epoch 45/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1546 - val_loss: 0.0917
Epoch 46/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1512 - val_loss: 0.0966
Epoch 47/1000
12528/12528 [==============================] - 1s 63us/sample - loss: 0.1530 - val_loss: 0.0928
Epoch 48/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1469 - val_loss: 0.0920
Epoch 49/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1509 - val_loss: 0.0941
Epoch 50/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1514 - val_loss: 0.0929
Epoch 51/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1504 - val_loss: 0.0932
Epoch 52/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1518 - val_loss: 0.0936
Epoch 53/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1475 - val_loss: 0.0927
Epoch 54/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1573 - val_loss: 0.0963
Epoch 55/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1423 - val_loss: 0.0913
Epoch 56/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1484 - val_loss: 0.0919
Epoch 57/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1491 - val_loss: 0.0923
Epoch 58/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1478 - val_loss: 0.0908
Epoch 59/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1488 - val_loss: 0.0939
Epoch 60/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1521 - val_loss: 0.0934
Epoch 61/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1413 - val_loss: 0.0901
Epoch 62/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1506 - val_loss: 0.0922
Epoch 63/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1425 - val_loss: 0.0900
Epoch 64/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1525 - val_loss: 0.0912
Epoch 65/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1520 - val_loss: 0.0940
Epoch 66/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1521 - val_loss: 0.0957
Epoch 67/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1499 - val_loss: 0.0930
Epoch 68/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1466 - val_loss: 0.0919
Epoch 69/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1528 - val_loss: 0.0915
Epoch 70/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1476 - val_loss: 0.0924
Epoch 71/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1477 - val_loss: 0.0911
Epoch 72/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1575 - val_loss: 0.0926
Epoch 73/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1532 - val_loss: 0.0922
Epoch 74/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1525 - val_loss: 0.0909
Epoch 75/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1532 - val_loss: 0.0920
Epoch 76/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1447 - val_loss: 0.0911
Epoch 77/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1529 - val_loss: 0.0895
Epoch 78/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1514 - val_loss: 0.0903
Epoch 79/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1468 - val_loss: 0.0908
Epoch 80/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1452 - val_loss: 0.0891
Epoch 81/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1501 - val_loss: 0.0898
Epoch 82/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1484 - val_loss: 0.0915
Epoch 83/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1579 - val_loss: 0.0924
Epoch 84/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1436 - val_loss: 0.0952
Epoch 85/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1461 - val_loss: 0.0927
Epoch 86/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1401 - val_loss: 0.0896
Epoch 87/1000
12528/12528 [==============================] - 1s 61us/sample - loss: 0.1467 - val_loss: 0.0941
Epoch 88/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1613 - val_loss: 0.0971
Epoch 89/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1451 - val_loss: 0.0896
Epoch 90/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1482 - val_loss: 0.0916
Epoch 91/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1509 - val_loss: 0.0936
Epoch 92/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1513 - val_loss: 0.0928
Epoch 93/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1472 - val_loss: 0.0909
Epoch 94/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1475 - val_loss: 0.0918
Epoch 95/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1494 - val_loss: 0.0949
Epoch 96/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1465 - val_loss: 0.0900
Epoch 97/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1465 - val_loss: 0.0894
Epoch 98/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1404 - val_loss: 0.0897
Epoch 99/1000
12528/12528 [==============================] - 1s 60us/sample - loss: 0.1496 - val_loss: 0.0914
Epoch 100/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1396 - val_loss: 0.0869
Epoch 101/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1502 - val_loss: 0.0894
Epoch 102/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1542 - val_loss: 0.0917
Epoch 103/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1479 - val_loss: 0.0901
Epoch 104/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1493 - val_loss: 0.0891
Epoch 105/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1498 - val_loss: 0.0903
Epoch 106/1000
12528/12528 [==============================] - 1s 62us/sample - loss: 0.1497 - val_loss: 0.0939
Epoch 107/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1479 - val_loss: 0.0908
Epoch 108/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1458 - val_loss: 0.0906
Epoch 109/1000
12528/12528 [==============================] - 1s 62us/sample - loss: 0.1444 - val_loss: 0.0892
Epoch 110/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1439 - val_loss: 0.0928
Epoch 111/1000
12528/12528 [==============================] - 1s 59us/sample - loss: 0.1456 - val_loss: 0.0888
Epoch 112/1000
12528/12528 [==============================] - 1s 58us/sample - loss: 0.1402 - val_loss: 0.0964
Epoch 113/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1520 - val_loss: 0.0981
Epoch 114/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1537 - val_loss: 0.0999
Epoch 115/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1485 - val_loss: 0.0911
Epoch 116/1000
12528/12528 [==============================] - 1s 55us/sample - loss: 0.1550 - val_loss: 0.0906
Epoch 117/1000
12528/12528 [==============================] - 1s 57us/sample - loss: 0.1447 - val_loss: 0.0883
Epoch 118/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1516 - val_loss: 0.0892
Epoch 119/1000
12528/12528 [==============================] - 1s 53us/sample - loss: 0.1533 - val_loss: 0.0896
Epoch 120/1000
12528/12528 [==============================] - 1s 56us/sample - loss: 0.1456 - val_loss: 0.0897

So as you can see, the model finished fitting after only 120 epochs. You can also see how the loss and validation loss (val_loss) generally decrease with each epoch, which shows that our model is adapting to the data. If the validation loss spikes or stops decreasing while the training loss keeps falling, the model is overfitting. Our model is well fit (mainly thanks to the Dropout layers and the Callback), and a best validation loss of about 0.087 is very good.

4. Evaluating the Model

Now, in the final step, we evaluate our model. From here on, we can use the model to detect whether a candidate is a Pulsar, so let’s quantify how well it performs. First, we use the model to predict the labels of the test data (we used this test data as validation data, but the model’s weights were never updated by it). The predict_classes method gives us the predicted class for each candidate.
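A sketch of that prediction step, using the scaled test set from earlier:

# Predict a class label (0 or 1) for every candidate in the scaled test set
predictions = model.predict_classes(scaled_X_test)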

To organize the results, a Pandas DataFrame has been created with two columns, True and Predicted, where True holds the actual labels of the test data and Predicted holds our model’s predictions. The two metrics we will use for evaluation are a Confusion Matrix and a Classification Report.
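A sketch of that evaluation, using functions from Scikit-Learn’s metrics module:

from sklearn.metrics import classification_report, confusion_matrix

# Put the true and predicted labels side by side for inspection
results = pd.DataFrame({'True': y_test, 'Predicted': predictions.ravel()})

print(classification_report(results['True'], results['Predicted']))
print(confusion_matrix(results['True'], results['Predicted']))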

A Classification Report uses the counts from the Confusion Matrix to calculate four metrics: precision, recall, f1-score, and accuracy. The f1-score is the harmonic mean of precision and recall and is usually the most informative single metric for evaluating a binary classification model.
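As a quick sanity check, the f1-score for the Pulsar class can be recomputed from the precision and recall reported below:

# Recompute the f1-score for the Pulsar class (class 1) from the report
precision, recall = 0.96, 0.77
f1 = 2 * precision * recall / (precision + recall)   # ≈ 0.85, matching the report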

Classification Report:

              precision    recall  f1-score   support

           0       0.98      1.00      0.99      4866
           1       0.96      0.77      0.85       504

    accuracy                           0.98      5370
   macro avg       0.97      0.88      0.92      5370
weighted avg       0.97      0.98      0.97      5370

A weighted average f1-score of 0.97 is near perfect. An accuracy of 0.98 is also great; however, accuracy can be misleading when the test labels are imbalanced (as is the case here, since there are significantly fewer real Pulsar candidates).

Confusion Matrix:

[[4851   15]
 [ 118  386]]

The confusion matrix is very easy to interpret: out of the 5370 test candidates, we made 5237 (4851 + 386) correct classifications and 133 (118 + 15) incorrect classifications. That is quite good!

I hope this article demonstrated how powerful a tool Machine Learning/Deep Learning is. We were able to successfully create, train, and test our own Deep Learning model that can automatically detect Pulsars in space.
