Solved – Can we make the machine learn gates (OR, AND, XOR, etc.)?

Below is the NAND gate truth table; the dataset has two independent features, A and B, and one dependent feature, Y.

Can we make the machine learn this? If yes, how? If no, why not?

Please go through the attempt below, where the classifier stumbles on the [1,1] point and can't predict it correctly, giving 0.75 accuracy.

    A   B   Y
    0   0   1
    0   1   1
    1   0   1
    1   1   0

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

    x_train = [[0,0],[0,1],[1,0],[1,1]]
    y_train = [1,1,1,0]

    x_test = [[1,1],[1,0],[0,1],[0,0]]

    clf_lr = LogisticRegression()
    clf_lr.fit(x_train, y_train)
    prediction = clf_lr.predict(x_test)

    print(prediction)
    # [1 1 1 1]

    print(accuracy_score(y_train, prediction))
    # 0.75

    print(confusion_matrix(y_train, prediction))
    # [[0 1]
    #  [0 3]]

    print(classification_report(y_train, prediction))
    #              precision    recall  f1-score   support
    #
    #           0       0.00      0.00      0.00         1
    #           1       0.75      1.00      0.86         3
    #
    # avg / total       0.56      0.75      0.64         4

The answer to the title question is "yes": many machine learning models are capable of learning various logical gates. @KarelMacek is correct that the XOR gate is famously not linearly separable, so logistic regression will not be able to learn that one.
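As a quick illustration of the XOR point (a minimal sketch, not from the original post; the variable names and the large C value are just for demonstration), even with the L2 penalty made negligible, a single linear decision boundary cannot label all four XOR points correctly:

    from sklearn.linear_model import LogisticRegression

    X = [[0,0],[0,1],[1,0],[1,1]]
    y_xor = [0, 1, 1, 0]  # XOR: the two classes cannot be separated by a straight line

    # A very large C makes the L2 penalty negligible, so regularization is not
    # the limiting factor here -- the linear decision boundary is.
    clf_xor = LogisticRegression(C=1e6)
    clf_xor.fit(X, y_xor)

    print(clf_xor.predict(X))  # never equals [0 1 1 0]; at most 3 of the 4 points can be right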

But your example is the NAND gate, which is linearly separable, so a logistic regression should be able to learn it. The problem is that sklearn applies L2 regularization by default, which prevents the model from learning the pattern "strongly" enough. You can see from clf_lr.predict_proba(x_test)[:, 1] that the model has learned that $(1,1)$ is less likely to be a 1, but the regularization has kept that probability from dropping below $0.5$. You can reduce the regularization strength (increase C), or, better, disable the penalty entirely (penalty=None in recent scikit-learn, penalty='none' in older versions), to recover correct predictions.
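For concreteness, here is a minimal sketch of that fix (the penalty=None spelling is for recent scikit-learn versions, roughly 1.2 and later; older versions spell it penalty='none'). The points are evaluated in the same order as x_train so the predictions line up with y_train:

    from sklearn.linear_model import LogisticRegression

    x_train = [[0,0],[0,1],[1,0],[1,1]]
    y_train = [1,1,1,0]  # NAND

    # Default model: L2 penalty with C=1.0 keeps the weights small, so the
    # predicted probability of class 1 for (1,1) never drops below 0.5.
    default_lr = LogisticRegression()
    default_lr.fit(x_train, y_train)
    print(default_lr.predict_proba(x_train)[:, 1])

    # Unpenalized model: the weights are free to grow until every training
    # point is classified correctly.
    unpenalized_lr = LogisticRegression(penalty=None)  # penalty='none' on older scikit-learn
    unpenalized_lr.fit(x_train, y_train)
    print(unpenalized_lr.predict(x_train))  # expected: [1 1 1 0]

Increasing C (say C=1e6) instead of removing the penalty has essentially the same effect, since C is the inverse of the regularization strength.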
