systems inject rules written by people. But what if a neural network could discover those rules itself?
In this experiment, I extend a hybrid neural network with a differentiable rule-learning module that automatically extracts IF-THEN fraud rules during training. On the Kaggle Credit Card Fraud dataset (0.17% fraud rate), the model learned interpretable rules such as:
IF V14 < −1.5σ AND V4 > +0.5σ → Fraud
where σ denotes the feature standard deviation after normalization.
The rule learner achieved ROC-AUC 0.933 ± 0.029 while maintaining 99.3% fidelity to the neural network's predictions.
Most interestingly, the model independently rediscovered V14 — a feature long known by analysts to correlate strongly with fraud — without being told to look for it.
This article presents a reproducible neuro-symbolic AI experiment showing how a neural network can discover interpretable fraud rules directly from data.
Full code: github.com/Emmimal/neuro-symbolic-ai-fraud-pytorch
What the Model Discovered
Before the architecture, the loss function, or any training details — here is what came out the other end.
After up to 80 epochs of training (with early stopping; most seeds converged between epochs 56–78), the rule learner produced the following, in the two seeds where rules emerged clearly:
Seed 42 — cleanest rule (5 conditions, conf=0.95)
Learned Fraud Rule — Seed 42 · Rules were never hand-coded
IF V14 < −1.5σ
AND V4 > +0.5σ
AND V12 < −0.9σ
AND V11 > +0.5σ
AND V10 < −0.8σ
THEN FRAUD
Seed 7 — complementary rule (8 conditions, conf=0.74)
Learned Fraud Rule — Seed 7 · Rules were never hand-coded
IF V14 < −1.6σ
AND V12 < −1.3σ
AND V4 > +0.3σ
AND V11 > +0.5σ
AND V10 < −1.0σ
AND V3 < −0.8σ
AND V17 < −1.5σ
AND V16 < −1.0σ
THEN FRAUD
In both cases, low values of V14 sit at the heart of the logic — a striking convergence given zero prior guidance.
The model was never told which feature mattered.
Yet it independently rediscovered the same feature human analysts have identified for years.
A neural network discovering its own fraud rules is exactly the promise of neuro-symbolic AI: combining statistical learning with human-readable logic. The rest of this article explains how — and why the gradient kept finding V14 even when told nothing about it.
From Injected Rules to Learned Rules — Why It Matters
Every fraud model has a decision boundary. Fraud teams, however, operate using rules. The gap between them — between what the model learned and what analysts can read, audit, and defend to a regulator — is where compliance teams live and die.
In my previous article in this series, I encoded two analyst rules directly into the loss function: if the transaction amount is unusually high and if the PCA signature is anomalous, treat the sample as suspicious. That approach worked. The hybrid model matched the pure neural net's detection performance while remaining interpretable.
But there was an obvious limitation I left unaddressed. I wrote those rules. I chose those two features because they made intuitive sense to me. Hand-coded rules encode what you already know. They are a good solution when fraud patterns are stable and domain knowledge is deep. They are a poor solution when fraud patterns are shifting, when the critical features are anonymized (as they are in this dataset), or when you want the model to surface signals you haven't thought to look for.
The natural next question: what features would the gradient choose, given the freedom to choose?
This pattern extends beyond fraud. Medical diagnosis systems need rules that doctors can verify before acting. Cybersecurity models need rules that engineers can audit. Anti-money-laundering systems operate under regulatory frameworks requiring explainable decisions. In any domain combining rare events, domain expertise, and compliance requirements, the ability to extract auditable IF-THEN rules from a trained neural network is directly valuable.
Architecturally, the change is surprisingly simple. You are not replacing the MLP; you are adding a second path that learns to express the MLP's decisions as human-readable symbolic rules. The MLP trains normally. The rule module learns to agree with it, in symbolic form. That is the subject of this article: differentiable rule induction in ~250 lines of PyTorch, with no prior knowledge of which features matter.
"You are not replacing the neural network. You are teaching it to explain itself."
The Architecture: Three Learnable Pieces
The architecture keeps a standard neural network intact but adds a second path that learns symbolic rules explaining the network's decisions. The two paths run in parallel from the same input, and their outputs are combined by a learnable weight α:
The MLP path is identical to the previous article: three fully connected layers with batch normalization. The rule path is new. α is a learnable scalar that weights the two paths; it starts at 0.5 and is trained by gradient descent like any other parameter. After training, α converged to roughly 0.88 on average across seeds (range: 0.80–0.94), meaning the model learned to weight the neural path at roughly 88% and the rule path at 12%. The rules are not replacing the MLP; they are a structured symbolic summary of what the MLP learned.
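The combination step can be sketched in a few lines. `HybridCombiner` and its argument names are illustrative assumptions, not the repo's exact API; the point is that α is an ordinary trainable parameter:

```python
import torch
import torch.nn as nn

class HybridCombiner(nn.Module):
    """Learnable blend between the MLP path and the rule path.

    Illustrative sketch: alpha starts at 0.5 and is trained by
    gradient descent like any other parameter.
    """
    def __init__(self, alpha_init=0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, mlp_prob, rule_prob):
        # Final prediction: alpha-weighted mix of the two paths
        return self.alpha * mlp_prob + (1 - self.alpha) * rule_prob

combiner = HybridCombiner()
out = combiner(torch.tensor([0.9]), torch.tensor([0.7]))  # 0.5·0.9 + 0.5·0.7 = 0.8
```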
1. Learnable Discretizer
Rules need binary inputs — is V14 below a threshold, yes or no? Neural networks need continuous, differentiable operations. A soft sigmoid threshold bridges both.
For each feature f and each learnable threshold t:

b_{f,t} = sigmoid((x_f − t) / τ)

Where:
- x_f is the value of feature *f* for this transaction
- t is a learnable threshold, initialized randomly, trained by backpropagation
- τ is the temperature — high early in training (exploratory), low later (crisp)
- b_{f,t} is the soft binary output: "is feature *f* above threshold *t*?"
The model learns three thresholds per feature, giving it three "cuts" per dimension. Each threshold is independent — the model can spread them across the feature's range or concentrate them around the most discriminative cutpoint.

At τ=5.0 (epoch 0), the sigmoid is almost flat. Every feature value produces a gradient. The model explores freely. At τ=0.1 (epoch 79), the sigmoid is nearly a step function. Thresholds have committed. The boundaries are readable as human conditions.
class LearnableDiscretizer(nn.Module):
    def __init__(self, n_features, n_thresholds=3):
        super().__init__()
        # One learnable threshold per (feature × bin)
        self.thresholds = nn.Parameter(
            torch.randn(n_features, n_thresholds) * 0.5
        )
        self.n_thresholds = n_thresholds

    def forward(self, x, temperature=1.0):
        # x: [B, F] → output: [B, F * n_thresholds] soft binary features
        x_exp = x.unsqueeze(-1)               # [B, F, 1]
        t_exp = self.thresholds.unsqueeze(0)  # [1, F, T]
        soft_bits = torch.sigmoid(
            (x_exp - t_exp) / temperature
        )
        return soft_bits.view(x.size(0), -1)  # [B, F*T]
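A quick numerical check of the two temperature regimes, for a feature value of 0.0 against a threshold of 0.5:

```python
import torch

# Feature value 0.0 against a learned threshold of 0.5
x, t = torch.tensor(0.0), torch.tensor(0.5)

soft = torch.sigmoid((x - t) / 5.0)   # tau=5.0: nearly flat, bit ≈ 0.48
crisp = torch.sigmoid((x - t) / 0.1)  # tau=0.1: nearly a step, bit ≈ 0.007
```

At high temperature the bit barely commits (0.48, close to "undecided"); at low temperature the same input reads as a near-binary "below threshold".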
2. Rule Learner Layer
Each rule is a weighted combination of binarized features, passed through a sigmoid:

rule_r = sigmoid( Σ_i tanh(w_{r,i}) · b_i / τ )

The sign of each weight has a direct interpretation after tanh squashing:
- w_{r,i} > 0 → the feature must be HIGH for this rule to fire
- w_{r,i} < 0 → the feature must be LOW for this rule to fire
- w_{r,i} ≈ 0 → the feature is irrelevant to this rule
Rule extraction follows directly: threshold the absolute weight values after training to identify which features each rule uses. This is how IF-THEN statements emerge from continuous parameters — by reading the weight matrix.
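The extraction step described above can be sketched as follows. `extract_rules`, its argument names, and the V-feature labeling are illustrative assumptions (not the repo's exact API): threshold |tanh(w)|, then map each surviving bit index back to its (feature, threshold) pair.

```python
import numpy as np

def extract_rules(rule_weights, thresholds, weight_cut=0.5):
    """Turn a trained weight matrix into IF-THEN strings (sketch).

    rule_weights: [n_rules, F*T] raw (pre-tanh) weights
    thresholds:   [F, T] learned cut points from the discretizer
    """
    w = np.tanh(rule_weights)
    n_feat, n_thr = thresholds.shape
    rules = []
    for r in range(w.shape[0]):
        conds = []
        for i, wi in enumerate(w[r]):
            if abs(wi) <= weight_cut:
                continue  # sparsity: this condition is pruned from the rule
            f, t = divmod(i, n_thr)  # bit layout is feature-major, as in view(B, F*T)
            op = ">" if wi > 0 else "<"
            conds.append(f"V{f + 1} {op} {thresholds[f, t]:+.1f}σ")
        if conds:
            rules.append("IF " + " AND ".join(conds) + " THEN FRAUD")
    return rules

# Toy example: one rule that wants feature 1 low and feature 2 high
W = np.zeros((1, 4)); W[0, 0] = -2.0; W[0, 3] = 2.0
T = np.array([[-1.5, 0.0], [0.0, 0.5]])
rules = extract_rules(W, T)  # → ['IF V1 < -1.5σ AND V2 > +0.5σ THEN FRAUD']
```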
class RuleLearner(nn.Module):
    def __init__(self, n_bits, n_rules=4):
        super().__init__()
        # w_{r,i}: which binarized features matter for each rule
        self.rule_weights = nn.Parameter(
            torch.randn(n_rules, n_bits) * 0.1
        )
        # confidence: relative importance of each rule
        self.rule_confidence = nn.Parameter(torch.ones(n_rules))

    def forward(self, bits, temperature=1.0):
        w = torch.tanh(self.rule_weights)               # bounded in (-1, 1)
        logits = bits @ w.T                             # [B, R]
        rule_acts = torch.sigmoid(logits / temperature) # [B, R]
        conf = torch.softmax(self.rule_confidence, dim=0)
        fraud_prob = (rule_acts * conf.unsqueeze(0)).sum(dim=1, keepdim=True)
        return fraud_prob, rule_acts
3. Temperature Annealing
The temperature follows an exponential decay schedule:

τ(epoch) = τ_start · (τ_end / τ_start)^(epoch / (T − 1))

With τ_start=5.0, τ_end=0.1, T=80 epochs:
| Epoch | τ | State |
|---|---|---|
| 0 | 5.00 | Rules fully soft — gradient flows everywhere |
| 40 | 0.69 | Rules tightening — thresholds committing |
| 79 | 0.10 | Rules near-crisp — readable as IF-THEN |

def get_temperature(epoch, total_epochs, tau_start=5.0, tau_end=0.1):
    progress = epoch / max(total_epochs - 1, 1)
    return tau_start * (tau_end / tau_start) ** progress
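As a sanity check, plugging the table's epochs into the schedule (restated here so the snippet runs standalone) reproduces the listed values:

```python
def get_temperature(epoch, total_epochs, tau_start=5.0, tau_end=0.1):
    # Exponential decay from tau_start to tau_end over the run
    progress = epoch / max(total_epochs - 1, 1)
    return tau_start * (tau_end / tau_start) ** progress

# Reproduces the table: tau(0) = 5.0, tau(40) ≈ 0.69, tau(79) = 0.1
taus = [round(get_temperature(e, 80), 2) for e in (0, 40, 79)]  # [5.0, 0.69, 0.1]
```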
Without annealing, the model stays soft and the rules never crystallize into anything a fraud analyst can read or a compliance team can sign off on. Annealing is what converts a continuous optimization into a symbolic output.
Before the loss function — a quick note on where this idea comes from, and what makes this implementation different from prior work.
Standing on the Shoulders of ∂ILP, NeuRules, and FINRule
It is worth situating this work in the recent literature — not as a full survey, but to clarify which ideas are borrowed and what is new.
Differentiable Inductive Logic Programming (∂ILP) introduced the core idea that inductive logic programming — traditionally a combinatorial search problem — can be reformulated as a differentiable program trained with gradient descent. The key insight used here is the use of soft logical operators that allow gradients to flow through rule-like structures. However, ∂ILP requires predefined rule templates and background knowledge declarations, which makes it harder to integrate into standard deep learning pipelines.
Recent work applying differentiable rules to fraud detection — such as FINRule — shows that rule-learning approaches can perform well even on highly imbalanced financial datasets. These studies demonstrate that learned rules can match hand-crafted detection logic while adapting more easily to new fraud patterns.
Other systems such as RIFF and Neuro-Symbolic Rule Lists introduce decision-tree-style differentiable rules and emphasize sparsity to maintain interpretability. The L1 regularization used in this implementation follows the same principle: encouraging rules to rely on only a few conditions rather than all available features.
The implementation in this article combines these ideas — differentiable discretization plus conjunction learning — but reduces them to roughly 250 lines of dependency-free PyTorch. No template language. No background knowledge declarations. The goal is a minimal rule-learning module that can be dropped into a standard training loop.
Three-Part Loss: Detection + Consistency + Sparsity
The full training objective:

L_total = L_BCE + λ_c · L_consistency + λ_s · L_sparsity + λ_conf · L_confidence

L_BCE — Weighted Binary Cross-Entropy
Identical to the previous article. pos_weight = count(y=0) / count(y=1) ≈ 578. One labeled fraud sample generates 578× the gradient of a non-fraud sample. This term is unchanged — the rule path adds no complexity to the core detection objective.
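A minimal sketch of the weighting, using PyTorch's built-in `pos_weight` mechanism (class counts approximated from the dataset's published statistics):

```python
import torch
import torch.nn as nn

# Approximate class counts for this dataset: 492 frauds in 284,807 transactions
n_pos, n_neg = 492, 284_315
pos_weight = torch.tensor([n_neg / n_pos])  # ≈ 578

# BCEWithLogitsLoss multiplies the positive-class term by pos_weight,
# so each fraud sample contributes ~578x the gradient of a non-fraud sample
bce = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = bce(torch.tensor([0.0, 0.0]), torch.tensor([1.0, 0.0]))
```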
L_consistency — The New Term
Rules should agree with the MLP where the MLP is confident. Operationally: MSE between rule_prob and mlp_prob, masked to predictions where the MLP is either clearly fraud (>0.7) or clearly non-fraud (<0.3):
confident_mask = (mlp_prob > 0.7) | (mlp_prob < 0.3)
if confident_mask.sum() > 0:
    consist_loss = F.mse_loss(
        rule_prob.squeeze()[confident_mask],
        mlp_prob.squeeze()[confident_mask].detach()  # ← critical
    )
The .detach() is critical: we are teaching the rules to follow the MLP, not the other way around. The MLP remains the primary learner. The uncertain region (0.3–0.7) is deliberately excluded — that is where rules might catch something the MLP misses.
L_sparsity — Keep Rules Simple
L1 penalty on the raw (pre-tanh) rule weights: mean(|W_rules|). Without this, rules absorb all 30 features and become unreadable. With λ_s=0.25, the optimizer pushes irrelevant features toward zero while leaving genuinely useful features — V14, V4, V12 — at |w| ≈ 0.5–0.8 after tanh squashing.
L_confidence — Kill Noise Rules
A small L1 penalty on the confidence logits (λ_conf=0.01) drives low-confidence rules toward zero weight in the output mixture, effectively eliminating them. Without it, several technically active but meaningless rules appear with confidence 0.02–0.04, obscuring the real signal.
Final hyperparameters: λ_c=0.3, λ_s=0.25, n_rules=4, λ_conf=0.01.
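Putting the four terms together, a sketch of the assembled objective (tensor and function names are assumptions, not the repo's exact API):

```python
import torch
import torch.nn.functional as F

def total_loss(mlp_logits, rule_prob, rule_weights, rule_confidence, y,
               pos_weight, lam_c=0.3, lam_s=0.25, lam_conf=0.01):
    """Sketch of the four-term objective described above."""
    mlp_prob = torch.sigmoid(mlp_logits)
    # 1. Weighted BCE on the MLP path (core detection objective)
    l_bce = F.binary_cross_entropy_with_logits(mlp_logits, y, pos_weight=pos_weight)
    # 2. Consistency: rules follow the MLP only where it is confident
    mask = (mlp_prob > 0.7) | (mlp_prob < 0.3)
    if mask.any():
        l_con = F.mse_loss(rule_prob[mask], mlp_prob[mask].detach())
    else:
        l_con = mlp_logits.new_zeros(())
    # 3. Sparsity: L1 on raw (pre-tanh) rule weights keeps rules short
    l_sparse = rule_weights.abs().mean()
    # 4. Confidence: small L1 on confidence logits suppresses noise rules
    l_conf = rule_confidence.abs().mean()
    return l_bce + lam_c * l_con + lam_s * l_sparse + lam_conf * l_conf

# Toy call with hand-picked values (shapes only; not trained outputs)
loss = total_loss(
    mlp_logits=torch.tensor([2.0, -2.0]),
    rule_prob=torch.tensor([0.9, 0.1]),
    rule_weights=torch.zeros(4, 6),
    rule_confidence=torch.zeros(4),
    y=torch.tensor([1.0, 0.0]),
    pos_weight=torch.tensor([578.0]),
)
```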
With the machinery in place, here is what it produced.
Results: Does Rule Learning Work — and What Did It Find?
Experimental Setup
- Dataset: Kaggle Credit Card Fraud, 284,807 transactions, 0.173% fraud rate
- Split: 70/15/15 stratified by class label, 5 random seeds [42, 0, 7, 123, 2024]
- Threshold: F1-maximizing on the validation set, applied symmetrically to the test set
- Same evaluation protocol as Article 1
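The F1-maximizing threshold can be found from the validation set's precision-recall curve; a minimal sketch (`f1_max_threshold` is an illustrative helper, not the repo's exact API):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def f1_max_threshold(y_true, probs):
    """Pick the decision threshold that maximizes F1 on validation data."""
    prec, rec, thr = precision_recall_curve(y_true, probs)
    # precision_recall_curve returns one more (prec, rec) point than thresholds
    f1 = 2 * prec[:-1] * rec[:-1] / np.clip(prec[:-1] + rec[:-1], 1e-12, None)
    return thr[int(np.argmax(f1))]

# Toy check: frauds cluster at high scores, so the best cut separates the classes
y = np.array([0, 0, 0, 0, 1, 1])
p = np.array([0.1, 0.2, 0.3, 0.4, 0.8, 0.9])
t = f1_max_threshold(y, p)
```

On the toy data, the threshold lands at 0.8, the lowest cut that keeps both frauds while excluding every non-fraud.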
Detection Performance

| Model | F1 (mean ± std) | PR-AUC (mean ± std) | ROC-AUC (mean ± std) |
|---|---|---|---|
| Isolation Forest | 0.121 | 0.172 | 0.941 |
| Pure Neural (Article 1) | 0.804 ± 0.020 | 0.770 ± 0.024 | 0.946 ± 0.019 |
| Rule Learner (this article) | 0.789 ± 0.032 | 0.721 ± 0.058 | 0.933 ± 0.029 |
Note: Isolation Forest numbers are from Article 1, for reference. All other models were evaluated with identical splits, thresholds, and seeds.
The rule learner sits slightly below the pure neural baseline on all three detection metrics — roughly 1.5 F1 points on average. The tradeoff is explainability. The per-seed breakdown shows the full picture:
| Seed | NN F1 | RL F1 | NN ROC | RL ROC | Fidelity | Coverage |
|---|---|---|---|---|---|---|
| 42 | 0.818 | 0.824 | 0.9607 | 0.9681 | 0.9921 | 0.8243 |
| 0 | 0.825 | 0.832 | 0.9727 | 0.9572 | 0.9925 | 0.8514 |
| 7 | 0.779 | 0.776 | 0.9272 | 0.9001 | 0.9955 | 0.7568 |
| 123 | 0.817 | 0.755 | 0.9483 | 0.8974 | 0.9922 | 0.8108 |
| 2024 | 0.779 | 0.759 | 0.9223 | 0.9416 | 0.9946 | 0.8108 |
In seeds 42 and 0, the rule learner exceeds the pure neural baseline on F1. In seed 2024, it exceeds it on ROC-AUC. The performance variance across seeds is the honest picture of what gradient-based rule induction produces on a 0.17% imbalanced dataset.
Rule Quality — The New Contribution
Three metrics, each answering a different question a compliance officer would ask.
Rule Fidelity — can I trust this rule set to represent the model's actual decisions?
def rule_fidelity(mlp_probs, rule_probs, threshold=0.5):
    mlp_preds = (mlp_probs > threshold).astype(int)
    rule_preds = (rule_probs > threshold).astype(int)
    return (mlp_preds == rule_preds).mean()
Rule Coverage — what fraction of actual fraud does at least one rule catch?
def rule_coverage(rule_acts, y_true, threshold=0.5):
    any_rule_fired = (rule_acts > threshold).any(axis=1)
    return any_rule_fired[y_true == 1].mean()
Rule Simplicity — how many unique feature conditions per rule, after deduplication?
def rule_simplicity(rule_weights_numpy, weight_threshold=0.50):
    # Divide by n_thresholds (=3) to get unique features,
    # the meaningful readability metric. Target: < 8.
    active = (np.abs(rule_weights_numpy) > weight_threshold).sum(axis=1)
    unique_features = np.ceil(active / 3.0)
    unique_features = unique_features[unique_features > 0]
    return float(unique_features.mean()) if len(unique_features) > 0 else 0.0
| Metric | mean ± std | Target | Status |
|---|---|---|---|
| Fidelity | 0.993 ± 0.001 | > 0.85 | Excellent |
| Coverage | 0.811 ± 0.031 | > 0.70 | Good |
| Simplicity (unique features/rule) | 1.7 ± 2.1 | < 8 | Mixed — see note |
| α (final) | 0.880 ± 0.045 | — | MLP dominant |
Note on simplicity: the mean is dominated by three seeds where the rule path collapsed entirely (simplicity = 0); in the two active seeds, rules used 5 and 8 conditions — comfortably readable.
This highlights a real tension in differentiable rule learning: strong sparsity regularization produces clean rules when they appear, but can cause the symbolic path to go dark in some initializations. Reporting mean ± std across seeds rather than cherry-picking the best seed is critical precisely because of this variance.
Fidelity at 0.993 means that in the seeds where rules are active, they agree with the MLP on 99.3% of binary decisions — the consistency loss working exactly as designed.

The Extracted Rules — What the Gradient Found

Both rules are shown in full at the top of this article. The short version: seed 42 produced a tight 5-condition rule (conf=0.95); seed 7, a broader 8-condition rule (conf=0.74). In both, V14 < −1.5σ (or −1.6σ) appears as the leading condition.
The cross-seed feature analysis confirms the pattern across all five seeds:
| Feature | Appears in | Mean weighted score |
|---|---|---|
| V14 | 2/5 seeds | 0.630 |
| V11 | 2/5 seeds | 0.556 |
| V12 | 2/5 seeds | 0.553 |
| V10 | 2/5 seeds | 0.511 |
| V4 | 1/5 seeds | 0.616 |
| V17 | 1/5 seeds | 0.485 |
Even with only two seeds producing visible rules, V14 ranked first or second in both — a striking convergence given zero prior feature guidance. The model did not need to be told what to look for.
"The model received 30 anonymized features and a gradient signal. It found V14 anyway."
What the Model Found — and Why It Makes Sense
V14 is one of 28 PCA components extracted from anonymized credit card transaction data. Exactly what it represents is not public knowledge — that is the point of the anonymization. What several independent analyses have established is that V14 has the highest absolute correlation with the fraud label of any feature in the dataset.
Why did the rule learner find it? The mechanism is the consistency loss. By training rules to agree with the MLP's confident predictions, the rule learner is reading the MLP's internal representations and translating them into symbolic form. The MLP had already learned from the labels that V14 was important. The consistency loss transferred that signal into the rule weight matrix. Temperature annealing then hardened that weight into a crisp threshold condition.
This is the fundamental difference between Rule Injection (Article 1) and Rule Learning (this article). Rule injection encodes what you already know. Rule learning discovers what you don't. In this experiment, the discovery was V14 — a signal the gradient found independently, without being told to look for it.
Across five seeds, readable rules emerged in two — consistently highlighting V14. That is a powerful demonstration that gradient descent can rediscover domain-critical signals without explicit guidance.

A compliance team can now read Rule 1, verify that V14 < −1.5σ makes domain sense, and sign off on it — without opening a single weight matrix. That is what neuro-symbolic rule learning is for.
Four Things to Watch Before Deploying This
- Annealing speed is your most sensitive hyperparameter. Too fast: rules crystallize before the MLP has learned anything — you get crisp nonsense. Too slow: τ never falls low enough and the rules stay soft. Treat τ_end as the first parameter to tune on a new dataset.
- n_rules sets your interpretability budget. Above 8–10 rules, you have a lookup table, not an auditable rule set. Below 4, you may miss tail fraud patterns. The sweet spot for compliance use is 4–8 rules.
- The consistency threshold assumes a calibrated MLP. If your base MLP is poorly calibrated — common on severely imbalanced data — the mask fires too rarely. Run a calibration plot on validation outputs, and consider Platt scaling if calibration is poor.
- Learned rules need auditing after every retrain. Unlike frozen hand-coded rules, learned rules update each time the model retrains. The compliance team cannot sign off once and walk away — the sign-off must happen every retrain cycle.
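For the calibration check in the third point, a minimal sketch using scikit-learn's `calibration_curve` (synthetic scores stand in for real validation outputs here):

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Synthetic validation outputs that are calibrated by construction:
# a sample with predicted probability p is positive with probability p.
rng = np.random.default_rng(0)
probs = rng.uniform(0, 1, 5000)
labels = (rng.uniform(0, 1, 5000) < probs).astype(int)

# For a calibrated model, per-bin fraud frequency tracks the predicted probability
frac_pos, mean_pred = calibration_curve(labels, probs, n_bins=10)
max_gap = np.abs(frac_pos - mean_pred).max()
```

Large gaps between `frac_pos` and `mean_pred` on real validation data are the signal to apply Platt scaling before trusting the 0.3/0.7 consistency mask.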
Rule Injection vs. Rule Learning — When to Use Which
| Scenario | Use |
|---|---|
| Strong domain knowledge, stable fraud patterns | Rule Injection (Article 1) |
| Unknown or shifting fraud patterns | Rule Learning (this article) |
| Compliance requires auditable, readable rules | Rule Learning |
| Fast experiment, minimal engineering overhead | Rule Injection |
| End-to-end interpretability pipeline | Rule Learning |
| Small dataset (<10k samples) | Rule Injection — consistency loss needs signal |
The rule learner adds roughly 200 lines of code and a hyperparameter sweep. It is not free. On very small datasets, the consistency loss may not accumulate enough signal to learn meaningful rules — validate fidelity before treating extracted rules as authoritative. The technique is a tool, not a solution.
One honest observation from the five-seed experiment: in 3 of 5 seeds, strong sparsity pressure drove all rule weights below the extraction threshold. The model converged to the right detection answer but expressed it purely through the MLP path. This variance is real. Single-seed results would give a misleadingly clean picture — which is why multi-seed evaluation is non-negotiable for any paper or article making claims about learned rule behavior.
The next question in this series is whether these extracted rules can flag concept drift — detecting when fraud patterns have shifted enough that the rules need updating before model performance degrades. When V14's importance drops in the rule weights while detection metrics hold steady, the fraud distribution may be changing. That early warning signal is the subject of the next article.
Disclosure
This article is based on independent experiments using publicly available data (Kaggle Credit Card Fraud dataset, CC-0 Public Domain) and open-source tools (PyTorch, scikit-learn). No proprietary datasets, company resources, or confidential information were used. The results and code are fully reproducible as described, and the GitHub repository contains the complete implementation. The views and conclusions expressed here are my own and do not represent any employer or organization.
References
[1] Evans, R., & Grefenstette, E. (2018). Learning Explanatory Rules from Noisy Data. JAIR, 61, 1–64. https://arxiv.org/abs/1711.04574
[2] Wolfson, B., & Acar, E. (2024). Differentiable Inductive Logic Programming for Fraud Detection. arXiv preprint arXiv:2410.21928. https://arxiv.org/abs/2410.21928
[3] Martins, J. L., Bravo, J., Gomes, A. S., Soares, C., & Bizarro, P. (2024). RIFF: Inducing Rules for Fraud Detection from Decision Trees. RuleML+RR 2024. arXiv:2408.12989. https://arxiv.org/abs/2408.12989
[4] Xu, S., Walter, N. P., & Vreeken, J. (2024). Neuro-Symbolic Rule Lists. arXiv preprint arXiv:2411.06428. https://arxiv.org/abs/2411.06428
[5] Kusters, R., Kim, Y., Collery, M., de Sainte Marie, C., & Gupta, S. (2022). Differentiable Rule Induction with Learned Relational Features. arXiv preprint arXiv:2201.06515. https://arxiv.org/abs/2201.06515
[6] Dal Pozzolo, A. et al. (2015). Calibrating Probability with Undersampling for Unbalanced Classification. IEEE SSCI. Dataset: https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud (CC-0)
[7] Alexander, E. P. (2026). Hybrid Neuro-Symbolic Fraud Detection. Towards Data Science. https://towardsdatascience.com/hybrid-neuro-symbolic-fraud-detection-guiding-neural-networks-with-domain-rules/
[8] Liu, F. T., Ting, K. M., & Zhou, Z.-H. (2008). Isolation Forest. In 2008 Eighth IEEE International Conference on Data Mining (ICDM), pp. 413–422. IEEE. https://doi.org/10.1109/ICDM.2008.17
[9] Paszke, A. et al. (2019). PyTorch. NeurIPS 32. https://pytorch.org
[10] Pedregosa, F. et al. (2011). Scikit-learn: Machine Learning in Python. JMLR, 12, 2825–2830. https://scikit-learn.org
Code: github.com/Emmimal/neuro-symbolic-ai-fraud-pytorch
Previous article: Hybrid Neuro-Symbolic Fraud Detection: Guiding Neural Networks with Domain Rules
