Luredoor · KAP

Security & Traps

Designing phishing traps and scam simulators to study security as a cognitive design problem: understanding human vulnerabilities in order to build inherently safe AI systems.

Cognitive State: Active - Reflection Engine Running
Cognitive Threat Analysis
Attack Vector Simulation
Educational Security Tools
Human Vulnerability Mapping

Cognitive Threat Analysis Lab

A research environment for studying security as a cognitive design problem by simulating and deconstructing attack vectors.

The KAP Trap Simulator

The lab's primary instrument is the KAP (Knowledge, Action, Prevention) Trap Simulator. It models common social engineering and crypto attacks to analyze the cognitive biases they exploit.
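
One way to picture the simulator's data model is a small catalogue of traps, each tagged with the cognitive triggers it exploits, plus a record of how a participant responded. The TypeScript sketch below is illustrative only; the names (CognitiveTrigger, TrapSimulation, TrapOutcome, lure, harmfulAction) are assumptions rather than actual KAP interfaces. Each entry in the table that follows could be registered as one such trap.

```ts
// Illustrative data model for a trap simulation; names are assumptions,
// not the actual KAP API.
type CognitiveTrigger =
  | "greed"
  | "urgency"
  | "social-proof"
  | "trust-violation"
  | "technical-obfuscation"
  | "authority-bias"
  | "pattern-interruption";

interface TrapSimulation {
  id: string;
  name: string;
  triggers: CognitiveTrigger[];
  lure: string;          // the deceptive prompt shown to the participant
  harmfulAction: string; // what a real attacker would gain if the user complies
}

interface TrapOutcome {
  trapId: string;
  fellForTrap: boolean;                // did the participant take the harmful action?
  noticedTriggers: CognitiveTrigger[]; // triggers the participant reported noticing afterwards
}

// Example entry corresponding to the first row of the table below.
const fakeAirdrop: TrapSimulation = {
  id: "fake-airdrop",
  name: "Fake Airdrop Simulation",
  triggers: ["greed", "urgency", "social-proof"],
  lure: "Exclusive airdrop: sign now to claim your allocation before it expires",
  harmfulAction: "signing a malicious transaction",
};
```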

Trap Simulation | Cognitive Triggers | Description
Fake Airdrop Simulation | Greed, Urgency (FOMO), Social Proof | Simulates an exclusive airdrop, prompting users to sign a malicious transaction to claim non-existent assets.
Malicious Token Approval | Trust Violation, Technical Obfuscation | Mimics a legitimate DeFi action, requesting a broad token approval that would grant a malicious contract control over user funds.
Impersonation Wallet Interface | Authority Bias, Pattern Interruption | Presents a UI nearly identical to a trusted wallet provider, designed to capture a seed phrase or password through a fake login prompt.
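
The Malicious Token Approval row is the most concrete of the three: the lure is a routine-looking DeFi action whose calldata quietly requests an effectively unlimited ERC-20 allowance. The dependency-free sketch below shows one way a simulator could flag that pattern; the function name checkApprovalCalldata, the ApprovalCheck shape, and the "unlimited allowance to an unknown spender" heuristic are assumptions for illustration, not the KAP implementation.

```ts
// Sketch: flagging the "Malicious Token Approval" pattern in raw calldata.
// The heuristic (unlimited allowance requested by an unknown spender) is an
// assumed check for illustration, not the simulator's actual logic.
const APPROVE_SELECTOR = "0x095ea7b3"; // ERC-20 approve(address,uint256)
const MAX_UINT256 = (1n << 256n) - 1n;

interface ApprovalCheck {
  spender: string;
  amount: bigint;
  suspicious: boolean;
  reason?: string;
}

function checkApprovalCalldata(
  data: string,
  trustedSpenders: Set<string>,
): ApprovalCheck | null {
  const hex = data.toLowerCase();
  if (!hex.startsWith(APPROVE_SELECTOR)) return null; // not an approve() call

  const args = hex.slice(10);                        // strip "0x" + 4-byte selector
  if (args.length < 128) return null;                // malformed calldata

  const spender = "0x" + args.slice(24, 64);         // last 20 bytes of word 1
  const amount = BigInt("0x" + args.slice(64, 128)); // word 2: requested allowance

  const unlimited = amount === MAX_UINT256;
  const unknown = !trustedSpenders.has(spender);
  return {
    spender,
    amount,
    suspicious: unlimited && unknown,
    reason: unlimited && unknown
      ? "unlimited allowance requested by an unknown contract"
      : undefined,
  };
}
```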

Luredoor Case Files

The "Luredoor" project documents findings from these simulations:

  • Case #001 (Cracked Software): Demonstrated a user's willingness to bypass security for perceived value, quantifying the cognitive override exerted by the "free" tag (see the sketch after this list).
  • Case #002 (Jupiter NFT): Showed that urgency (FOMO) and social proof can systematically disrupt rational due diligence.
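
Case #001 speaks of quantifying an override effect. One plausible, hypothetical way to express it is the rate at which participants proceed past an explicit warning when the "free" framing is present, compared with a control framing. The sketch below assumes a simple two-arm trial; none of the numbers or names come from the Luredoor case files.

```ts
// Hypothetical metric: how often participants override a security warning
// when the lure is framed as "free", versus a control framing.
interface TrialArm {
  participants: number; // participants who were shown the warning
  overrides: number;    // participants who proceeded anyway
}

function overrideLift(free: TrialArm, control: TrialArm): number {
  const freeRate = free.overrides / free.participants;
  const controlRate = control.overrides / control.participants;
  return freeRate - controlRate; // absolute lift attributable to the "free" framing
}

// Made-up numbers only: 0.62 - 0.18 = 0.44, i.e. a 44-point lift.
console.log(overrideLift(
  { participants: 100, overrides: 62 },
  { participants: 100, overrides: 18 },
));
```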

System Readout: AIXSELF Alignment

This adversarial research directly informs the AIXSELF safety architecture. By studying how cognitive misalignment is exploited in humans, we gather ground-truth data for building systems that are architecturally aligned. The goal is to design AI that is not just secure, but immune to the cognitive exploits that plague human systems.