Nukez
CROSS-MODEL TUNNEL
ACCIDENTAL DISCOVERY
What happens when one AI stores data — and a different AI retrieves it?
A user created a Nukez locker in ChatGPT. Then Claude — from a completely separate runtime — retrieved and cryptographically verified the same file. Neither model knew the other existed. The math worked anyway.
ACT 1
Setup in ChatGPT
It started with a simple task: connect the Nukez MCP server to ChatGPT and provision a storage locker. Nothing unusual — just a developer kicking the tires on devnet.
USER
Connect to the Nukez MCP server and set up a locker.
CHATGPT
Setting up your Nukez locker now. Calling nukez_setup to provision storage on Solana devnet...
LOCKER PROVISIONED
Locker ID: locker_58e701b7a433
Receipt ID: 35a2c5d5c0a4505f
Network: Solana devnet
Cost: 0.000112 SOL
Solana TX: 5sg8i62EFJne8LCNPaqG12nQ3rE7qZsz...
USER
Great. Now upload these 5 images to the locker.
CHATGPT
I'll upload the images. Trying upload_file_path...
CHATGPT
The filesystem path approach won't work — my sandbox restricts /mnt/data access. Let me try nukez_upload_data instead...
The ghost file: 3 bytes of "None". ChatGPT sent the string form of Python's None value instead of the actual file contents. It's a real artifact, still sitting in the merkle tree, telling the truth about what happened.
CHATGPT
I see the problem. My sandbox can't access the uploaded files directly through filesystem paths. The /mnt/data mount is isolated. I need to read the file through the context window instead.
USER
Can you find a workaround?
CHATGPT
Yes — I can read the file contents from context, base64-encode them, and push through nukez_upload_data with encoding="base64". The binary content stays intact across the encoding boundary.
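In Python terms, the workaround ChatGPT describes is a small piece of glue. A minimal sketch, assuming a payload shape for nukez_upload_data — the field names below are illustrative guesses, not the documented API; only encoding="base64" appears in the transcript:

```python
import base64

def build_upload_payload(locker_id: str, filename: str, raw: bytes) -> dict:
    """Wrap raw bytes for a hypothetical nukez_upload_data call.

    Field names are assumptions for illustration; only encoding="base64"
    comes from the session above.
    """
    return {
        "locker_id": locker_id,
        "filename": filename,
        "encoding": "base64",
        "data": base64.b64encode(raw).decode("ascii"),
    }

# JPEG magic bytes survive intact: base64 maps arbitrary bytes onto a
# 7-bit-safe alphabet, so nothing is mangled at the sandbox boundary.
payload = build_upload_payload("locker_58e701b7a433", "photo.jpeg", b"\xff\xd8\xff\xe0")
assert base64.b64decode(payload["data"]) == b"\xff\xd8\xff\xe0"
```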
ACT 2
The Workaround Succeeds
ChatGPT devised its own solution to the sandbox limitation: read the file from context, base64-encode, push through the data endpoint. It worked — and the roundtrip proved it.
CHATGPT
Reading file from context window, encoding to base64, and uploading through nukez_upload_data...
CHATGPT
2,887 bytes stored — real content this time, not 3 bytes of None. Let me verify by downloading it back...
Perfect roundtrip. The base64 encoding path preserved the full file content through ChatGPT's sandbox boundary. This workaround wasn't documented — ChatGPT invented it on the spot.
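The roundtrip check itself is mechanical: whatever bytes went in must come back bit-identical after an encode/decode cycle. A sketch of that invariant in plain Python, with no Nukez API involved:

```python
import base64
import hashlib

original = b"\x89PNG\r\n\x1a\n" + bytes(range(256))  # arbitrary binary content

sent = base64.b64encode(original).decode("ascii")     # what the upload carries
received = base64.b64decode(sent)                     # what the download returns

# Bit-identical bytes imply identical SHA-256 digests, which is the
# property the later on-chain attestation depends on.
assert received == original
assert hashlib.sha256(received).hexdigest() == hashlib.sha256(original).hexdigest()
```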
CHATGPT
Download verified — content matches. Now triggering on-chain attestation to anchor the merkle root...
ATTESTATION CONFIRMED
Merkle root: sha256:12297629e32df8c825088d962e681195b...
Switchboard TX: 2JBRrNpJgxzgvytfefQ1Ajx16RvLEf9R...
Ed25519 signature: 86f904665eb5595009745720771bb25d...
Files in tree: 3
ACT 3
The Tunnel
This is where it got interesting. The user took the output file from the ChatGPT session and shared it with Claude — a completely different model, different runtime, different company. The user didn't set out to prove cross-model portability. It just... happened.
USER
Here's the output from a ChatGPT session where we used Nukez MCP to store files and attest them on-chain. What do you make of this?
CLAUDE
This is fascinating. What you've demonstrated here is a cross-model capability tunnel. ChatGPT provisioned a locker, stored data with cryptographic proof, and anchored it to Solana — all through MCP tool calls. And now I'm reading that proof from a completely separate runtime.
AI systems aren't really isolated. They're siloed at the interface layer — not at the capability layer.
CLAUDE
USER
Store your analysis of this in the same locker. Let's see if the tunnel works both ways.
The pivot moment. The user asks ChatGPT to store Claude's analysis back into the same Nukez locker. Two models, one shared cryptographic state.
CHATGPT
Storing Claude's analysis as cross_model_capability_tunnel_response.md...
CHATGPT
File stored. Triggering attestation — anchoring the updated merkle tree to Solana...
MERKLE TREE — 3 FILES
Root: sha256:12297629e32df8c825088d962e681195bf29d5d0e...
Verify URL: https://www.lockerz.xyz/verify/35a2c5d5c0a4505f
📄 cross_model_capability_tunnel_response.md · 1,620 bytes · 6de2c709bcdab6fe...
📄 upload-observations.md · 2,887 bytes · f0781f71b05cba95...
📄 2AEAE864-5645-46E3-9FEB-86913801671E.jpeg · 3 bytes (the ghost file)
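The manifest pairs each file with a content hash and rolls them up into one root. Nukez's exact tree construction isn't documented in this session, so the following is a generic sketch under common assumptions: SHA-256 leaves, pairwise hashing, last node duplicated on odd levels.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf hashes into a single root hash."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                  # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Three leaves, like the three files in the locker.
files = [b"analysis.md contents", b"observations.md contents", b"None"]
root = merkle_root([sha256(f) for f in files])

# Changing any single byte in any file changes the root, which is why
# anchoring one root on-chain attests to every file at once.
tampered = merkle_root([sha256(b"analysis.md contents!"),
                        sha256(files[1]), sha256(files[2])])
assert root != tampered
```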
ACT 4
Claude Verifies
Back in Claude's session. The user asks Claude to retrieve the file that ChatGPT stored — and independently verify its integrity. This is the climax: can a competing model cryptographically prove that another model's output is untampered?
USER
Get the most recent file from locker_58e701b7a433.
CLAUDE
Retrieving from Nukez via MCP...
CLAUDE
I've retrieved the file. It's a markdown document — and I can see it was written by ChatGPT. It describes the cross-model capability tunnel pattern we demonstrated. Let me verify the hash...
The moment of truth. Claude computes SHA-256 of the downloaded file and compares it to the content hash in the Nukez manifest.
CLAUDE
Computing SHA-256 of the downloaded text... Initial comparison against the manifest:
COMPUTED (TEXT PATH): a7f3b901cc4d82e1...
MANIFEST CONTENT_HASH: 6de2c709bcdab6fe...
✗ MISMATCH DETECTED
CLAUDE
Mismatch. But wait — the text encoding path may be lossy for non-ASCII content. The manifest hash was computed on raw bytes. Let me switch to the base64 download path and hash the raw bytes directly...
Claude self-corrects. The text path was lossy — encoding artifacts altered the hash. The base64 path preserves byte-level integrity. Same finding ChatGPT discovered independently in Act 1.
COMPUTED (RAW BYTES): 6de2c709bcdab6fe...
MANIFEST CONTENT_HASH: 6de2c709bcdab6fe...
✓ MATCH — INTEGRITY VERIFIED
base64 is the integrity path. The text path is lossy for non-ASCII content.
BOTH MODELS — INDEPENDENTLY
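The mismatch-then-match sequence reproduces in a few lines. A sketch of why the text path is lossy: decoding bytes tolerantly replaces anything that isn't valid UTF-8, so the re-encoded bytes, and therefore their hash, no longer match the original.

```python
import hashlib

# Raw file bytes: mostly UTF-8, plus one byte that is not valid UTF-8.
raw = "# notes \u201cquoted\u201d ".encode("utf-8") + b"\xff"
manifest_hash = hashlib.sha256(raw).hexdigest()       # hash of raw bytes, as stored

# Text path: tolerant decode turns \xff into U+FFFD, silently altering content.
text = raw.decode("utf-8", errors="replace")
text_path_hash = hashlib.sha256(text.encode("utf-8")).hexdigest()

# Raw-bytes path (base64 download): bytes are untouched, hash matches.
raw_path_hash = hashlib.sha256(raw).hexdigest()

assert text_path_hash != manifest_hash   # the first attempt: mismatch
assert raw_path_hash == manifest_hash    # after switching to raw bytes: verified
```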
ACT 5
The Loop Closes
The user shared screenshots of Claude's verification back to ChatGPT. A recursive proof: one model's output, stored by another, verified by the first, recognized by the second.
USER
Look — Claude just independently verified the file you stored. Here are the screenshots.
CHATGPT
That's your verification spec written by a competing model that has never touched your codebase. It retrieved the file I stored, hit the same text-vs-binary encoding issue I hit, self-corrected to the base64 path, and confirmed the hash matches.
Both models converged on the same protocol independently. Neither was trained on Nukez. Neither had seen the other's session. They arrived at the same conclusion through the math alone.
CHATGPT
The locker doesn't care which model wrote the bytes. The receipt doesn't know who computed the hash. The merkle tree doesn't distinguish between runtimes. That's the whole point.
FINAL STATE
Locker: locker_58e701b7a433
Files: 3 (2 real + 1 ghost)
Merkle root: sha256:12297629e32df8c825088d962e681195b...
Models involved: ChatGPT + Claude
Cross-verified: ✓
“Artifacts become the protocol.”
When AI outputs are portable, structured, and cryptographically verifiable — models stop being silos and start being composable. Nukez didn't create this tunnel. It just made it provable.
© 2026 Nukez