53 changes: 52 additions & 1 deletion main.py
@@ -156,6 +156,50 @@ async def handler(data):
            else:
                logger.warning("📥 Received Block #%s — rejected", block.index)

        elif msg_type == "status":
            import json as _json
            peer_height = payload["height"]
            my_height = chain.height

            if peer_height > my_height:
                writer = data.get("_writer")
                if writer:
                    request = _json.dumps({
                        "type": "get_blocks",
                        "data": {
                            "from_height": my_height + 1,
                            "to_height": peer_height
                        }
                    }) + "\n"
                    writer.write(request.encode())
                    await writer.drain()
                    logger.info("📡 Requesting blocks %d~%d from %s",
                                my_height + 1, peer_height, peer_addr)

        elif msg_type == "get_blocks":
            import json as _json
            from_h = payload["from_height"]
            to_h = payload["to_height"]
            blocks = chain.get_blocks_range(from_h, to_h)

            writer = data.get("_writer")
            if writer and blocks:
                response = _json.dumps({
                    "type": "blocks",
                    "data": {"blocks": blocks}
                }) + "\n"
                writer.write(response.encode())
                await writer.drain()
                logger.info("📤 Sent %d blocks to %s", len(blocks), peer_addr)

        elif msg_type == "blocks":
            received = payload["blocks"]
            success, count = chain.add_blocks_bulk(received)

            if success:
                logger.info("✅ Chain synced: added %d blocks", count)
            else:
                logger.warning("❌ Chain sync failed after %d blocks", count)
Comment on lines +159 to +201
⚠️ Potential issue | 🟠 Major

History import assumes a state snapshot invariant that sync does not guarantee.

add_blocks_bulk() never replays transactions, so this path is only safe when the local state already matches the peer tip. But the sync branch above can reject the snapshot on Lines 120-122 and, when accepted, only merges missing accounts on Lines 130-136. The default --fund path already creates local state before connect, so a node can append historical blocks and still keep non-canonical balances/nonces. Either make initial sync replace local state atomically, or make bulk history import rebuild state instead of relying on sync.
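A minimal sketch of option (B) from this finding: rebuild balances and nonces by replaying every transaction in the imported history, instead of trusting a prior sync snapshot. The transaction shape (`"from"`/`"to"`/`"amount"`) and the flat state dict are illustrative assumptions, not minichain's real schema.

```python
def replay_state(block_dicts):
    """Rebuild account state purely from transaction history."""
    state = {}  # address -> {"balance": int, "nonce": int}
    for block in block_dicts:
        for tx in block.get("transactions", []):
            sender = state.setdefault(tx["from"], {"balance": 0, "nonce": 0})
            receiver = state.setdefault(tx["to"], {"balance": 0, "nonce": 0})
            sender["balance"] -= tx["amount"]
            sender["nonce"] += 1
            receiver["balance"] += tx["amount"]
    return state

# Two hypothetical historical blocks; state is derived, never merged.
history = [
    {"transactions": [{"from": "alice", "to": "bob", "amount": 5}]},
    {"transactions": [{"from": "bob", "to": "carol", "amount": 2}]},
]
state = replay_state(history)
```

Because the state is a pure function of the history, a node that imports blocks this way cannot keep stale non-canonical balances, regardless of what the earlier sync message contained.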


    return handler


@@ -324,7 +368,14 @@ async def on_peer_connected(writer):
        }) + "\n"
        writer.write(sync_msg.encode())
        await writer.drain()
        logger.info("🔄 Sent state sync to new peer")

        status_msg = _json.dumps({
            "type": "status",
            "data": {"height": chain.height}
        }) + "\n"
        writer.write(status_msg.encode())
        await writer.drain()
        logger.info("🔄 Sent state sync and status to new peer")
Comment on lines +372 to +378
⚠️ Potential issue | 🟠 Major

Capture sync and status from the same tip.

This status payload is built after the earlier await writer.drain(). If the local chain advances during that gap, the peer can receive accounts from height H and a status for H+1, then request one more block than the state snapshot represents. Snapshot both values before the first await.
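A runnable sketch of the suggested fix, with fakes standing in for the real chain and `StreamWriter`: read the height once before the first `await`, so the sync snapshot and the status message describe the same tip even if the chain advances mid-send. The fake `drain()` deliberately advances the chain to demonstrate the race.

```python
import asyncio
import json as _json

class FakeChain:
    """Stand-in for the real chain object; names are assumptions."""
    def __init__(self):
        self.height = 3
    def snapshot_accounts(self):
        return {"alice": 100}

class FakeWriter:
    """Stand-in for asyncio.StreamWriter that records writes."""
    def __init__(self):
        self.sent = []
    def write(self, data):
        self.sent.append(data)
    async def drain(self):
        chain.height += 1  # simulate the chain advancing during the await gap

chain = FakeChain()

async def on_peer_connected(writer):
    snapshot_height = chain.height        # single read, before any await
    accounts = chain.snapshot_accounts()  # captured from the same tip

    sync_msg = _json.dumps({"type": "sync", "data": {"accounts": accounts}}) + "\n"
    status_msg = _json.dumps({"type": "status", "data": {"height": snapshot_height}}) + "\n"

    writer.write(sync_msg.encode())
    await writer.drain()              # chain may advance here...
    writer.write(status_msg.encode())  # ...but status still reports the snapshot
    await writer.drain()

writer = FakeWriter()
asyncio.run(on_peer_connected(writer))
messages = [_json.loads(m.decode()) for m in writer.sent]
```

The key design point is that both payloads are built from the captured `snapshot_height` before any scheduling point, so the peer always receives a consistent (accounts, height) pair.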


network.register_on_peer_connected(on_peer_connected)

29 changes: 29 additions & 0 deletions minichain/chain.py
@@ -54,6 +54,12 @@ def last_block(self):
        with self._lock:  # Acquire lock for thread-safe access
            return self.chain[-1]

    @property
    def height(self) -> int:
        """Returns the current chain height (genesis = 0)"""
        with self._lock:
            return len(self.chain) - 1

    def add_block(self, block):
        """
        Validates and adds a block to the chain if all transactions succeed.
@@ -82,3 +88,26 @@ def add_block(self, block):
        self.state = temp_state
        self.chain.append(block)
        return True

    def get_blocks_range(self, from_height: int, to_height: int) -> list:
        """Return serialized blocks from from_height to to_height inclusive."""
        with self._lock:
            to_height = min(to_height, len(self.chain) - 1)
            if from_height > to_height or from_height < 0:
                return []
            return [b.to_dict() for b in self.chain[from_height:to_height + 1]]

    def add_blocks_bulk(self, block_dicts: list) -> tuple:
        """Add blocks validating chain linkage only. State relies on sync message."""
        added = 0
        for block_dict in block_dicts:
            block = Block.from_dict(block_dict)
            with self._lock:
                try:
                    validate_block_link_and_hash(self.last_block, block)
                except ValueError as exc:
                    logger.warning("Block %s rejected: %s", block.index, exc)
                    return False, added
                self.chain.append(block)
                added += 1
        return True, added
Comment on lines +100 to +113
⚠️ Potential issue | 🟠 Major

Make add_blocks_bulk() all-or-nothing.

A failure on a later element returns (False, added) after the earlier blocks were already appended. That leaves the node at an intermediate height with no rollback path, and any retry now starts from the wrong tip. Validate the full batch against a temporary tip first, then extend self.chain once.

🔧 Proposed fix
     def add_blocks_bulk(self, block_dicts: list) -> tuple:
         """Add blocks validating chain linkage only. State relies on sync message."""
-        added = 0
-        for block_dict in block_dicts:
-            block = Block.from_dict(block_dict)
-            with self._lock:
-                try:
-                    validate_block_link_and_hash(self.last_block, block)
-                except ValueError as exc:
-                    logger.warning("Block %s rejected: %s", block.index, exc)
-                    return False, added
-                self.chain.append(block)
-                added += 1
-        return True, added
+        with self._lock:
+            tip = self.chain[-1]
+            new_blocks = []
+
+            for block_dict in block_dicts:
+                block = Block.from_dict(block_dict)
+                try:
+                    validate_block_link_and_hash(tip, block)
+                except ValueError as exc:
+                    logger.warning("Block %s rejected: %s", block.index, exc)
+                    return False, 0
+                new_blocks.append(block)
+                tip = block
+
+            self.chain.extend(new_blocks)
+            return True, len(new_blocks)

37 changes: 36 additions & 1 deletion minichain/p2p.py
@@ -15,7 +15,7 @@
logger = logging.getLogger(__name__)

TOPIC = "minichain-global"
SUPPORTED_MESSAGE_TYPES = {"sync", "tx", "block"}
SUPPORTED_MESSAGE_TYPES = {"sync", "tx", "block", "status", "get_blocks", "blocks"}


class P2PNetwork:
@@ -207,6 +207,37 @@ def _validate_block_payload(self, payload):
            for tx_payload in payload["transactions"]
        )

    def _validate_status_payload(self, payload):
        if not isinstance(payload, dict):
            return False
        if set(payload) != {"height"}:
            return False
        if not isinstance(payload["height"], int) or payload["height"] < 0:
            return False
        return True

    def _validate_get_blocks_payload(self, payload):
        if not isinstance(payload, dict):
            return False
        if set(payload) != {"from_height", "to_height"}:
            return False
        fh, th = payload.get("from_height"), payload.get("to_height")
        if not isinstance(fh, int) or not isinstance(th, int):
            return False
        if fh < 0 or fh > th:
            return False
        return True

    def _validate_blocks_payload(self, payload):
        if not isinstance(payload, dict):
            return False
        if set(payload) != {"blocks"}:
            return False
        blocks = payload.get("blocks")
        if not isinstance(blocks, list):
            return False
        return all(isinstance(b, dict) for b in blocks)
Comment on lines +231 to +239

⚠️ Potential issue | 🟠 Major

Validate block entries inside blocks batches.

This currently accepts any list[dict]. A payload like {"blocks": [{"foo": 1}]} passes here and then raises later when Block.from_dict() indexes required keys. Reject malformed block dicts at the P2P boundary instead.

🔧 Proposed fix
     def _validate_blocks_payload(self, payload):
         if not isinstance(payload, dict):
             return False
         if set(payload) != {"blocks"}:
             return False
         blocks = payload.get("blocks")
         if not isinstance(blocks, list):
             return False
-        return all(isinstance(b, dict) for b in blocks)
+        return all(self._validate_block_payload(block) for block in blocks)
🧰 Tools
🪛 Ruff (0.15.7)

[warning] 231-231: Missing return type annotation for private function _validate_blocks_payload

(ANN202)

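A standalone sketch of the stricter boundary check this comment asks for: reject block dicts that lack the fields `Block.from_dict()` would index. The field names below are assumptions for illustration, not the project's actual block schema.

```python
# Hypothetical required-field set; substitute the keys Block.from_dict actually reads.
REQUIRED_BLOCK_FIELDS = {"index", "previous_hash", "hash", "timestamp", "transactions"}

def validate_block_dict(block):
    """Shallow structural check on a single block entry."""
    if not isinstance(block, dict):
        return False
    if not REQUIRED_BLOCK_FIELDS.issubset(block):
        return False
    return isinstance(block["index"], int) and isinstance(block["transactions"], list)

def validate_blocks_payload(payload):
    """Reject malformed batches at the P2P boundary, before Block construction."""
    return (
        isinstance(payload, dict)
        and set(payload) == {"blocks"}
        and isinstance(payload["blocks"], list)
        and all(validate_block_dict(b) for b in payload["blocks"])
    )
```

Delegating to the existing `_validate_block_payload`, as the proposed fix does, is the tighter option when that validator already encodes the real schema; this sketch only shows the shape of the check.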


    def _validate_message(self, message):
        if not isinstance(message, dict):
            return False
@@ -226,6 +257,9 @@ def _validate_message(self, message):
            "sync": self._validate_sync_payload,
            "tx": self._validate_transaction_payload,
            "block": self._validate_block_payload,
            "status": self._validate_status_payload,
            "get_blocks": self._validate_get_blocks_payload,
            "blocks": self._validate_blocks_payload,
        }
        return validators[msg_type](payload)

@@ -283,6 +317,7 @@ async def _listen_to_peer(
                continue
            self._mark_seen(msg_type, payload)
            data["_peer_addr"] = addr
            data["_writer"] = writer

            if self._handler_callback:
                try: