Deploy symbiosis to sonic and abstract #963

Open
wants to merge 31 commits into
base: main

Conversation

0xDEnYO
Contributor

@0xDEnYO 0xDEnYO commented Jan 30, 2025

Which Jira task belongs to this PR?

https://lifi.atlassian.net/browse/LF-11993
https://lifi.atlassian.net/browse/LF-12025

Why did I implement it this way?

Checklist before requesting a review

Checklist for reviewer (DO NOT DEPLOY any contracts BEFORE CHECKING THIS!!!)

  • I have checked that any arbitrary calls to external contracts are validated and/or restricted
  • I have checked that any privileged calls (i.e. storage modifications) are validated and/or restricted
  • I have ensured that any new contracts have had AT A MINIMUM 1 preliminary audit conducted by <company/auditor>

ezynda3 and others added 28 commits January 13, 2025 17:21
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Contributor

coderabbitai bot commented Jan 30, 2025

Walkthrough

This pull request introduces comprehensive configuration updates for a new blockchain network called "Abstract". The changes span multiple configuration files, including .env.example, various JSON configuration files, deployment scripts, and network settings. The updates add new environment variables, network-specific addresses, deployment configurations, and expand the existing infrastructure to support the Abstract network's integration.

Changes

File | Change Summary
.env.example | Added two new environment variables: ETH_NODE_URI_ABSTRACT and ABSTRACT_ETHERSCAN_API_KEY
config/networks.json | Added a new "abstract" section with comprehensive network configuration details
config/amarok.json, config/dexs.json, config/gaszip.json, config/networks.json, config/relay.json, config/stargate.json, config/symbiosis.json, config/tokenwrapper.json | Added new "abstract" sections with network-specific addresses and configurations
deployments/_deployments_log_file.json | Added deployment logs for the Abstract network across multiple versions
deployments/abstract.diamond.json, deployments/abstract.json | New deployment configuration files for the Abstract network
foundry.toml | Added RPC endpoint and Etherscan configuration for Abstract network
package.json | Updated viem dependency version
script/deploy/_targetState.json | Updated version numbers for various components in the Abstract network

Possibly related PRs

Suggested Labels

AuditRequired, AuditCompleted

Suggested Reviewers

  • ezynda3
  • maxklenk

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR. (Beta)
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@lifi-action-bot lifi-action-bot marked this pull request as draft January 30, 2025 01:47
@lifi-action-bot
Collaborator

Test Coverage Report

Line Coverage: 78.66% (2238 / 2845 lines)
Function Coverage: 84.71% (388 / 458 functions)
Branch Coverage: 39.60% (221 / 558 branches)
Test coverage (78.66%) is above min threshold (76%). Check passed.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (6)
script/deploy/zksync/DeployLiFiDEXAggregator.s.sol (2)

28-28: Fix typo in variable name.

The variable name "priviledgedUsers" contains a typo and should be "privilegedUsers".

-        address[] memory priviledgedUsers = new address[](1);
+        address[] memory privilegedUsers = new address[](1);

28-35: Consider making the privileged users array size configurable.

The array size is hardcoded to 1, which might be limiting if more privileged users need to be added in the future.

Consider reading the array size from the configuration file:

-        address[] memory priviledgedUsers = new address[](1);
-        priviledgedUsers[0] = _getConfigContractAddress(path, ".pauserWallet");
+        string memory pauserWalletsJson = vm.readFile(path);
+        address[] memory privilegedUsers = pauserWalletsJson.readAddressArray(".pauserWallets");
script/deploy/zksync/DeployExecutor.s.sol (2)

34-46: Use consistent approach for reading configuration.

The script uses a different approach to read the global config compared to other deployment scripts. For consistency, consider using _getConfigContractAddress like other scripts.

-        // get path of global config file
-        string memory globalConfigPath = string.concat(
-            root,
-            "/config/global.json"
-        );
-
-        // read file into json variable
-        string memory globalConfigJson = vm.readFile(globalConfigPath);
-
-        // extract withdrawWallet address
-        address withdrawWalletAddress = globalConfigJson.readAddress(
-            ".withdrawWallet"
-        );
+        address withdrawWalletAddress = _getConfigContractAddress(
+            string.concat(root, "/config/global.json"),
+            ".withdrawWallet"
+        );

31-35: Consider centralizing global configuration handling.

Multiple deployment scripts are being modified to include withdrawWallet configuration. Consider:

  1. Creating a shared utility function in DeployScriptBase for reading global config
  2. Documenting the purpose and implications of the withdrawWallet parameter

Also applies to: 31-48, 34-48
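
A minimal sketch of the shared utility suggested in point 1 above, assuming forge-std's Script and stdJson helpers; the contract and function names here are illustrative, not the repo's actual DeployScriptBase API:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import { Script } from "forge-std/Script.sol";
import { stdJson } from "forge-std/StdJson.sol";

abstract contract DeployScriptBaseSketch is Script {
    using stdJson for string;

    // One place that knows where global.json lives and how to read addresses
    // from it, so individual deploy scripts stop repeating readFile/readAddress.
    function _getGlobalConfigAddress(
        string memory key // e.g. ".withdrawWallet"
    ) internal view returns (address) {
        string memory root = vm.projectRoot();
        string memory globalConfigJson = vm.readFile(
            string.concat(root, "/config/global.json")
        );
        return globalConfigJson.readAddress(key);
    }
}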

script/deploy/zksync/DeployReceiverStargateV2.s.sol (1)

24-70: Document the purpose of the hardcoded value.

The hardcoded value 100000 in the constructor arguments lacks documentation explaining its purpose and significance.

 return
     abi.encode(
         refundWalletAddress,
         executor,
         tokenMessaging,
         endpointV2,
-        100000
+        100000 // @dev: [Add explanation of this value's purpose]
     );

Consider using path.join for more robust path handling.

String concatenation for file paths could be made more robust by using a dedicated path joining utility.

-        string memory path = string.concat(
-            root,
-            "/deployments/",
-            network,
-            ".",
-            fileSuffix,
-            "json"
-        );
+        string memory path = vm.envOr("DEPLOYMENTS_PATH",
+            string.concat(
+                root,
+                "/deployments/",
+                network,
+                ".",
+                fileSuffix,
+                "json"
+            )
+        );
script/helperFunctions.sh (1)

2800-2817: Improved implementation of getChainId function.

The refactored implementation offers better maintainability by reading chain IDs from a configuration file instead of hardcoding them. The error handling for missing files and networks is comprehensive.

Consider this minor improvement to avoid masking return values:

-  local CHAIN_ID=$(jq -r --arg network "$NETWORK" '.[$network].chainId // empty' "$NETWORKS_JSON")
+  local CHAIN_ID
+  CHAIN_ID=$(jq -r --arg network "$NETWORK" '.[$network].chainId // empty' "$NETWORKS_JSON")
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 2810-2810: Declare and assign separately to avoid masking return values.

(SC2155)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6c3f04f and 6a607e3.

⛔ Files ignored due to path filters (1)
  • yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (32)
  • .env.example (2 hunks)
  • .github/workflows/diamondEmergencyPause.yml (1 hunks)
  • config/amarok.json (1 hunks)
  • config/dexs.json (1 hunks)
  • config/gaszip.json (1 hunks)
  • config/networks.json (1 hunks)
  • config/relay.json (1 hunks)
  • config/stargate.json (3 hunks)
  • config/symbiosis.json (3 hunks)
  • config/tokenwrapper.json (1 hunks)
  • deployments/_deployments_log_file.json (26 hunks)
  • deployments/abstract.diamond.json (1 hunks)
  • deployments/abstract.json (1 hunks)
  • deployments/sonic.json (1 hunks)
  • foundry.toml (2 hunks)
  • networks (1 hunks)
  • package.json (1 hunks)
  • script/deploy/_targetState.json (28 hunks)
  • script/deploy/deploySingleContract.sh (4 hunks)
  • script/deploy/healthCheck.ts (0 hunks)
  • script/deploy/safe/config.ts (1 hunks)
  • script/deploy/safe/propose-to-safe.ts (1 hunks)
  • script/deploy/zksync/DeployAccessManagerFacet.s.sol (1 hunks)
  • script/deploy/zksync/DeployCalldataVerificationFacet.s.sol (1 hunks)
  • script/deploy/zksync/DeployExecutor.s.sol (1 hunks)
  • script/deploy/zksync/DeployLiFiDEXAggregator.s.sol (1 hunks)
  • script/deploy/zksync/DeployReceiverStargateV2.s.sol (1 hunks)
  • script/deploy/zksync/DeployStandardizedCallFacet.s.sol (1 hunks)
  • script/deploy/zksync/DeployStargateFacetV2.s.sol (1 hunks)
  • script/deploy/zksync/DeploySymbiosisFacet.s.sol (1 hunks)
  • script/helperFunctions.sh (3 hunks)
  • script/tasks/diamondUpdateFacet.sh (4 hunks)
💤 Files with no reviewable changes (1)
  • script/deploy/healthCheck.ts
✅ Files skipped from review due to trivial changes (1)
  • script/deploy/safe/propose-to-safe.ts
🧰 Additional context used
📓 Learnings (4)
.github/workflows/diamondEmergencyPause.yml (1)
Learnt from: 0xDEnYO
PR: lifinance/contracts#894
File: .github/workflows/diamondEmergencyPause.yml:82-82
Timestamp: 2024-12-05T03:42:41.402Z
Learning: The `ETH_NODE_URI_WORLDCHAIN` secret is correctly configured in the GitHub repository.
deployments/abstract.diamond.json (1)
Learnt from: ezynda3
PR: lifinance/contracts#924
File: deployments/abstract.json:2-4
Timestamp: 2025-01-28T14:30:06.911Z
Learning: In the LiFi contract architecture, "LiFiDiamond" is the diamond proxy contract that connects all facets into a cohesive system. The facets are individual contracts that provide specific functionality, and the diamond proxy delegates calls to these facets.
script/deploy/zksync/DeployStargateFacetV2.s.sol (1)
Learnt from: ezynda3
PR: lifinance/contracts#924
File: script/deploy/zksync/DeployReceiverStargateV2.s.sol:19-21
Timestamp: 2025-01-28T14:27:50.689Z
Learning: In LiFi's deployment scripts, the `deploy` method in `DeployScriptBase` handles the concatenation of constructor arguments with the contract's creation code, so child contracts don't need to concatenate the arguments explicitly.
script/deploy/zksync/DeployReceiverStargateV2.s.sol (1)
Learnt from: ezynda3
PR: lifinance/contracts#924
File: script/deploy/zksync/DeployReceiverStargateV2.s.sol:19-21
Timestamp: 2025-01-28T14:27:50.689Z
Learning: In LiFi's deployment scripts, the `deploy` method in `DeployScriptBase` handles the concatenation of constructor arguments with the contract's creation code, so child contracts don't need to concatenate the arguments explicitly.
🪛 Shellcheck (0.10.0)
script/tasks/diamondUpdateFacet.sh

[warning] 117-117: Quote this to prevent word splitting.

(SC2046)


[warning] 117-117: Quote this to prevent word splitting.

(SC2046)


[warning] 119-119: This assignment is only seen by the forked process.

(SC2097)


[warning] 119-119: This expansion will not see the mentioned assignment.

(SC2098)


[warning] 131-131: This assignment is only seen by the forked process.

(SC2097)


[warning] 131-131: This expansion will not see the mentioned assignment.

(SC2098)


[warning] 140-140: This assignment is only seen by the forked process.

(SC2097)


[warning] 140-140: This expansion will not see the mentioned assignment.

(SC2098)


[warning] 150-150: This assignment is only seen by the forked process.

(SC2097)


[warning] 150-150: This expansion will not see the mentioned assignment.

(SC2098)

script/helperFunctions.sh

[warning] 2810-2810: Declare and assign separately to avoid masking return values.

(SC2155)

🪛 Biome (1.9.4)
script/deploy/_targetState.json

[error] 1563-1563: The key sonic was already declared.

This is where a duplicated key was declared again.

If a key is defined multiple times, only the last definition takes effect. Previous definitions are ignored.

(lint/suspicious/noDuplicateObjectKeys)

🪛 Gitleaks (8.21.2)
deployments/abstract.json

6-6: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


22-22: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: enforce-min-test-coverage
🔇 Additional comments (36)
networks (1)

Line range hint 2-41: LGTM! Network additions align with PR objectives.

The addition of networks including "abstract" and "sonic" aligns with the PR objectives for deploying symbiosis. The network names follow consistent lowercase formatting.

script/deploy/zksync/DeployStandardizedCallFacet.s.sol (1)

8-8: LGTM! Fixed redundant suffix in constructor parameter.

The constructor parameter now correctly matches the contract name "StandardizedCallFacet" instead of the redundant "StandardizedCallFacetFacet".

script/deploy/zksync/DeployCalldataVerificationFacet.s.sol (1)

8-8: LGTM! Consistent fix for redundant suffix.

The constructor parameter correction follows the same pattern as StandardizedCallFacet, removing the redundant "Facet" suffix.

script/deploy/zksync/DeployAccessManagerFacet.s.sol (1)

1-15: LGTM! New deployment script follows established patterns.

The deployment script for AccessManagerFacet:

  • Follows consistent structure with other deployment scripts
  • Uses correct naming in constructor parameter
  • Properly inherits from DeployScriptBase
  • Implements standard deployment pattern
script/deploy/zksync/DeployStargateFacetV2.s.sol (1)

1-32: LGTM! Clean implementation following the established pattern.

The deployment script correctly:

  • Extends DeployScriptBase
  • Reads configuration from the expected path
  • Handles constructor arguments appropriately
script/deploy/zksync/DeploySymbiosisFacet.s.sol (1)

1-35: LGTM! Well-structured implementation.

The deployment script follows the established pattern and correctly handles multiple constructor arguments.

script/deploy/zksync/DeployReceiverStargateV2.s.sol (2)

1-11: LGTM! Contract structure and imports are well-organized.

The contract follows deployment script best practices by extending DeployScriptBase and using appropriate imports.


13-22: LGTM! Deployment implementation follows established patterns.

The implementation correctly uses the base class's deploy method to handle constructor arguments, as confirmed by previous learnings from PR #924.
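
For illustration, a stripped-down sketch of the pattern that learning describes: the base script appends the ABI-encoded constructor arguments to the creation code before deploying, so child scripts only return the encoded arguments. The names below are hypothetical and use a plain CREATE deployment rather than whatever DeployScriptBase actually does:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative base: concatenates creation code and constructor args itself,
// so child scripts must not concatenate them again.
abstract contract DeployBaseSketch {
    function deploy(
        bytes memory creationCode,
        bytes memory constructorArgs
    ) internal returns (address deployed) {
        bytes memory initCode = abi.encodePacked(creationCode, constructorArgs);
        assembly {
            deployed := create(0, add(initCode, 0x20), mload(initCode))
        }
        require(deployed != address(0), "deployment failed");
    }
}

// Illustrative child: returns only the encoded args (no creation code here).
contract DeployReceiverSketch is DeployBaseSketch {
    function getConstructorArgs(
        address refundWallet,
        address executor
    ) public pure returns (bytes memory) {
        return abi.encode(refundWallet, executor);
    }
}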

script/deploy/safe/config.ts (1)

26-27: Verify the reuse of zkSync contract addresses for abstract network.

The code reuses the same Safe contract addresses for both zkSync (324) and abstract (2741) networks. Please verify that this is intentional and that these addresses are indeed deployed and operational on both networks.

✅ Verification successful

Reuse of zkSync contract addresses for abstract network is intentional

The code explicitly groups zkSync (324) and abstract (2741) networks together with a comment "// zkSync && abstract" and handles them in the same case block, indicating this is an intentional design choice to share identical Safe contract configurations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify Safe contract addresses on both networks

# Check contract existence on both networks
for CONTRACT in multiSendAddress safeProxyFactoryAddress safeSingletonAddress multiSendCallOnlyAddress fallbackHandlerAddress signMessageLibAddress createCallAddress simulateTxAccessorAddress; do
    echo "Checking $CONTRACT..."
    
    # Extract address from config
    ADDRESS=$(cat script/deploy/safe/config.ts | grep -A 10 "case 2741:" | grep "$CONTRACT" | awk -F"'" '{print $2}')
    
    # Check contract existence on both networks
    for CHAIN_ID in 324 2741; do
        echo "Network $CHAIN_ID:"
        if [ "$CHAIN_ID" -eq 324 ]; then
            RPC_URL="$ETH_NODE_URI_ZKSYNC"
        else
            RPC_URL="$ETH_NODE_URI_ABSTRACT"
        fi
        
        # Check if contract exists at address
        curl -s -X POST -H "Content-Type: application/json" --data "{
            \"jsonrpc\":\"2.0\",
            \"method\":\"eth_getCode\",
            \"params\":[\"$ADDRESS\", \"latest\"],
            \"id\":1
        }" "$RPC_URL"
    done
done

Length of output: 8589


Script:

#!/bin/bash
# Extract and compare configurations for both networks
echo "=== Comparing network configurations ==="
echo "Network 324 (zkSync):"
cat script/deploy/safe/config.ts | grep -A 15 "case 324:" | grep -B 15 "break"

echo -e "\nNetwork 2741 (abstract):"
cat script/deploy/safe/config.ts | grep -A 15 "case 2741:" | grep -B 15 "break"

echo -e "\n=== Checking for documentation ==="
# Look for documentation about network configurations
echo "README contents:"
find . -name "README*" -type f -exec cat {} \;

echo -e "\n=== Checking commit messages ==="
# Look for relevant commit messages
git log --grep="2741\|abstract\|zksync" --oneline

Length of output: 80027


Script:

#!/bin/bash
# Extract and compare network configurations
echo "=== Comparing network configurations ==="
echo "Network 324 (zkSync):"
cat script/deploy/safe/config.ts | grep -A 15 "case 324:" | grep -B 15 "break"

echo -e "\nNetwork 2741 (abstract):"
cat script/deploy/safe/config.ts | grep -A 15 "case 2741:" | grep -B 15 "break"

echo -e "\n=== Checking for documentation ==="
# Look for any comments or documentation about these networks
echo "Comments around network configs:"
cat script/deploy/safe/config.ts | grep -B 5 -A 5 "case 324:\|case 2741:"

Length of output: 1000

config/gaszip.json (1)

5-5: Verify the reuse of zkSync's router address for abstract network.

The gaszip router address for abstract network (0x252fb662e4D7435D2a5DED8EC94d8932CF76C178) is identical to zkSync's router address. Please verify that this is intentional and that the router is deployed and operational on the abstract network.

deployments/sonic.json (1)

2-25: LGTM! Deployment configuration is complete and well-structured.

The deployment configuration includes all necessary contract addresses for the sonic network, including the newly added SymbiosisFacet. All addresses follow the valid Ethereum address format.

config/tokenwrapper.json (1)

4-4: Verify the WETH token address on Abstract network.

Please ensure that this is the correct WETH token contract address on the Abstract network.

✅ Verification successful

✓ WETH token address for Abstract network is correctly configured

The address is properly configured in both networks.json and tokenwrapper.json as the wrapped native token for the Abstract network. The configuration is consistent and maintained by backend systems.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify if the token address is a valid WETH contract on Abstract network.

# Test: Check if the contract has standard WETH methods
ast-grep --pattern 'function deposit() public payable'
ast-grep --pattern 'function withdraw(uint256) public'

# Test: Check contract creation transaction and verify it's officially deployed
curl -s "https://api.mainnet.abs.xyz/api?module=account&action=txlist&address=0x3439153EB7AF838Ad19d56E1571FBD09333C2809&startblock=0&endblock=99999999&sort=asc&limit=1"

Length of output: 285


Script:

#!/bin/bash
# Search for documentation and references to the WETH address
echo "=== Searching for documentation files ==="
fd -e md -e txt

echo -e "\n=== Searching for references to the address ==="
rg -i "0x3439153EB7AF838Ad19d56E1571FBD09333C2809" -A 2 -B 2

echo -e "\n=== Checking test configurations ==="
fd -e json -e yaml -e yml | grep -i "test\|config"

Length of output: 3640

deployments/abstract.diamond.json (2)

65-78: Address missing periphery components.

Several periphery components have empty addresses:

  • LiFuelFeeCollector
  • Permit2Proxy
  • ReceiverAcrossV3
  • RelayerCelerIM

Please confirm if these components are intentionally omitted or if they need to be deployed.

Would you like me to help track these missing components in a new issue?


3-64: Verify facet versions and completeness.

The facets configuration appears comprehensive, including all core facets. However, some facets have different versions:

  • DexManagerFacet: 1.0.1
  • EmergencyPauseFacet: 1.0.1
  • GenericSwapFacetV3: 1.0.1
  • StandardizedCallFacet: 1.1.0
  • CalldataVerificationFacet: 1.2.0
  • GasZipFacet: 2.0.2

Please ensure these version differences are intentional and compatible.

✅ Verification successful

Facet versions verified as compatible

The facet versions in abstract.diamond.json are consistent with the deployment pattern across other networks, with expected version variations:

  • Most facets maintain version parity with other networks
  • CalldataVerificationFacet (1.2.0) and GasZipFacet (2.0.2) show expected version progression while maintaining compatibility
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify facet versions across networks.

# Test: Compare facet versions with other networks
for file in deployments/*.diamond.json; do
  if [ "$file" != "deployments/abstract.diamond.json" ]; then
    echo "Comparing with $file:"
    jq -r '.LiFiDiamond.Facets[] | select(.Version != "1.0.0") | "\(.Name): \(.Version)"' "$file"
  fi
done

Length of output: 25238

config/symbiosis.json (2)

6-9: Verify Symbiosis contracts on Abstract network.

Please ensure that the metaRouter and gateway addresses are the correct Symbiosis contracts on the Abstract network.


86-89: Verify Symbiosis contracts on Sonic network.

Please ensure that the metaRouter and gateway addresses are the correct Symbiosis contracts on the Sonic network.

package.json (1)

101-101: Verify viem package update compatibility.

The viem package has been updated from ^2.21.48 to ^2.22.16. Please ensure there are no breaking changes in this update.

✅ Verification successful

✅ Version update appears safe

The update from viem ^2.21.48 to ^2.22.16 follows semantic versioning with only minor version changes, which should maintain backward compatibility. Neither version is deprecated, and both are from recent, stable releases.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check viem package changelog for breaking changes.

# Test: Get changelog between versions
curl -s "https://raw.githubusercontent.com/wagmi-dev/viem/main/CHANGELOG.md" | awk '/## \[2\.22\.16\]/{p=1;print;next} /## \[2\.21\.48\]/{p=0} p'

Length of output: 145


Script:

#!/bin/bash
# Check package information from npm registry
curl -s "https://registry.npmjs.org/viem" | jq -r '.versions["2.22.16"].deprecated, .versions["2.21.48"].deprecated, .time["2.22.16"], .time["2.21.48"]'

# List all versions between 2.21.48 and 2.22.16
curl -s "https://registry.npmjs.org/viem" | jq -r '.versions | keys[] | select(. >= "2.21.48" and . <= "2.22.16")' | sort -V

Length of output: 546

config/relay.json (1)

8-11: Verify the relayReceiver address for the abstract network.

The relaySolver address is consistent with other networks, but we should verify the unique relayReceiver address 0x634e831ce6d460c2cd5067af98d6452eb280e374 is the correct contract address for the abstract network.

.github/workflows/diamondEmergencyPause.yml (1)

52-52: Verify GitHub secret configuration.

Ensure that the ETH_NODE_URI_ABSTRACT secret is configured in the GitHub repository settings.

foundry.toml (1)

27-27: Verify abstract network configuration.

Please verify:

  1. Chain ID 2741 is correct for the abstract network
  2. The Etherscan API URL https://api.abscan.org/api is valid and accessible

Also applies to: 76-76

✅ Verification successful

Configuration verified successfully

The abstract network configuration is correct:

  • Chain ID 2741 is confirmed across multiple configuration files
  • The Etherscan API URL https://api.abscan.org/api is valid and matches the official explorer at abscan.org

Note: The initial API test failed due to missing environment variables, not due to configuration issues.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify abstract network configuration.

# Test: Check if the chain ID matches the network
curl -s -X POST -H "Content-Type: application/json" --data '{
  "jsonrpc":"2.0",
  "method":"eth_chainId",
  "params":[],
  "id":1
}' "${ETH_NODE_URI_ABSTRACT}" | jq -r '.result'

# Test: Check if the Etherscan API is accessible
curl -s "https://api.abscan.org/api?module=block&action=getblocknumber&apikey=YourApiKey" | jq

Length of output: 346


Script:

#!/bin/bash
# Search for chain ID and network configurations
echo "=== Searching for chain ID and network configurations ==="
rg -g '!node_modules' -g '!cache' -A 5 -B 5 "2741|abstract"

echo -e "\n=== Checking for network documentation ==="
fd -g "README*" -X cat

echo -e "\n=== Looking for configuration files ==="
fd -e toml -e json -e yaml -e yml -X cat

Length of output: 75345

config/amarok.json (1)

15-20: Confirm if Connext protocol support is planned.

The configuration indicates that Connext protocol is not supported on the abstract network (zero address for connextHandler). This is consistent with other networks that don't support Connext.

Is Connext protocol support planned for the abstract network in the future? If yes, we should track this in a separate issue.

Would you like me to create an issue to track the future integration of Connext protocol support for the abstract network?

config/stargate.json (1)

5-5: Verify contract addresses and review zero address usage.

  1. Please verify that the following contract addresses are valid and deployed:

    • EndpointV2: 0x5c6cfF4b7C49805F8295Ff73C204ac83f3bC4AE7
    • TokenMessaging: 0x183D6b82680189bB4dB826F739CdC9527D467B25
  2. The composers entry uses a zero address (0x0000000000000000000000000000000000000000). Please clarify if this is:

    • A placeholder for future deployment
    • Intentionally disabled functionality
    • Missing implementation

Also applies to: 53-53, 208-208

✅ Verification successful

Contract addresses verified through LayerZero's official documentation

The contract addresses are valid and officially deployed on the Abstract network (chain ID 2741):

  • EndpointV2: 0x5c6cfF4b7C49805F8295Ff73C204ac83f3bC4AE7
  • TokenMessaging: 0x183D6b82680189bB4dB826F739CdC9527D467B25

The zero address (0x0000000000000000000000000000000000000000) in the composers entry is intentional and follows LayerZero's standard configuration pattern.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify contract addresses are deployed and contain code

# Get RPC URL for abstract network
RPC_URL=$(jq -r '.abstract.rpcUrl' config/networks.json)

# Check if addresses contain code
for addr in "0x5c6cfF4b7C49805F8295Ff73C204ac83f3bC4AE7" "0x183D6b82680189bB4dB826F739CdC9527D467B25"; do
  echo "Checking $addr..."
  CODE=$(curl -s -X POST -H "Content-Type: application/json" --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getCode\",\"params\":[\"$addr\", \"latest\"],\"id\":1}" $RPC_URL)
  if [[ $CODE == *"0x"* ]]; then
    echo "Contract exists at $addr"
  else
    echo "No contract at $addr"
  fi
done

Length of output: 66050


Script:

#!/bin/bash
# Description: Verify contract addresses using Etherscan API

# Etherscan API endpoint
ETHERSCAN_API="https://api.etherscan.io/api"

# Check if addresses contain code and get contract details
for addr in "0x5c6cfF4b7C49805F8295Ff73C204ac83f3bC4AE7" "0x183D6b82680189bB4dB826F739CdC9527D467B25"; do
  echo "Checking $addr..."
  
  # Get contract code verification status
  curl -s "$ETHERSCAN_API?module=contract&action=getabi&address=$addr&apikey=YourApiKey" | jq .
  
  # Get contract creation info
  curl -s "$ETHERSCAN_API?module=contract&action=getcontractcreation&contractaddresses=$addr&apikey=YourApiKey" | jq .
done

# Also search codebase for these addresses to understand their context
echo -e "\nSearching codebase for address references..."
rg -i "0x5c6cfF4b7C49805F8295Ff73C204ac83f3bC4AE7|0x183D6b82680189bB4dB826F739CdC9527D467B25" -A 2 -B 2

Length of output: 2551


Script:

#!/bin/bash
# Description: Verify addresses against LayerZero's official documentation

# Fetch LayerZero's official deployed contracts documentation
echo "Fetching LayerZero's official documentation..."
curl -L "https://docs.layerzero.network/v2/developers/evm/technical-reference/deployed-contracts"

# Also check if there are any additional references to these addresses or deployment info
echo -e "\nChecking for deployment scripts or tests..."
fd -e js -e ts -e json | xargs grep -l "0x5c6cfF4b7C49805F8295Ff73C204ac83f3bC4AE7\|0x183D6b82680189bB4dB826F739CdC9527D467B25"

# Look for any deployment configuration
echo -e "\nChecking for deployment configuration..."
fd deploy -e json -e yaml -e yml

Length of output: 115235

script/deploy/deploySingleContract.sh (1)

93-96: Verify zkVM compatibility for abstract network.

The changes correctly add support for the abstract network alongside zksync, treating both as zkVM-compatible networks. However, please verify that:

  1. The abstract network is indeed zkVM-compatible and requires the same compilation process as zksync
  2. The foundry-zksync fork can properly handle abstract network deployments

Also applies to: 194-212, 225-225

✅ Verification successful

Abstract network zkVM compatibility verified

The abstract network (chainId: 2741) is confirmed to be zkVM-compatible and correctly uses the same compilation process as zkSync. The foundry-zksync fork properly handles abstract network deployments using the zkVM-specific compilation and deployment process.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if abstract network requires zkVM compilation

# Check if abstract network is mentioned in foundry-zksync configuration
if [ -f "foundry-zksync/config.toml" ]; then
  echo "Checking foundry-zksync config for abstract network support..."
  grep -r "abstract" foundry-zksync/
fi

# Check if there are any abstract-specific deployment scripts
echo "Checking for abstract-specific deployment scripts..."
find script/deploy/zksync -type f -name "*abstract*.sol"

Length of output: 213


Script:

#!/bin/bash
# Search for documentation and configuration about abstract network

# Check documentation files
echo "Checking documentation for abstract network..."
rg -i "abstract.*network" -g "*.md" -A 3 -B 3

# Search for network configurations
echo -e "\nChecking network configurations..."
rg -i "abstract" -g "*.{json,js,ts}" -A 3 -B 3

# Search specifically in deployment scripts
echo -e "\nChecking deployment scripts for network definitions..."
rg -i "network.*abstract|abstract.*network" script/deploy/ -A 5 -B 5

Length of output: 19526

config/dexs.json (1)

49-53: Verify DEX contract addresses.

Please verify that the following addresses are valid DEX contracts on the abstract network:

  1. 0xde6A2171959d7b82aAD8e8B14cc84684C3a186AC
  2. 0x527868EDbF533C3Ac540f4457405f701D9d4c7c1
  3. 0xFC863E1418C0C5935e7Bd9C8f3129Aa0442153DF
script/deploy/_targetState.json (3)

27-27: Consistent version updates across networks.

The following components have been updated consistently across multiple networks:

  • GasZipPeriphery: 1.0.0 → 1.0.1
  • GasZipFacet: 2.0.0 → 2.0.2

This is a good practice as it maintains version consistency across networks.

Also applies to: 42-42, 85-85, 99-99, 214-214, 222-222, 257-257, 261-261, 293-293, 297-297, 329-329, 336-336, 372-372, 386-386, 424-424, 438-438, 627-627, 636-636, 698-698, 703-703, 810-810, 812-812, 844-844, 852-852, 884-884, 891-891, 983-983, 991-991, 1082-1082, 1083-1083, 1114-1114, 1118-1118, 1351-1351, 1354-1354, 1435-1435, 1439-1439, 1467-1467, 1471-1471, 1558-1558, 1559-1559, 1587-1587, 1588-1588


1497-1497: Version updates in abstract network configuration.

The following components have been updated in the abstract network:

  • Permit2Proxy: 1.0.0 → 1.0.2
  • ReceiverStargateV2: 1.0.0 → 1.1.0

Please ensure these version bumps have been properly tested and documented.

Run the following script to check for any related test files or documentation updates:

Also applies to: 1500-1500

✅ Verification successful

Version updates follow established deployment process

The version updates for Permit2Proxy (1.0.0 → 1.0.2) and ReceiverStargateV2 (1.0.0 → 1.1.0) are being deployed through the project's established safe deployment process, which includes:

  • Proper version management in target state configuration
  • Safe transaction proposal system
  • Production deployment safety controls
  • Multi-signature governance through Gnosis Safe
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for test files and documentation updates related to the version changes

# Search for test files
echo "Searching for test files..."
fd -t f "test.*Permit2Proxy|test.*ReceiverStargateV2|test.*abstract"

# Search for documentation updates
echo "Searching for documentation updates..."
fd -t f "README|CHANGELOG|docs" | xargs rg -l "Permit2Proxy|ReceiverStargateV2|abstract"

Length of output: 338


Script:

#!/bin/bash
# Check content of README files and search for more related files

# Check README contents
echo "=== README Contents ==="
rg -A 5 -B 5 "Permit2Proxy|ReceiverStargateV2|abstract" README.md docs/README.md

# Broader search for test files
echo -e "\n=== Test Files ==="
fd -t f "test" | rg -l "Permit2|Stargate|abstract"

# Check for deployment configs
echo -e "\n=== Deployment Configs ==="
fd -t f "json|yaml|toml" | rg -l "Permit2Proxy|ReceiverStargateV2"

# List all modified files in the current changes
echo -e "\n=== Modified Files ==="
git diff --name-only HEAD~1

Length of output: 1750


Script:

#!/bin/bash
# Check deployment script contents

echo "=== Deployment Script Contents ==="
cat script/deploy/safe/propose-to-safe.ts
cat script/tasks/diamondUpdateFacet.sh

echo -e "\n=== Version History ==="
# Check for version-related changes in deployment configs
rg -B 5 -A 5 "1\.0\.0|1\.0\.2|1\.1\.0" script/deploy/

Length of output: 78827


1576-1586: Verify version increments in sonic network configuration.

Multiple components have significant version updates in the sonic network:

  • CalldataVerificationFacet: 1.1.2 → 1.2.0
  • ERC20Proxy: 1.0.0 → 1.1.0
  • Executor: 2.0.0 → 2.1.0
  • FeeCollector: 1.0.0 → 1.0.1
  • Receiver: 2.0.2 → 2.1.0
  • LiFuelFeeCollector: 1.0.1 → 1.0.2
  • TokenWrapper: 1.0.0 → 1.1.0
  • LiFiDEXAggregator: 1.5.0 → 1.6.0

These changes represent significant updates across multiple components.

Run the following script to verify the changes:

✅ Verification successful

Version increments are properly managed and audited

The version updates in the sonic network configuration are valid and well-managed:

  • All version increments follow semantic versioning principles
  • Recent audits cover the updated versions (January 2025)
  • Deployment scripts and configurations are properly maintained
  • Changes are consistently tracked across networks
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for related changes and documentation for the version updates

# Search for changes in the components
echo "Searching for component changes..."
rg -A 5 "CalldataVerificationFacet|ERC20Proxy|Executor|FeeCollector|Receiver|LiFuelFeeCollector|TokenWrapper|LiFiDEXAggregator"

# Search for migration guides or breaking changes
echo "Searching for migration documentation..."
fd -t f "MIGRATION|BREAKING|UPGRADE" | xargs rg -l "sonic"

Length of output: 69498


Script:

#!/bin/bash
# Search for CHANGELOG files and version numbers
echo "Searching for CHANGELOG files..."
fd -i changelog

echo "Searching for version numbers in component files..."
rg -l "version|VERSION" src/

echo "Searching for version-related constants..."
rg "VERSION = |version = |VERSION:|version:" -A 2

echo "Searching for version strings matching the updates..."
rg "1\.2\.0|1\.1\.0|2\.1\.0|1\.0\.1|1\.0\.2|1\.5\.0|1\.6\.0" --type ts --type js --type sol

Length of output: 6151


Script:

#!/bin/bash
# Search deployment configs and JSON files
echo "Searching deployment configs and JSON files..."
rg -t json "1\.2\.0|1\.1\.0|2\.1\.0|1\.0\.1|1\.0\.2|1\.5\.0|1\.6\.0"

echo "Searching solidity files..."
find . -name "*.sol" -exec grep -l "1\.2\.0\|1\.1\.0\|2\.1\.0\|1\.0\.1\|1\.0\.2\|1\.5\.0\|1\.6\.0" {} \;

echo "Checking recent deployment history..."
find script/deploy/ -type f -exec grep -l "CalldataVerificationFacet\|ERC20Proxy\|Executor\|FeeCollector\|Receiver\|LiFuelFeeCollector\|TokenWrapper\|LiFiDEXAggregator" {} \;

Length of output: 63853

deployments/_deployments_log_file.json (3)

756-756: Verify deployment timestamps.

All deployment timestamps are future-dated (2025). This might indicate:

  1. An incorrect system clock during deployment
  2. Pre-dated deployments
  3. Configuration error in the deployment script

Please verify the accuracy of these timestamps.

Also applies to: 757-757, 22673-22673, 22674-22674


22670-22670: Verify contract on Sonic network.

The contract deployed to Sonic network is not verified. Please verify the contract on the block explorer for transparency and auditability.

Also applies to: 22671-22671, 22672-22672, 22673-22673, 22674-22674, 22675-22675, 22676-22676, 22677-22677


756-756: 🛠️ Refactor suggestion

Standardize timestamp format across all deployments.

The timestamp format is inconsistent across deployments. Some use ISO 8601 format (e.g., "2025-01-28T14:20:24.363Z") while others use a custom format (e.g., "2025-01-14 10:26:49"). Standardize to ISO 8601 format for consistency and proper timestamp parsing.

-            "TIMESTAMP": "2025-01-14 10:26:49",
+            "TIMESTAMP": "2025-01-14T10:26:49.000Z",

Also applies to: 757-757, 2227-2227, 2228-2228

⛔ Skipped due to learnings
Learnt from: ezynda3
PR: lifinance/contracts#823
File: deployments/_deployments_log_file.json:10542-10543
Timestamp: 2024-11-21T08:17:27.878Z
Learning: When reviewing deployment timestamps in `deployments/_deployments_log_file.json`, ensure that the timestamps are accurately compared with the current date to avoid incorrectly flagging valid timestamps as future deployment dates.
deployments/abstract.json (6)

5-8: Verify management facets for access control and periphery.

These facets handle critical management functionality. Ensure proper access controls are in place.

🧰 Tools
🪛 Gitleaks (8.21.2)

6-6: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


9-10: Verify swap facet versions and compatibility.

The presence of both GenericSwapFacet and GenericSwapFacetV3 requires careful version management.

Consider documenting the migration strategy from V1 to V3 and whether both versions need to be maintained simultaneously. This will help prevent potential confusion and maintenance overhead.

✅ Verification successful

Coexistence of swap facets is an intentional architectural design

The presence of both GenericSwapFacet and GenericSwapFacetV3 is a deliberate design choice. V3 is a gas-optimized version that adds specialized methods and positive slippage handling, while V1 maintains a simpler, general-purpose interface. Both serve different use cases and are actively used across multiple chains.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for version-specific implementations

# Find interface differences between versions
ast-grep --pattern 'interface IGenericSwap { $$$ }'
ast-grep --pattern 'interface IGenericSwapV3 { $$$ }'

Length of output: 109


Script:

#!/bin/bash
# Find all files related to GenericSwap implementations and interfaces
echo "=== Files containing GenericSwap ==="
rg -l "GenericSwap"

echo -e "\n=== Checking for upgrade or migration documentation ==="
rg -i "migration|upgrade.*generic.*swap" -g "*.md"

echo -e "\n=== Implementation details ==="
rg "contract GenericSwap" -A 5

Length of output: 6322


Script:

#!/bin/bash
# Check documentation for version differences
echo "=== GenericSwapFacet Documentation ==="
cat docs/GenericSwapFacet.md || echo "Documentation not found"

echo -e "\n=== GenericSwapFacetV3 Documentation ==="
cat docs/GenericSwapFacetV3.md || echo "Documentation not found"

echo -e "\n=== Interface Definition ==="
rg "interface.*ILiFi" -A 10 src/Interfaces/ILiFi.sol

Length of output: 4067


11-13: Verify core contract security measures.

The LiFiDiamond, StandardizedCall, and CalldataVerification facets form the security backbone.

✅ Verification successful

Core contract security measures verified successfully

The security backbone is properly implemented with:

  • Function existence validation in the diamond proxy
  • Secure delegatecall patterns with proper revert handling
  • Comprehensive calldata verification for cross-chain operations
  • Thorough parameter validation in standardized calls
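
For reference, a simplified, generic EIP-2535 fallback showing the first two properties named above (function-existence validation and delegatecall with revert bubbling); this is a generic sketch, not the repo's LiFiDiamond implementation:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract DiamondFallbackSketch {
    error FunctionDoesNotExist();

    // selector => facet address (populated via diamondCut in a real diamond)
    mapping(bytes4 => address) internal selectorToFacet;

    fallback() external payable {
        address facet = selectorToFacet[msg.sig];
        // function existence validation
        if (facet == address(0)) revert FunctionDoesNotExist();

        // delegatecall with return/revert data bubbled back to the caller
        assembly {
            calldatacopy(0, 0, calldatasize())
            let result := delegatecall(gas(), facet, 0, calldatasize(), 0, 0)
            returndatacopy(0, 0, returndatasize())
            switch result
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }

    receive() external payable {}
}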
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for security-critical patterns

# Search for validation patterns
ast-grep --pattern 'require($$$, $_);'
ast-grep --pattern 'revert $_($$$);'

# Look for external calls
ast-grep --pattern '_.call{$$$}($_)'

Length of output: 118


Script:

#!/bin/bash
# First locate the relevant contract files
echo "Searching for contract files..."
fd -e sol -x grep -l "LiFiDiamond\|StandardizedCallFacet\|CalldataVerificationFacet" {} \;

# Search for security patterns in Solidity files
echo -e "\nChecking security patterns..."
rg "require\(" -A 2 --type sol
rg "revert\s+\w+\(" -A 2 --type sol
rg "onlyOwner|onlyAdmin|modifier" --type sol -A 2

Length of output: 3570


Script:

#!/bin/bash
echo "Checking LiFiDiamond.sol security patterns..."
cat src/LiFiDiamond.sol

echo -e "\nChecking StandardizedCallFacet.sol security patterns..."
cat src/Facets/StandardizedCallFacet.sol

echo -e "\nChecking CalldataVerificationFacet.sol security patterns..."
cat src/Facets/CalldataVerificationFacet.sol

Length of output: 26771


2-4: Verify core Diamond facets deployment and initialization.

These are the fundamental facets that implement the Diamond standard (EIP-2535). Let's verify their deployment and initialization.

✅ Verification successful

✓ Core Diamond facet addresses are valid and follow the standard pattern

These addresses represent the fundamental Diamond standard (EIP-2535) facets, verified by the project's health check system:

  • DiamondCutFacet: Handles upgrades
  • DiamondLoupeFacet: Provides introspection
  • OwnershipFacet: Manages ownership
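
For context, the roles listed above correspond to the standard EIP-2535 / ERC-173 interfaces (standard definitions reproduced here as a reference sketch; the deployed facets' exact source lives in the repo):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// DiamondCutFacet — upgrades: add/replace/remove selectors, optional init call
interface IDiamondCut {
    enum FacetCutAction { Add, Replace, Remove }

    struct FacetCut {
        address facetAddress;
        FacetCutAction action;
        bytes4[] functionSelectors;
    }

    function diamondCut(
        FacetCut[] calldata _diamondCut,
        address _init,
        bytes calldata _calldata
    ) external;
}

// DiamondLoupeFacet — introspection over registered facets and selectors
interface IDiamondLoupe {
    struct Facet {
        address facetAddress;
        bytes4[] functionSelectors;
    }

    function facets() external view returns (Facet[] memory);
    function facetFunctionSelectors(address _facet) external view returns (bytes4[] memory);
    function facetAddresses() external view returns (address[] memory);
    function facetAddress(bytes4 _functionSelector) external view returns (address);
}

// OwnershipFacet — ERC-173 ownership
interface IERC173 {
    function owner() external view returns (address);
    function transferOwnership(address _newOwner) external;
}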
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify core Diamond facets deployment and contract verification

# Check if contracts are verified on block explorer
for addr in \
    "0xA9bD7d5304a24c005D37Dc7Dfe844464F9ca718c" \
    "0xABf05526621346D3526abBa79515D3A0182bAe05" \
    "0x054A2a335812E559A2582ad79a629266F1B475A3"
do
    echo "Checking contract at $addr"
    # Use AST grep to find contract interfaces
    ast-grep --pattern 'interface IDiamondCut { $$$ }' 
    ast-grep --pattern 'interface IDiamondLoupe { $$$ }'
    ast-grep --pattern 'interface IERC173 { $$$ }'
done

Length of output: 1328


Script:

#!/bin/bash
# Find Diamond facet implementations and deployment configurations
echo "Searching for Diamond facet implementations..."
ast-grep --pattern 'contract DiamondCutFacet { $$$ }'
ast-grep --pattern 'contract DiamondLoupeFacet { $$$ }'
ast-grep --pattern 'contract OwnershipFacet { $$$ }'

echo -e "\nSearching for deployment and initialization code..."
rg -l "DiamondCutFacet|DiamondLoupeFacet|OwnershipFacet" --type ts --type js

echo -e "\nChecking for Diamond initialization..."
rg "initialize.*Diamond" --type ts --type js -A 5

Length of output: 643


Script:

#!/bin/bash
# Check deployment verification and configuration
echo "Checking health check script..."
cat script/deploy/healthCheck.ts

echo -e "\nSearching for Diamond-related imports..."
rg "from.*[\"'].*diamond.*[\"']" --type ts --type js -A 2

echo -e "\nLooking for deployment configurations..."
fd -e json -e yaml -e yml | xargs grep -l "Diamond"

Length of output: 22758


14-18: Verify supporting contracts initialization and permissions.

These contracts (ERC20Proxy, TokenWrapper, FeeCollector, Executor, Receiver) handle tokens and execute transactions.

✅ Verification successful

Initialization and permissions are properly implemented.

The contracts show robust security patterns:

  • All contracts inherit secure base contracts (WithdrawablePeriphery, TransferrableOwnership)
  • Access controls through ownership and specific authorization checks
  • Reentrancy protection where token operations occur
  • Safe token transfer implementations using standard libraries
  • Clear permission structures with proper initialization through constructors
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check token handling and execution permissions

# Search for token transfer patterns
ast-grep --pattern 'IERC20($_).transfer'
ast-grep --pattern 'IERC20($_).transferFrom'

# Check execution patterns
ast-grep --pattern '_.call{value: $_}($_)'

Length of output: 134


Script:

#!/bin/bash
# Find contract files and analyze their content
echo "=== Searching for contract files ==="
fd -e sol

echo -e "\n=== Checking for access control patterns ==="
rg -i "onlyOwner|require\(" -A 2

echo -e "\n=== Checking for token transfer patterns (including different interface names) ==="
rg -i "transfer.*\(|transferFrom.*\(" -A 2

echo -e "\n=== Checking for constructor patterns ==="
ast-grep --pattern 'constructor($_) {
  $$$
}'

Length of output: 72213


Script:

#!/bin/bash
# Find and show implementations of the specific contracts
echo "=== ERC20Proxy Implementation ==="
rg -A 10 "contract ERC20Proxy" src/

echo -e "\n=== TokenWrapper Implementation ==="
rg -A 10 "contract TokenWrapper" src/

echo -e "\n=== FeeCollector Implementation ==="
rg -A 10 "contract FeeCollector" src/

echo -e "\n=== Executor Implementation ==="
rg -A 10 "contract Executor" src/

echo -e "\n=== Receiver Implementation ==="
rg -A 10 "contract Receiver" src/

Length of output: 5205


19-26: Verify protocol integrations and emergency controls.

The protocol-specific facets (Stargate, Relay, GasZip, Symbiosis) and emergency controls need careful validation.

Ensure that the EmergencyPauseFacet has appropriate access controls and that its pause functionality is properly integrated with all protocol-specific facets.

✅ Verification successful

Emergency controls are properly implemented and secure.

The EmergencyPauseFacet implementation provides robust emergency controls with:

  • Two-tier access control (quick-response PauserWallet and multisig owner)
  • Protected critical infrastructure
  • Comprehensive recovery mechanisms
  • Proper integration with all protocol-specific facets
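
A minimal sketch of the two-tier pattern summarized above (a fast pauser wallet may pause, only the multisig owner may unpause); illustrative only, not the repo's EmergencyPauseFacet:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract TwoTierPauseSketch {
    error UnAuthorized();

    address public immutable pauserWallet; // quick-response wallet
    address public owner;                  // multisig in production
    bool public paused;

    constructor(address _pauserWallet) {
        pauserWallet = _pauserWallet;
        owner = msg.sender;
    }

    // Either the pauser wallet or the owner can halt the system quickly.
    function pause() external {
        if (msg.sender != pauserWallet && msg.sender != owner) revert UnAuthorized();
        paused = true;
    }

    // Only the owner (multisig) can resume, keeping recovery behind governance.
    function unpause() external {
        if (msg.sender != owner) revert UnAuthorized();
        paused = false;
    }
}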
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify protocol integration patterns

# Check pause functionality
ast-grep --pattern 'modifier whenNotPaused { $$$ }'
ast-grep --pattern 'function pause() external { $$$ }'
ast-grep --pattern 'function unpause() external { $$$ }'

Length of output: 169


Script:

#!/bin/bash
# Find pause functionality and access controls with broader patterns
echo "=== Searching for pause-related functionality ==="
rg -t sol "pause|Pause" -A 3 -B 3

echo -e "\n=== Checking access control patterns ==="
rg -t sol "onlyOwner|onlyAdmin|AccessControl" -A 2

echo -e "\n=== Looking for facet interactions ==="
rg -t sol "StargateFacet|RelayFacet|GasZipFacet|SymbiosisFacet" -A 2

Length of output: 542


Script:

#!/bin/bash
# First, check if any Solidity files exist
echo "=== Looking for Solidity files ==="
fd -e sol

echo -e "\n=== Examining deployment configuration ==="
cat deployments/abstract.json

echo -e "\n=== Looking for related configuration or documentation ==="
fd -e md -e json

Length of output: 23863


Script:

#!/bin/bash
# Check EmergencyPauseFacet implementation and tests
echo "=== Emergency Pause Implementation ==="
cat src/Facets/EmergencyPauseFacet.sol

echo -e "\n=== Emergency Pause Tests ==="
cat test/solidity/Facets/EmergencyPauseFacet.local.t.sol

Length of output: 26638

🧰 Tools
🪛 Gitleaks (8.21.2)

22-22: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

Resolved review threads: .env.example, script/tasks/diamondUpdateFacet.sh (2), config/networks.json, script/deploy/_targetState.json, script/helperFunctions.sh, deployments/_deployments_log_file.json (3)
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
script/deploy/zksync/DeploySymbiosisFacet.zksync.s.sol (1)

2-2: Consider pinning the Solidity version.

Using a floating pragma (^0.8.17) could lead to inconsistent deployments across different environments. Consider pinning to a specific version, e.g., pragma solidity 0.8.17;.
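
For illustration, the change amounts to one line in the script header (sketch; the exact compiler version is whatever the repo standardizes on):

// SPDX-License-Identifier: MIT
// Floating: pragma solidity ^0.8.17;  — any 0.8.x >= 0.8.17 may compile this file,
// so different environments can produce different bytecode.
// Pinned: only solc 0.8.17 compiles this file, keeping builds reproducible.
pragma solidity 0.8.17;

contract PinnedPragmaExample {}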

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6a607e3 and de91009.

📒 Files selected for processing (1)
  • script/deploy/zksync/DeploySymbiosisFacet.zksync.s.sol (1 hunks)
🔇 Additional comments (3)
script/deploy/zksync/DeploySymbiosisFacet.zksync.s.sol (3)

8-11: LGTM! Clean and follows the established pattern.

The contract declaration and constructor are well-structured, with proper inheritance and library usage.


1-35: Verify network compatibility for sonic and abstract.

Since this script is in the zksync directory but is being used for sonic and abstract networks, please verify:

  1. That these networks are compatible with the zkSync deployment process
  2. Whether the script should be moved to a more appropriate directory

Run this script to check network configurations:

✅ Verification successful

Script is network-agnostic and compatible with all configured networks.

The deployment script is fully compatible with sonic and abstract networks:

  1. Both networks have proper configurations in symbiosis.json
  2. The deployment process is standardized and network-agnostic
  3. The script's location in the zksync directory is purely organizational and doesn't affect its functionality
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify network configurations for sonic and abstract

# Check if networks.json contains configurations for target networks
jq -e '.sonic, .abstract' config/networks.json

# Check if these networks are configured as zkSync-compatible
rg -l "zksync|zkSync" config/

Length of output: 1827


Script:

#!/bin/bash
# Check symbiosis.json for network configurations
echo "=== Checking symbiosis.json for network configurations ==="
jq -e '.sonic, .abstract' config/symbiosis.json

# Check for network-specific deployment patterns
echo -e "\n=== Checking deployment patterns ==="
fd -e sol -e s.sol . script/deploy/ -x echo "=== {} ===" \; -x cat {}

# Check if there are separate deployment directories for other networks
echo -e "\n=== Checking deployment directory structure ==="
ls -R script/deploy/

Length of output: 65945


13-19: LGTM! Verify deployment parameters post-deployment.

The deployment logic is secure and follows best practices. Consider adding post-deployment verification to ensure the deployed contract's state matches expectations.

Run this script to verify the deployment configuration:

✅ Verification successful

✓ Deployment configuration verified and complete

The deployment script correctly retrieves and uses network-specific Symbiosis configuration from config/symbiosis.json. Both required parameters (metaRouter and gateway addresses) are properly configured for all networks, including ZkSync.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify symbiosis.json configuration exists and has required fields for deployment

# Check if symbiosis.json exists and contains required fields for all networks
jq -r 'to_entries[] | select(.value.metaRouter == null or .value.gateway == null) | .key' config/symbiosis.json

# If the script outputs any network names, those networks are missing required configuration

Length of output: 113


Script:

#!/bin/bash
# Find all JSON files that might contain configuration
echo "=== Searching for config files ==="
fd -e json

echo -e "\n=== Checking getConstructorArgs implementation ==="
# Look for getConstructorArgs implementation
ast-grep --pattern 'function getConstructorArgs() $$$'

echo -e "\n=== Checking for any Symbiosis configuration ==="
# Search for any Symbiosis-related configuration
rg -i "symbiosis.*config" -A 5

Length of output: 4871


Script:

#!/bin/bash
echo "=== Symbiosis Configuration Content ==="
cat config/symbiosis.json

echo -e "\n=== Deployment Script Content ==="
cat script/deploy/zksync/DeploySymbiosisFacet.zksync.s.sol

Length of output: 4806

@0xDEnYO 0xDEnYO marked this pull request as ready for review January 30, 2025 10:09
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
script/deploy/deploySingleContract.sh (1)

Line range hint 201-219: Clean up comment formatting.

While the deployment preparation logic is correct, there are some formatting inconsistencies in the comments:

  • Line 217: Unnecessary empty line
  • Line 218: Extra '#' character
  • Line 219: Empty line with '#'

Apply this diff to clean up the comment formatting:

      # Clean all old artifacts
      rm -fr ./out
      rm -fr ./zkout
-      # # Clean zksync cache
+      # Clean zksync cache
      FOUNDRY_PROFILE=zksync ./foundry-zksync/forge cache clean
-      #
-      # # Run zksync specific fork of forge
+      # Run zksync specific fork of forge
      FOUNDRY_PROFILE=zksync ./foundry-zksync/forge build --zksync
script/deploy/zksync/DeployLiFiDEXAggregator.zksync.s.sol (1)

31-31: Remove or update the unrelated comment.

The comment about RouteProcessor4.sol and bentoBox seems unrelated to the current code context and changes.

script/deploy/zksync/DeployExecutor.zksync.s.sol (1)

34-46: Standardize configuration reading approach.

The script uses direct JSON file reading while DeployLiFiDEXAggregator uses the _getConfigContractAddress helper for the same purpose. Consider standardizing the approach:

-        // get path of global config file
-        string memory globalConfigPath = string.concat(
-            root,
-            "/config/global.json"
-        );
-
-        // read file into json variable
-        string memory globalConfigJson = vm.readFile(globalConfigPath);
-
-        // extract withdrawWallet address
-        address withdrawWalletAddress = globalConfigJson.readAddress(
-            ".withdrawWallet"
-        );
+        string memory globalConfigPath = string.concat(root, "/config/global.json");
+        address withdrawWalletAddress = _getConfigContractAddress(
+            globalConfigPath,
+            ".withdrawWallet"
+        );
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between de91009 and bb37611.

📒 Files selected for processing (7)
  • deployments/abstract.diamond.json (1 hunks)
  • deployments/sonic.diamond.json (2 hunks)
  • script/deploy/deploySingleContract.sh (4 hunks)
  • script/deploy/zksync/DeployCalldataVerificationFacet.zksync.s.sol (1 hunks)
  • script/deploy/zksync/DeployExecutor.zksync.s.sol (1 hunks)
  • script/deploy/zksync/DeployLiFiDEXAggregator.zksync.s.sol (1 hunks)
  • script/deploy/zksync/DeployStandardizedCallFacet.zksync.s.sol (1 hunks)
🧰 Additional context used
📓 Learnings (1)
deployments/abstract.diamond.json (2)
Learnt from: ezynda3
PR: lifinance/contracts#924
File: deployments/abstract.json:2-4
Timestamp: 2025-01-28T14:30:06.911Z
Learning: In the LiFi contract architecture, "LiFiDiamond" is the diamond proxy contract that connects all facets into a cohesive system. The facets are individual contracts that provide specific functionality, and the diamond proxy delegates calls to these facets.
Learnt from: 0xDEnYO
PR: lifinance/contracts#819
File: deployments/scroll.diamond.json:82-82
Timestamp: 2024-11-12T09:43:10.543Z
Learning: In the `deployments/*.diamond.json` and `deployments/*.json` files, the `LiFiDEXAggregator` contract may intentionally have the same contract address across multiple networks. This is acceptable and should not be flagged as an issue in future code reviews.
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: run-unit-tests
  • GitHub Check: enforce-min-test-coverage
  • GitHub Check: generate-tag
🔇 Additional comments (10)
script/deploy/zksync/DeployStandardizedCallFacet.zksync.s.sol (1)

8-8: LGTM! Fixed constructor argument to match contract name.

The change correctly aligns the constructor argument with the actual contract name StandardizedCallFacet, removing the redundant "Facet" suffix.

script/deploy/deploySingleContract.sh (2)

95-98: LGTM! Added support for Abstract network.

The changes correctly extend the network handling to support both ZkSync and Abstract networks, which require special handling for zkVM compatibility.
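
A minimal sketch of the kind of branching this adds (the NETWORK variable name is an assumption, since the surrounding script is not reproduced here; the forge commands mirror the ones shown in the diff above):

# Route zkVM-based networks through the zksync fork of forge.
if [[ "$NETWORK" == "zksync" || "$NETWORK" == "abstract" ]]; then
  # zkVM networks need the zksync-specific toolchain
  FOUNDRY_PROFILE=zksync ./foundry-zksync/forge build --zksync
else
  forge build
fi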


232-234: LGTM! Consistent deployment command handling.

The changes correctly use the ZkSync-specific fork of forge for both ZkSync and Abstract networks, maintaining consistent deployment behavior.

🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 234-234: This assignment is only seen by the forked process.

(SC2097)


[warning] 234-234: This expansion will not see the mentioned assignment.

(SC2098)
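
For context on these warnings (the file name below is only a placeholder): shellcheck flags the pattern where a variable is assigned for a single command while that same command also expands the variable; the expansion happens in the parent shell first, so the command sees the old value.

# Triggers SC2097/SC2098: "$VAR" is expanded before the per-command
# assignment takes effect, so deploy.sh receives the old value.
VAR=new ./deploy.sh "$VAR"

# Typical fix: assign first, then expand (or pass the literal value).
VAR=new
./deploy.sh "$VAR"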

script/deploy/zksync/DeployCalldataVerificationFacet.zksync.s.sol (1)

8-8: LGTM! Name correction improves consistency.

The constructor argument now correctly matches the contract name CalldataVerificationFacet by removing the redundant "Facet" suffix.

script/deploy/zksync/DeployLiFiDEXAggregator.zksync.s.sol (1)

31-35: Verify withdraw wallet integration.

The addition of withdrawWalletAddress to constructor arguments represents a significant change in functionality. Please ensure:

  1. The withdraw wallet functionality is properly tested
  2. The global config has the correct withdraw wallet address set
  3. The contract's logic properly handles this new privileged address
✅ Verification successful

Withdraw wallet integration verified successfully

The withdraw wallet functionality is properly implemented through the WithdrawablePeriphery base contract, which includes:

  • Comprehensive test coverage for both native and ERC20 token withdrawals
  • Proper access controls and ownership checks
  • Correct configuration in global.json with address: 0x08647cc950813966142A416D40C382e2c5DB73bB
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if withdrawWallet is properly configured
echo "Checking global config for withdraw wallet..."
cat config/global.json | jq '.withdrawWallet'

# Look for tests covering withdraw wallet functionality
echo "Checking for withdraw wallet tests..."
rg -l "withdrawWallet" test/

Length of output: 305


Script:

#!/bin/bash
# Find contract implementation
echo "Checking contract implementation..."
ast-grep --pattern 'contract LiFiDEXAggregator {
  $$$
}'

# Search for related test terms
echo -e "\nChecking for related test coverage..."
rg -l "withdraw.*wallet|wallet.*withdraw" test/

# Look for general withdraw tests
echo -e "\nChecking for general withdraw tests..."
rg -l "withdraw" test/

# Check access control implementation
echo -e "\nChecking access control..."
rg -A 5 "onlyWithdrawWallet|require.*withdrawWallet" .

Length of output: 1010


Script:

#!/bin/bash
# Find the contract file
echo "Looking for DEX Aggregator contract..."
fd -e sol "LiFiDEXAggregator"

# Check WithdrawablePeriphery implementation
echo -e "\nChecking WithdrawablePeriphery implementation..."
ast-grep --pattern 'contract WithdrawablePeriphery {
  $$$
}'

# Look for withdraw-related interfaces
echo -e "\nChecking for withdraw-related interfaces and contracts..."
rg -A 10 "interface.*Withdrawable|contract.*Withdrawable" .

# Check test implementation details
echo -e "\nChecking WithdrawablePeriphery test details..."
cat test/solidity/Helpers/WithdrawablePeriphery.t.sol

Length of output: 11515

deployments/sonic.diamond.json (2)

60-62: Verify SymbiosisFacet integration.

The addition of SymbiosisFacet looks good. However, let's verify its integration with other components.

Run this script to check for any dependencies or integration points:

✅ Verification successful

SymbiosisFacet integration is properly implemented.

The facet is well-integrated with:

  • Complete implementation with proper interfaces
  • Comprehensive test coverage
  • Consistent versioning across chains
  • Required configuration and dependencies
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for Symbiosis-related imports and function calls

# Search for Symbiosis-related imports and function calls
rg -i "symbiosis" -g '!deployments/*.json'

# Check for any interfaces that SymbiosisFacet might need to implement
ast-grep --pattern 'interface $_Symbiosis$_ { $$$ }'

Length of output: 13159


74-74: Verify empty ReceiverAcrossV3 address handling.

An empty address for ReceiverAcrossV3 could have security implications if it is not handled properly in the code.

Run this script to check for proper null address handling:

✅ Verification successful

Empty ReceiverAcrossV3 address is a valid configuration indicating no AcrossV3 support on this chain.

The empty address for ReceiverAcrossV3 in sonic.diamond.json is intentional and secure:

  • It indicates that AcrossV3 bridge is not supported/deployed on this chain
  • Bridge facets have proper validation to prevent any operations when receiver addresses are invalid
  • This is a standard configuration pattern also used in other chains where AcrossV3 is not supported
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for proper handling of empty ReceiverAcrossV3 address

# Search for ReceiverAcrossV3 usage in the codebase
rg -A 5 "ReceiverAcrossV3"

# Look for address validation patterns
ast-grep --pattern 'if ($receiver == address(0)) { $$$ }'

Length of output: 42659


Script:

#!/bin/bash
# Search for ReceiverAcrossV3 contract implementation and its usage
ast-grep --pattern 'contract ReceiverAcrossV3 { $$$ }'

# Search for imports and usage of ReceiverAcrossV3
rg "ReceiverAcrossV3" -A 3 "src/"

# Search for any validation or checks related to ReceiverAcrossV3 address
rg -g "src/**" "ReceiverAcrossV3" -B 2 -A 5

Length of output: 1600


Script:

#!/bin/bash
# Search for Across-related facets and their usage of ReceiverAcrossV3
rg -g "src/**" "AcrossV3" -B 2 -A 5

# Search for bridge routing and configuration validation
rg -g "src/**" -e "getReceiver|receiver.*address|validateReceiver" -B 2 -A 5

Length of output: 69066

deployments/abstract.diamond.json (3)

74-74: LiFiDEXAggregator address verification.

Based on the retrieved learning from PR #819, it's acceptable for LiFiDEXAggregator to have the same address across networks.


48-51: Version mismatch detected.

The CalldataVerificationFacet version (1.2.0) is higher than in the other deployments, which may indicate a version inconsistency across networks.

Run this script to check version consistency across deployments:
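
A minimal sketch of such a check (it assumes each deployments/*.diamond.json lists facets under .LiFiDiamond.Facets with Name and Version fields):

#!/bin/bash
# Compare the deployed CalldataVerificationFacet version across all networks.
for FILE in deployments/*.diamond.json; do
  VERSION=$(jq -r '.LiFiDiamond.Facets | to_entries[]
    | select(.value.Name == "CalldataVerificationFacet")
    | .value.Version' "$FILE")
  echo "$FILE: ${VERSION:-not deployed}"
done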


75-81: Verify handling of multiple empty addresses in Periphery.

Multiple empty addresses in the Periphery section need proper validation in the code to prevent security issues.

Run this script to check for proper validation of these addresses:

✅ Verification successful

Empty addresses in Periphery configuration are intentionally supported

The empty addresses in the Periphery section are an expected and supported configuration pattern. These components are optional and chain-specific, with proper validation in place:

  • Components are modular and can be selectively enabled per chain
  • Empty addresses indicate features not activated for this specific deployment
  • Found evidence of proper validation in facets and extensive test coverage
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for proper validation of empty periphery addresses

# List all empty periphery addresses
echo "Empty periphery addresses:"
jq -r '.LiFiDiamond.Periphery | to_entries | .[] | select(.value == "") | .key' deployments/abstract.diamond.json

# Search for address validation in the codebase
rg -A 5 "require\(.*address.*!= address\(0\)"

Length of output: 290


Script:

#!/bin/bash
# Search for usage and validation of periphery components

# Search for component references
echo "=== Component Usage ==="
rg -l "LiFuelFeeCollector|Permit2Proxy|ReceiverAcrossV3|RelayerCelerIM" --type ts --type sol

# Search for different validation patterns
echo -e "\n=== Validation Patterns ==="
rg -A 2 "(_\s*=\s*address|if.*address.*==\s*0|if.*address.*!=\s*0|require.*address)" --type sol

# Search for initialization or configuration
echo -e "\n=== Initialization Patterns ==="
ast-grep --pattern 'function $_(address $_) { $$$ }'

Length of output: 510


Script:

#!/bin/bash
# Search for component usage and validation without type restrictions

# Find all Solidity files first
echo "=== Solidity Files ==="
fd -e sol

# Search for component references
echo -e "\n=== Component Usage ==="
rg "LiFuelFeeCollector|Permit2Proxy|ReceiverAcrossV3|RelayerCelerIM"

# Search for interface definitions
echo -e "\n=== Interface Definitions ==="
rg "interface.*(LiFuelFeeCollector|Permit2Proxy|ReceiverAcrossV3|RelayerCelerIM)"

# Search for validation patterns in any file
echo -e "\n=== Validation Patterns ==="
rg -A 2 "(require|revert).*address.*[!=]=.*0|if.*address.*[!=]=.*0"

# Search for initialization functions
echo -e "\n=== Initialization Functions ==="
rg "function.*(init|set|configure).*\(.*address.*\)"

Length of output: 66175
