Fx-evm mainnet

To all validators, just to be clear: this is a hard-fork upgrade. Kindly read the instructions for v2.2.0 carefully.

There will not be any voting before the validator upgrade. This upgrade enables certain features of the blockchain; to activate these features, the governance proposal will take place after the upgrade.

2 Likes

The testnet upgrade countdown can be seen here.

We will be using this link for mainnet as well.

There will be a unified explorer: https://explorer.functionx.io/
Just add “evm” after the home page URL -> Function X StarScan

3 Likes

Dear Validators,

  • The mainnet upgrade for v2.2 is ready
  • Both mainnet and testnet have to be upgraded to v2.2.0 this time, by Monday 1 Aug, around 10am (GMT+8)
  • You may find the upgrade guide for both binaries and docker here
  • Information on the cross-chain bridges can be found here
  • We have prepared an Upgrade FAQ
  • The Upgrade countdown timer can be found here
6 Likes

Hi @Richard!

As explained on TG, here are the issues I found (on testnet upgrade only):

  1. My testnet validator appears INACTIVE in dhobyghaut-explorer
  2. Its latest block is 3920765 (25/JUL/2022)
  3. Its voting power still appears as 138.
  4. When trying to unjail, response is Error: rpc error: code = InvalidArgument desc = failed to execute message; message index: 0: validator not jailed; cannot be unjailed: invalid request
  5. Looks like almost all testnet validators are now inactive…
  6. Signing info appears as follows (see the query sketch after this list):
  index_offset: "1983519"
  jailed_until: "2022-01-21T03:33:02.215566730Z"
  missed_blocks_counter: "62"
  start_height: "370259"
  tombstoned: false
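
For reference, on a Cosmos-SDK chain like f(x)Core these are typically inspected with commands like the following; the exact subcommands and the dhobyghaut chain-id are my assumptions, so adjust to your setup:

  # Query the validator's signing info (consensus key taken from the local node)
  fxcored query slashing signing-info $(fxcored tendermint show-validator)
  # The unjail attempt that returned "validator not jailed" above
  fxcored tx slashing unjail --from <your-key-name> --chain-id dhobyghaut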

fxcored version on my testnet node is release/v2.2.x-673b54983e0b52feb69ed4382f7ec44fbd80104c
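
In case it helps others compare, that string can be printed with the standard version subcommand (assuming fxcored follows the usual Cosmos-SDK CLI layout, which the output above suggests):

  # Show the version/commit the running binary was built from
  fxcored version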

These are the kinds of non-stop error messages I’m getting:
a) Jul 28 16:14:10 fxcored[20262]: 4:14PM ERR Stopping peer for error err="blockchainReactor validation error: wrong Block.Header.AppHash. Expected 6C93556A367CAF87A7F53539D58E3E646CDDFC44C174460E5A3AEB2AC07E59CD, got F6D442CD6D6449A66A4F61E244EF1E5EC1EC03FDE5D70DF401376A42C3EFB64F" module=p2p peer={"Data":{},"Logger":{}} server=node
b) Jul 28 16:14:09 fxcored[20262]: 4:14PM ERR error while stopping peer error="already stopped" module=p2p server=node

Could you please ask the team to investigate?
Thanks!

1 Like
  • @FrenchXCore there is a need for another round of upgrades on testnet. So this time, both mainnet and testnet have to be upgraded. There was a fix for a bug from the last upgrade, which is why testnet needs to be upgraded again. @ClaudioxBarros
  • @Fox_Coin @Aravan please refer to the Upgrade countdown timer, which can be found here. For all future upgrades, the countdown timer link will be the same.
  • @ClaudioxBarros you can run commands from another terminal through your validator node. Just use the CLI, specify the --node flag, and input the IP address of your node; see the example command below.
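
A minimal sketch (the IP address, port, and query are placeholders; 26657 is the default Tendermint RPC port):

  # Check node status from a remote machine
  fxcored status --node tcp://<your-node-ip>:26657
  # The same flag works for queries, e.g. listing validators
  fxcored query staking validators --node tcp://<your-node-ip>:26657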

Apologies to the validators who have already upgraded their nodes. We forgot to merge the latest commit into the release/v2.2.x branch, so you will have to upgrade again.

@ClaudioxBarros @FrenchXCore @KuzoIV @nexus

If you run git log, you should be able to see the tag v2.2.1 inside:
commit 5eef07630e89ad0bd786fb08fa0fb937e5c84d67 (HEAD → release/v2.2.x, tag: v2.2.1)
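
A quick way to verify after pulling (plain git, nothing fxcored-specific):

  # Fetch tags and confirm the checked-out commit carries v2.2.1
  git fetch --tags
  git log -1 --decorate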

4 Likes

Thanks for the notification. I processed the upgrade steps again and now the DAOverse node has the latest changes :white_check_mark:.

1 Like

Your version is v2.1.1. The height of the last testnet upgrade was 3918000, and you did not upgrade to v2.2.0 before this height. Past that height, as soon as any upgraded logic is triggered by a transaction, the un-upgraded node will fork.

Solution:

  1. either roll back the data and upgrade to v2.2.0 before this height, or
  2. (my recommendation) clear out all the old data and use the snapshot; see the sketch below
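
A minimal sketch of option 2, assuming a systemd-managed binary install, the default ~/.fxcore home directory, and a snapshot tarball containing the data/ directory (the URL and filename are placeholders):

  # Stop the node and wipe chain data (keeps config and keys)
  sudo systemctl stop fxcored
  fxcored unsafe-reset-all   # or `fxcored tendermint unsafe-reset-all`, depending on the SDK version
  # Restore the published snapshot into the node home
  wget https://<snapshot-host>/testnet-20220725.tar.gz
  tar -xzf testnet-20220725.tar.gz -C ~/.fxcore/
  sudo systemctl start fxcored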

Yes, I downgraded my TestNet validator to v2.1.1 to see if it was working. It wasn’t.
Then I started a full node reinstall.

@Richard @Chloechloe,

I successfully updated our mainnet nodes to v2.2.1 (release d5d5…).
However, I still have the same issue on the testnet node, as quoted above.
Here’s some more of the error log:

Jul 29 11:08:52 fxcored[46911]: panic: Failed to start consensus state: found signature from the same key
Jul 29 11:08:52 fxcored[46911]: 11:08AM INF found signature from the same key height=3920765 idx=26 module=consensus server=node sig={"block_id_flag":2,"signature":"u917oh36nEU7I01n6x7kT/W2bGpohh8jleN3GSbsejV>
Jul 29 11:08:52 fxcored[46911]: 11:08AM ERR error on catchup replay; proceeding to start state anyway err="cannot replay height 3920766. WAL does not contain #ENDHEIGHT for 3920765" module=consensus server=no>

My testnet node keeps getting stuck at height 3920765.

Thx,
FrenchXCore

1 Like

In your log I saw a line containing:
Jul 29 11:08:52 fxcored[46911]: panic: Failed to start consensus state: found signature from the same key

The fix for this was given before:

What should I do if “panic: Failed to start consensus state: found signature from the same key” appears in the log?
First check whether the priv_validator_key.json configured on the node is also in use by another node. If it is not, run the command fxcored config config.toml consensus.double_sign_check_height 0 to disable the double-sign check, and then restart the node.
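
Spelled out as commands (the systemd restart is an assumption based on the journal-style logs above):

  # Disable the double-sign height check (only safe if the key is not in use elsewhere)
  fxcored config config.toml consensus.double_sign_check_height 0
  # Restart the node
  sudo systemctl restart fxcored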

Could you try whether that fixes it, and screenshot any other error that persists?

Hi @lancelai!

So, my testnet node won’t progress above block #3920765.
Block #3920766 still generates a “commit is for a block we do not know about; set ProposalBlock=nil” error.

What did I do?

  1. I completely reset a new testnet node, and did not enter my validator information at that stage (using release/v2.2.x).
  2. I downloaded the 20220725 testnet snapshot and ran the node to full sync.
  3. I started it and waited…

I don’t know what happened around that block time, but it seems Tendermint is refusing to validate it, making it impossible to progress beyond block #3920765:
fxcored[70034]: 8:47AM ERR CONSENSUS FAILURE!!! err="+2/3 committed an invalid block: wrong Block.Header.AppHash. Expected 6C93556A367CAF87A7F53539D58E3E646CDDFC44C174460E5A3AEB2AC07E59CD, go>

It looks as if more than 2/3 of the validators validated block #3920766 with a different AppHash.

Would it be possible to publish a new testnet snapshot so that I can restart my testnet validator past that point?
On the other hand, I think it would be interesting to dig into what exactly happened, because it would hurt badly if this were to happen on mainnet.

Thanks.
@FrenchXCore

My testnet faced the same issues. I fixed it by clearing out the old data and using the 25072022 snapshot too. Mine was Docker, so I removed all images/containers and followed the upgrade steps.

Now it is syncing…

Alright, I’ll retry with a brand new installation from scratch!!

@lancelai

Alright…
I started a brand new server and release/v2.2.x node from scratch and tried to sync it from the latest 25/JUL/2022 snapshot, and I still get the same error at block #3920765.

I also tried to sync without a snapshot from Block #0… However, something strange caught my eye.
The first line below shows a testnet/v2.0.x software version:

fxcored[82297]: 3:01PM INF ABCI Handshake App Info hash= height=0 module=consensus protocol-version=0 server=node software-version=testnet/v2.0.x-46bca00af59765e75902a1e7f07915c87b3b3bfd
fxcored[82297]: 3:01PM INF ABCI Replay Blocks appHeight=0 module=consensus server=node stateHeight=0 storeHeight=0

I’ll just give up for now and wait for next week’s snapshot to update my testnet node.

It would also be really nice for the team to provide mainnet and testnet “state-sync” RPC access points (as explained here); a rough sketch of the node-side setup follows.
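
For illustration, a hypothetical sketch of the node-side state-sync configuration, reusing the fxcored config pattern quoted earlier in this thread; every endpoint, height, and hash below is a placeholder:

  # Enable Tendermint state-sync and point it at two trusted RPC servers
  fxcored config config.toml statesync.enable true
  fxcored config config.toml statesync.rpc_servers "https://rpc1.example:26657,https://rpc2.example:26657"
  # Trust a recent height and its block hash, fetched from a trusted RPC endpoint
  fxcored config config.toml statesync.trust_height 3900000
  fxcored config config.toml statesync.trust_hash "<BLOCK_HASH_AT_TRUST_HEIGHT>"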

Regards/
@FrenchXCore

1 Like

It’s less than 2.5 hours to the upgrade height. If there is anyone who has yet to upgrade and needs some sort of tech support, please reply to this thread!

2 Likes

I’ll be posting the updates in the main validators thread. For every upgrade, I will spin up a new discussion thread.

Hi, I get this error when trying to start the node after the update:

panic: precommit step; +2/3 prevoted for an invalid block: wrong Block.Header.AppHash. Expected B9CEBD3B3CCBE9F77BE15A9D2AFBD9FDDA5400D239B363D961831FCDD3B00D22, got 3C3C055D157644D7E3A689CD6413B5AA433DB12E77D49041F55DC5723798E18F

any advice?

@AmirKaplan: you probably made the same mistake as I did. Check here.

@lancelai: sorry for wasting your precious time. I did check all my mainnet nodes, but forgot to check this path issue on the testnet one…

Regards/
@FrenchXCore

1 Like