NCFP-10: Adding reachability checks for queued verifiers

Currently, a lot of queued verifiers seem to be offline and show up in red on the status page; they just send occasional join messages and stay in the queue that way.

While this is allowed by the code, it creates an unfair advantage over regular users running the verifier on a VPS.

I propose adding reachability checks for queued verifiers: if they fail to respond to a certain message too often, their waiting time is reset.

This is just a symbolic NCFP over 1 Nyzo to see if there is support for this proposal.

6 Likes

That basically already exists (?), but you are suggesting making it stricter than it is now…

Since a “reachability” status has to reach consensus among all in-cycle verifiers, every one of them has to communicate with all of the queue nodes more often than they do now. Will there be any issues with the (not insignificant) resource increase this imposes on queue nodes and verifier nodes?

From what I recall of the history of cycle entry (very basic):

In the beginning, the queue nodes had to be in sync with the cycle, which led to cycle instability.
Then they had to be in sync among each other, which led to queue instability.
Then they didn’t have to track the chain, which led to them not being fast enough to sync and enter when they were chosen.
Then we got sentinel-assisted entry because “regular users” couldn’t join.
Now we have massively efficient batches of queue nodes running more cheaply than the “regulars”.

If the reachability rules do get stricter, in the end doesn’t this only make it a bit more expensive for the ones with the “unfair” advantage, at the cost of potentially increased queue instability and more resource needs for the in-cycle verifiers (and the queue, but that’s what you want here)? Even with stricter rules, the user running a single Nyzo-tenant VPS will still pay far more than a containerized setup.

Besides, if one can rotate a node’s availability with one pattern now, how do you measure the difference in their costs if the pattern just needs to be slightly modified for the new rules? I would love some input from the core devs on possibilities and trade-offs; maybe they can now see something that wasn’t visible while everything evolved to the current entry process. Right now it sounds like a potential hit-and-miss… at the cost of development time.

3 Likes

It will always be an arms race to a certain degree. The question is when this leads to a situation where the PoD idea breaks. We currently see that people have a setup running that gives them an “unfair” advantage. 50% of all joins recently are done by one party, which is no immediate problem for the cycle itself, due to its current size, but it harms the perception and the chances of other individuals to join.
I would like to see this NCFP-10 bring the chances back to a level where no one has >50% of joins due to an “unfair” advantage.

1 Like

@gunray there is no test at the moment.
Any IP that sends you the right message gets added to your local “nodes” file, and that’s your personal view of the queue. This is currently clearly being abused.

The idea here is not to define the precise test or how to react, yet, but as a first step to decide whether this behaviour is OK or not.

To stick with simple core rules, like Nyzo relies on for everything:
right now, the condition to be in the queue and eligible for a join is very relaxed, and could be written as
“be able to send a TCP message from a given IPv4 address”

You don’t need a verifier running on that IP, and you don’t need full control of that IP to do that.
Proxies, VPNs, web hosting… many third-party services allow you to do that with IPs you do not own.
This also allows some technical users to send messages from full class C (and larger) IP networks, with no real verifier running, just fake message sending.

If the proposal were to pass, then we could discuss ways to move to a slightly more restrictive basic rule, which could be written as
“be able to send a TCP message from a given IPv4 address and answer status messages”
for instance.

This would imply no consensus change: every verifier would still have its own local “nodes” file; just the core rules for being added to or removed from this file would be very slightly different.
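
To illustrate the difference, here is a minimal Java sketch of the two rules as local predicates. This is not actual Nyzo code; the helper methods are hypothetical stand-ins for the verifier’s real message handling.

class QueueAdmission {

    // Current relaxed rule: any IPv4 address that delivers a well-formed
    // join message over TCP gets added to the local "nodes" file.
    boolean eligibleNow(String sourceIp, byte[] joinMessage) {
        return isWellFormedJoin(joinMessage);
    }

    // Proposed stricter rule: the address must also answer status messages
    // when probed.
    boolean eligibleProposed(String sourceIp, byte[] joinMessage) {
        return isWellFormedJoin(joinMessage) && answersStatusRequests(sourceIp);
    }

    private boolean isWellFormedJoin(byte[] joinMessage) {
        return joinMessage != null && joinMessage.length > 0;  // placeholder validation
    }

    private boolean answersStatusRequests(String sourceIp) {
        return false;  // placeholder: would send a status request over TCP with a timeout
    }
}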

So, the current question is just: “are we OK with the current abuses of the relaxed join rule, or do we want to make it harder for mass fakers to join the network?”
There is no need to waste time on countermeasures now if the cycle is fine with the current workings; hence this NCFP to find out first.

2 Likes

As for these class C blocks of 256 IPs, I don’t think they are fake nodes; as far as I can tell, they are using containers like Docker and paying for the servers and IPs. The question then is whether using Docker to build Nyzo nodes is allowed. If it is, then there is no meaningful discussion here! If it’s not allowed, then we should make another set of rules.
If the requirement is longer online time, then they can just increase the Docker nodes’ online time; that doesn’t essentially change the phenomenon.
I’m not a tech guy and only have a few candidate nodes ($2/VPS) in the queue now, but I don’t think this is unfair. If you want low costs, then be diligent and use the Docker approach, or go negotiate a low-priced VPS with a service provider. And IPv4 addresses have always been a rarity; that hasn’t changed.
But the current proposal is kind of similar to, for example, mining PoW coins: I only have a CPU and you have a GPU, I don’t think that’s fair, I think a GPU is a fake CPU and GPU mining is abuse, so let’s vote to boycott GPUs.
(Regarding avoiding centralization: the odds are still the same per node. I can accept the $2 cost per VPS, and with 200 VPSes I have a chance of one node joining the cycle per month. It is an evolutionary process, and when we use alias private keys, there will be candidate-node service providers. They can offer low-cost candidate nodes to users, similar to this type of service, but with the user holding the original private key. That’s another topic.)

I don’t think it’s a Docker/non-Docker issue.
It’s a working-verifier vs. non-working-verifier question.
The class C networks themselves are not the main concern; they can be OK and legit.
But having red verifiers joining 80% of the time and migrating on join is not right.
Back in time, we considered having a quality check on queued verifiers, to make sure they would be able to sustain the in-cycle rate when joining.
This has lost traction with sentinels.
In the current case, if these IPs were running real verifiers, they would all be white, would not need to migrate on join, and no questions would be asked.
It’s obvious by looking at the queue that most of it is composed of class C networks in red state.
That speaks for itself.

1 Like

The point is that these IPs do not respond all the time. They are online for a short amount of time, send the join message, then go offline for hours.

This can be done with any software and can save a large portion of the operating costs.

The original idea was to reward the queued verifiers for waiting, but these are not waiting; they show up occasionally because this is what the current implementation allows. It’s not what it was meant to be.
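
Rough, illustrative arithmetic (assumed numbers, not measurements): a process that only needs to be reachable for, say, 5 minutes every 2 hours is online about 4% of the time (5/120), so one always-on machine could stand in for roughly 24 such “queued verifiers”, cutting the per-node operating cost by a corresponding factor.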

I’m willing to support the proposal, but would like specific options to discuss (being thoughtful about this doesn’t seem simple), such as considering a few points:
1. It should not add an unnecessary burden to the cycle mesh.
2. Technical and non-technical approaches are both welcome (e.g. candidate nodes must hold some coins?).
It should not be counterproductive.
I look forward to more interesting responses.

1 Like

I proposed this on Discord a few days ago:

All in-cycle verifiers pick a random queue verifier every 5 seconds and send a request to it. They keep counters of how many requests were answered and how many failed.
If a queue verifier has waited 30+ days and has less than a 75% success rate, all of its stats are reset: counters and waiting time.

Compared to the usual traffic, this is <1% extra load on the cycle.
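
A minimal Java sketch of that loop, assuming a hypothetical QueueNode type and sendPing() helper (this is the shape of the idea, not actual Nyzo code):

import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;

class QueueNode {
    long queueJoinTimestamp;   // when this node entered (or re-entered) the queue
    int requestsSent;
    int requestsAnswered;

    double successRate() {
        return requestsSent == 0 ? 1.0 : (double) requestsAnswered / requestsSent;
    }

    void resetStats() {        // reset both counters and waiting time
        queueJoinTimestamp = System.currentTimeMillis();
        requestsSent = 0;
        requestsAnswered = 0;
    }
}

class ReachabilityChecker implements Runnable {

    private static final long THIRTY_DAYS = TimeUnit.DAYS.toMillis(30);
    private static final double MINIMUM_SUCCESS_RATE = 0.75;

    private final List<QueueNode> queue;
    private final Random random = new Random();

    ReachabilityChecker(List<QueueNode> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            if (!queue.isEmpty()) {
                // Pick a random queued node and probe it.
                QueueNode node = queue.get(random.nextInt(queue.size()));
                node.requestsSent++;
                if (sendPing(node)) {
                    node.requestsAnswered++;
                }
                // After 30+ days of waiting, a node below 75% success loses its stats.
                long waited = System.currentTimeMillis() - node.queueJoinTimestamp;
                if (waited > THIRTY_DAYS && node.successRate() < MINIMUM_SUCCESS_RATE) {
                    node.resetStats();
                }
            }
            try {
                Thread.sleep(5000L);  // one probe every 5 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private boolean sendPing(QueueNode node) {
        return false;  // placeholder: would send a small TCP message and await a reply
    }
}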

1 Like

yxiuz: This is only to vote on whether or not this should be researched.
Not to enforce a specific solution, but to decide whether looking for a solution is worth it or not.

If the cycle thinks it’s fair, there is no need to spend time on that.
So, we’re not at specific proposals, constraints or techniques yet, just: is this something to research or not?
I’d prefer we do not get into specific implementations that would muddy the question.

Several specific proposals will be researched and made IF this first step gets a YES vote.
Nothing will be done if it gets a NO vote.

(Re-reading the proposal, I realize that’s not exactly what it says. Maybe amending it so it reads that way, i.e. whether to research valid solutions or not, rather than blindly enforcing one right away, could lead to easier adoption.)

1 Like

IMO, they are. It’s like a legal system: the backbone of any legal system can’t cover all cases. But when people find something “not right”, they set a precedent and “update” the law with more acts, decrees and covenants to strengthen the system and prevent future damage.

Disagree. To do that, they have to order much stronger servers to handle the jobs.

Disagree. A lot of coins have switched algorithms from ASIC/FPGA to GPU, or GPU to CPU (do you really need me to name them?), because those projects are community-driven, and so is Nyzo.

Fallacy. Your argument should be: with $400/month you have 200 tickets to join the cycle, while they have 2048 tickets (10 times more than normal). And that number is not acceptable to me.

It’s weird that you say you aren’t a tech guy but still want to discuss the technical solution in depth.
I’d say it does. If a node is fully functional, there are more than 20 different types of messages it has to answer, so you tell me how many ways there are to check it.
But first, let’s get this NCFP approved, and then we’ll talk about how to do it.

1 Like

I’ll go as far as to propose that there could be a simple proof-of-work test run on the machines that are trying to join the cycle: something that would guarantee that they will be sufficiently powerful to handle what is required.

This would also be helpful for users in determining whether it makes any sense for them to try to join with a particular VPS configuration.

I still remember trying to join with a few really weak machines at the beginning and being kicked right after joining.
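
As a rough illustration of what such a test could look like, here is a self-contained Java sketch; the 5-second window and the hash-rate threshold are made-up numbers, not a calibrated standard.

import java.nio.ByteBuffer;
import java.security.MessageDigest;

public class JoinBenchmark {

    // Hypothetical minimum sustained rate; a real threshold would have to
    // be calibrated against what in-cycle duty actually requires.
    private static final long MINIMUM_HASHES_PER_SECOND = 200_000L;

    public static boolean machineIsSufficient() throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] data = new byte[32];
        long endTime = System.currentTimeMillis() + 5000L;  // 5-second window
        long hashes = 0;
        while (System.currentTimeMillis() < endTime) {
            ByteBuffer.wrap(data).putLong(hashes);  // vary the input each round
            data = digest.digest(data);
            hashes++;
        }
        return hashes / 5 >= MINIMUM_HASHES_PER_SECOND;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(machineIsSufficient()
                ? "This machine looks strong enough to attempt a join."
                : "This machine would likely be dropped soon after joining.");
    }
}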

One of the specific solutions to this problem can be discussed in NCFP-11:
https://forum.nyzo.community/t/ncfp-11-candidate-verifier-must-hold-in-some-nyzo-coin/300

I don’t have deep insight into the individual exploits; does this solve the connectivity/capacity issue?

Here are the commands you need to vote on this proposal. Run them on a sentinel that watches your in-cycle verifiers:

Vote for it:
cd nyzoVerifier ; java -jar build/libs/nyzoVerifier-1.0.jar co.nyzo.verifier.scripts.CycleTransactionSignScript sig_gc_cW0NIE.eEeSd1eBqEQGcNMoX1QaW3tR0v85PDJX15smZcD8GZJ43a6qQtDQjfYA.xtsBBnSLLZyDv9tyjGgtK4mom 1

Vote against it:
cd nyzoVerifier ; java -jar build/libs/nyzoVerifier-1.0.jar co.nyzo.verifier.scripts.CycleTransactionSignScript sig_gc_cW0NIE.eEeSd1eBqEQGcNMoX1QaW3tR0v85PDJX15smZcD8GZJ43a6qQtDQjfYA.xtsBBnSLLZyDv9tyjGgtK4mom 0

Analysis and solutions will make this proposal more attractive (I know this is a symbolic proposal :wink:).

Randomness is in God’s hands. Don’t meddle with the entrance mechanism, but analyze it to ensure proper randomness is achieved.

You can help the process of selection by helping other people come closer to randomness.