Conversation
Also adds support for unique IPFS peer ID generation
```go
postDeployFuncs := phase.PostDeploymentFuncs
logrus.Infof("scheduling phase %d...", i)
resp, _, err := t.nomad.Jobs().Register(job, nil)
if err == nil {
```
Is there a reason to not just return if there is an error and move the happy path logic out of the if statement?
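An early return would keep the happy path unindented. A minimal self-contained sketch of the pattern; the `register` helper here is a hypothetical stand-in for `t.nomad.Jobs().Register`, not the PR's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// register is a hypothetical stand-in for t.nomad.Jobs().Register;
// it returns an evaluation ID or an error.
func register(jobID string) (string, error) {
	if jobID == "" {
		return "", errors.New("missing job ID")
	}
	return "eval-1234", nil
}

// scheduleJob returns early on error, so the happy path
// reads top-to-bottom without nesting.
func scheduleJob(jobID string) error {
	evalID, err := register(jobID)
	if err != nil {
		return fmt.Errorf("registering job %s: %w", jobID, err)
	}
	fmt.Printf("scheduled job %s in evaluation %s\n", jobID, evalID)
	return nil
}

func main() {
	if err := scheduleJob("ipfs-test"); err != nil {
		fmt.Println("error:", err)
	}
}
```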
```go
if err == nil {
	logrus.Infof("rendering topology in evaluation id %s took %s", resp.EvalID, resp.RequestTime.String())
	_, err = deploymentFile.WriteString(fmt.Sprintf("%s\n", *job.ID))
	if err == nil {
```
Same question here about moving the happy path out of the if?
testlab/node/ipfs/ipfs.go
Outdated
```go
task.Services = []*napi.Service{
	{
		Name: p2pd.Libp2pServiceName,
```
Is this referring to the IPFS swarm endpoint?
I tried running the test2.json ipfs example in my vagrant cluster and it worked on the first try! It scheduled all 4 ipfs Docker images onto the 3rd machine in the cluster, though; I was expecting them to be spread across my 3 virtual machines.
Ah, I see: a "TaskGroup" will schedule all of its tasks on the same client. I tried creating 6 different "ipfspeers" groups, and it schedules them on multiple clients, but I still had one client with no tasks. Running "nomad alloc status -verbose " shows zero for "job-anti-affinity", so that might be a factor.
I think it might need a "spread" in there somewhere to evenly schedule the docker instances across the clients. |
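Nomad's `spread` stanza should do exactly that, biasing the scheduler toward distributing allocations across distinct clients. A minimal sketch of a job-level spread on the node ID; the job, group, and image names here are placeholders, not the actual testlab job spec:

```hcl
job "ipfspeers" {
  datacenters = ["dc1"]

  # Spread allocations evenly across distinct client nodes.
  spread {
    attribute = "${node.unique.id}"
    weight    = 100
  }

  group "peer" {
    count = 6

    task "ipfs" {
      driver = "docker"
      config {
        image = "ipfs/go-ipfs:latest"
      }
    }
  }
}
```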
This is an early-stage implementation of IPFS support. The changes, summarized:

- A conduit for JavaScript `pull-stream`-style control flow, where subsequent stages can depend on previous stages being scheduled at the time of task generation.
- The `Task` signature now includes a Consul client, for more imperative dynamic configuration.

There are some known problems I will work past this week:

- IPFS nodes need to log their peer IDs in consul like libp2p daemons do. This will allow us to create fully qualified `Bootstrap` addresses with peer IDs embedded.
- The `Task` function should change to `TaskGroup`. This will be more flexible for a variety of reasons. Most importantly, it will enable us to configure each individual node directly. Whereas p2pd can schedule one `Task` with N instances, relying on p2pd's ability to generate a unique peer ID for each peer, we can't rely on that for IPFS, which has a rather "all-or-nothing" policy when it comes to configuration, particularly w.r.t. the Docker image. For IPFS, our task group will create N tasks with custom configurations.

This is now ready for review. It can definitely use cleaning, but it correctly schedules IPFS clusters!
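The per-node configuration idea can be sketched as a loop that builds N distinct tasks instead of one task with a count of N. The `Task` struct and env key below are hypothetical placeholders, not testlab's actual types:

```go
package main

import "fmt"

// Task is a hypothetical stand-in for a scheduler task definition.
type Task struct {
	Name string
	Env  map[string]string
}

// ipfsTasks builds n tasks, each with its own IPFS configuration,
// so per-node settings never collide the way a single count=n task would.
func ipfsTasks(n int) []Task {
	tasks := make([]Task, 0, n)
	for i := 0; i < n; i++ {
		tasks = append(tasks, Task{
			Name: fmt.Sprintf("ipfs-%d", i),
			Env: map[string]string{
				// Hypothetical env key: give each node a distinct swarm port.
				"IPFS_SWARM_PORT": fmt.Sprintf("%d", 4001+i),
			},
		})
	}
	return tasks
}

func main() {
	for _, t := range ipfsTasks(3) {
		fmt.Println(t.Name, t.Env["IPFS_SWARM_PORT"])
	}
}
```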