This repository was archived by the owner on Apr 11, 2020. It is now read-only.

Enhance task building, add IPFS support#20

Open
bigs wants to merge 5 commits into master from feat/ipfs-node

Conversation

@bigs
Contributor

@bigs bigs commented Jul 23, 2019

This is an early-stage implementation of IPFS support. The changes, summarized:

  • Update the task building API to use a Haskell conduit or JavaScript pull-stream style control flow, where subsequent stages can depend on previous stages being scheduled at the time of task generation.
  • Extend Task signature to include a Consul client, for more imperative dynamic configuration.
  • Add initial IPFS plugin, based on Docker.

There are some known problems I will work past this week:

  • IPFS nodes need to log their peer IDs in Consul, as the libp2p daemons do. This will allow us to create fully qualified bootstrap addresses with peer IDs embedded.
  • The Task function should change to TaskGroup. This will be more flexible for a variety of reasons; most importantly, it will enable us to configure each individual node directly. Whereas p2pd can schedule one Task with N instances, relying on p2pd's ability to generate a unique peer ID for each peer, we can't rely on that for IPFS, which has a rather "all-or-nothing" policy when it comes to configuration, particularly w.r.t. the Docker image. For IPFS, our task group will create N tasks with custom configurations.

This is now ready for review. It can definitely use cleaning, but it correctly schedules IPFS clusters!

@bigs bigs requested a review from jimpick July 23, 2019 00:24
@bigs bigs changed the title [WIP] Enhance task building, add IPFS support Enhance task building, add IPFS support Jul 24, 2019
postDeployFuncs := phase.PostDeploymentFuncs
logrus.Infof("scheduling phase %d...", i)
resp, _, err := t.nomad.Jobs().Register(job, nil)
if err == nil {
Contributor


Is there a reason to not just return if there is an error and move the happy path logic out of the if statement?

if err == nil {
logrus.Infof("rendering topology in evaluation id %s took %s", resp.EvalID, resp.RequestTime.String())
_, err = deploymentFile.WriteString(fmt.Sprintf("%s\n", *job.ID))
if err == nil {
Contributor


Same question here about moving the happy path out of the if?


task.Services = []*napi.Service{
{
Name: p2pd.Libp2pServiceName,
Contributor


Is this referring to the IPFS swarm endpoint?

@jimpick
Collaborator

jimpick commented Jul 25, 2019

I tried running the test2.json ipfs example in my vagrant cluster and it worked first try!

It scheduled all 4 ipfs docker images onto the 3rd machine in the cluster though - I was expecting them to be spread across my 3 virtual machines.

@jimpick
Collaborator

jimpick commented Jul 25, 2019

Ah, I see, a "TaskGroup" will schedule all of its tasks on the same client.

I tried creating 6 different "ipfspeers" groups, and it schedules them on multiple clients, but I still had one client with no tasks.

Running "nomad alloc status -verbose " shows zero for "job-anti-affinity", so that might be a factor.

@jimpick
Collaborator

jimpick commented Jul 25, 2019

I think it might need a "spread" in there somewhere to evenly schedule the docker instances across the clients.

https://github.com/hashicorp/nomad/blob/master/website/source/guides/operating-a-job/advanced-scheduling/spread.html.md
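Going by the linked guide, the spread stanza might look something like this HCL fragment. This is a hedged sketch, not code from the PR: the job and group names are illustrative, and the exact attribute to spread on would need verifying against the cluster.

```hcl
job "ipfs" {
  datacenters = ["dc1"]

  # Prefer an even distribution of allocations across client nodes
  # instead of the scheduler's default bin-packing behavior.
  spread {
    attribute = "${node.unique.name}"
    weight    = 100
  }

  group "ipfspeers" {
    count = 6
    # ...
  }
}
```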


3 participants