Conversation
```python
credentials_schema: Dict[str, Any] = {
    "type": "object",
    "properties": {
        "credentials_path": {
```
This should support passing in credentials as a JSON string instead of a file path, since we won't be able to use a path when e.g. adding this data source from a Web form (and this schema is used to generate it).
The command-line ergonomics will be awkward, but we can add a special `from_commandline` method that loads and injects the JSON file when invoked from `sgr mount`, e.g. https://github.com/splitgraph/splitgraph/blob/c37291267ad60d085703b4a3068a8f39a70d2d7d/splitgraph/ingestion/csv/__init__.py#L299-L305
I've now added the optional JSON-string credentials parameter and implemented the `from_commandline` conversion.
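For reference, the conversion could look something like this. This is a minimal sketch of the approach described above (load the file given on the command line and inject its contents as the JSON string); the function and parameter names are illustrative, not the actual plugin API:

```python
def from_commandline_credentials(params: dict) -> dict:
    """Hypothetical helper: if "credentials_path" points at a JSON file
    (the command-line case), read it and inject its contents as the
    "credentials" JSON string that the schema/web form expects."""
    params = dict(params)  # don't mutate the caller's dict
    path = params.pop("credentials_path", None)
    if path is not None:
        with open(path, "r") as credentials_file:
            params["credentials"] = credentials_file.read()
    return params
```

This keeps a single canonical parameter (`credentials` as a JSON string) internally, with the file path only existing at the command-line boundary.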
```python
        },
        "dataset_name": {
            "type": "string",
            "title": "Big Query dataset",
```
It's branded as BigQuery -- can you change it in the descriptions, as well as change the plugin name / package names to `bigquery` instead of `big_query`?
Certainly, I was split about that as well.
Converted the JSON file credentials parameter to the raw parameter when present. Also aligned all entity names to `bigquery`, without the underscore.
```python
@classmethod
def get_name(cls) -> str:
    return "Google Big Query"
```
```diff
-    return "Google Big Query"
+    return "Google BigQuery"
```
```python
@classmethod
def get_description(cls) -> str:
    return "Query data in GCP Big Query datasets"
```
```diff
-    return "Query data in GCP Big Query datasets"
+    return "Query data in GCP BigQuery datasets"
```
```python
credentials_schema: Dict[str, Any] = {
    "type": "object",
    "properties": {
        "credentials": {
```
There's (currently) no point in letting users of the JSONSchema (which is used in form generation) pass credentials via a path. I think this could be simplified to treat the credential string passed on the command line as a path, and the one passed via `__init__` as a JSON-serialized credential.
JSONSchema:

```json
"credentials": {
    "type": "string",
    "title": "GCP credentials",
    "description": "GCP credentials in JSON format",
}
```

commandline:
```shell
$ sgr mount bigquery bq -o@- <<EOF
{
    "credentials": "/path/to/my/creds.json",
    "project": "my-project-name",
    "dataset_name": "my_dataset"
}
EOF
```

...
```python
credentials = Credentials({})
# "credentials" is popped exactly once here, as the path to the JSON file
with open(params.pop("credentials"), "r") as credentials_file:
    credentials_str = credentials_file.read()
credentials["credentials"] = credentials_str
```
Add another remote data source plugin, this time for GCP's BigQuery.
CU-26udw0h