pokutuna.com

pokutuna

Web Developer / Software Engineer
Hyogo, Japan

Contributions

  • hashicorp/terraform-provider-google

    Validation pattern is narrower than actually used/generated for `google_monitoring_custom_service` and SLO.

    **Terraform Version**

    Terraform v1.5.7 on darwin_arm64 + provider registry.terraform.io/hashicorp/google v4.82.0

    **Affected Resource(s)**

    - google_monitoring_custom_service
    - google_monitoring_slo

    **Terraform Configuration Files**

    ```hcl
    terraform {
      required_version = "1.5.7"
      required_providers {
        google = {
          source  = "hashicorp/google"
          version = "4.82.0"
        }
      }
      backend "local" {
        path = "terraform.tfstate"
      }
    }

    # my Google Cloud project
    provider "google" {
      project = "pokutuna-playground"
    }

    # to be imported
    import {
      to = google_monitoring_custom_service.example
      id = "projects/my-project/services/gs-ReZdgRiuY5DWEldJnSA"
    }

    import {
      to = google_monitoring_slo.example
      id = "projects/my-project/services/gs-ReZdgRiuY5DWEldJnSA/serviceLevelObjectives/c3nU6dECTzSjFSEmMCyRyA"
    }
    ```

    **Debug Output**

    The following gist includes the output of the operations I actually executed in my Google Cloud project:

    ```console
    $ cat main.tf
    $ TF_LOG=DEBUG terraform plan -generate-config-out=imported.tf
    $ cat imported.tf
    $ TF_LOG=DEBUG terraform plan
    ```

    https://gist.github.com/pokutuna/0f84c03e0eb18ac26a91b031afa1a419

    **Panic Output**

    N/A

    **Expected Behavior**

    The actually existing `service_id` and `slo_id` should not trigger validation errors.

    **Actual Behavior**

    When running `plan` with `import`, or `apply` after import, the following validation errors are printed (other errors appear as well, but they are out of scope for this issue):

    ```
    │ Error: "service_id" ("gs-ReZdgRiuY5DWEldJnSA") doesn't match regexp "^[a-z0-9\\-]+$"
    │
    │   with google_monitoring_custom_service.example,
    │   on imported.tf line 8, in resource "google_monitoring_custom_service" "example":
    │    8:   service_id = "gs-ReZdgRiuY5DWEldJnSA"

    │ Error: "slo_id" ("c3nU6dECTzSjFSEmMCyRyA") doesn't match regexp "^[a-z0-9\\-]+$"
    │
    │   with google_monitoring_slo.example,
    │   on imported.tf line 25, in resource "google_monitoring_slo" "example":
    │   25:   slo_id = "c3nU6dECTzSjFSEmMCyRyA"
    ```

    These `service_id` and `slo_id` values were automatically generated when the resources were created from the console; the IDs in the example above are such auto-generated IDs. In other words, the provider validates with a pattern narrower than what Cloud Monitoring actually generates.

    **Steps to Reproduce**

    1. Define a custom service and SLO on the Cloud Monitoring console.
    2. Describe the defined resources in import blocks.
    3. Execute the steps included in the log:

    ```console
    $ cat main.tf
    $ TF_LOG=DEBUG terraform plan -generate-config-out=imported.tf
    $ cat imported.tf
    $ TF_LOG=DEBUG terraform plan
    ```

    **Important Factoids**

    There's nothing special about my account; I'm using the Application Default Credentials created with `gcloud auth application-default login`.

    I suspect that the pattern `^[a-z0-9\-]+$` comes from the following API documentation:

    - Method: services.create
    - Method: services.serviceLevelObjectives.create

    I believe the pattern in these documents is also incorrect (I've provided feedback on it). The pattern actually enforced by Cloud Monitoring can be obtained from the API error:

    ```console
    $ curl -X POST -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" "https://monitoring.googleapis.com/v3/projects/$GOOGLE_PROJECT/services?serviceId=%F0%9F%A5%BA"
    {
      "error": {
        "code": 400,
        "message": "Resource names must match pattern `^[a-zA-Z0-9-_:.]+$`. Got value \"🄺\"",
        "status": "INVALID_ARGUMENT"
      }
    }
    ```

    Therefore, `^[a-zA-Z0-9-_:.]+$` is the pattern that represents the actually possible IDs. We can call these APIs to create a custom service and SLO with the ID `prefix:lower_UPPER-01.23`:

    ```console
    $ export GOOGLE_PROJECT=pokutuna-playground
    $ export ACCEPTABLE_ID=prefix:lower_UPPER-01.23

    $ curl -X POST -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" -H 'Content-Type: application/json' "https://monitoring.googleapis.com/v3/projects/$GOOGLE_PROJECT/services?serviceId=$ACCEPTABLE_ID" -d '{"custom":{}}'
    {
      "name": "projects/744005832574/services/prefix:lower_UPPER-01.23",
      "custom": {},
      "telemetry": {}
    }

    $ curl -X POST -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" -H 'Content-Type: application/json' "https://monitoring.googleapis.com/v3/projects/$GOOGLE_PROJECT/services/$ACCEPTABLE_ID/serviceLevelObjectives?serviceLevelObjectiveId=$ACCEPTABLE_ID" -d @- <<JSON
    {
      "serviceLevelIndicator": {
        "requestBased": {
          "distributionCut": {
            "distributionFilter": "metric.type=\"appengine.googleapis.com/http/server/response_latencies\" resource.type=\"gae_app\"",
            "range": { "min": 0, "max": 1000 }
          }
        }
      },
      "goal": 0.001,
      "calendarPeriod": "WEEK"
    }
    JSON
    {
      "name": "projects/744005832574/services/prefix:lower_UPPER-01.23/serviceLevelObjectives/prefix:lower_UPPER-01.23",
      "serviceLevelIndicator": {
        "requestBased": {
          "distributionCut": {
            "distributionFilter": "metric.type=\"appengine.googleapis.com/http/server/response_latencies\" resource.type=\"gae_app\"",
            "range": { "max": 1000 }
          }
        }
      },
      "goal": 0.001,
      "calendarPeriod": "WEEK"
    }
    ```

    **References**

    - #11696: This PR addresses the issue of importing a service in google_monitoring_slo, but it has been left open for a year.

    It seems that google_monitoring_service and google_monitoring_custom_service use the same API. However, google_monitoring_service does not have service_id validation:

    - mmv1/products/monitoring/Service.yaml (google_monitoring_custom_service)
    - mmv1/products/monitoring/GenericService.yaml (google_monitoring_service)
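
    The gap between the two patterns can be checked directly. A minimal sketch, using the regexes and the IDs quoted in the report above:

    ```typescript
    // Pattern currently enforced by the provider (too narrow).
    const providerPattern = /^[a-z0-9\-]+$/;
    // Pattern the Monitoring API actually enforces, per its own error message.
    const apiPattern = /^[a-zA-Z0-9-_:.]+$/;

    // IDs auto-generated by the Cloud Monitoring console (from the report above),
    // plus an ID exercising the wider character class the API accepts.
    const ids = [
      "gs-ReZdgRiuY5DWEldJnSA",
      "c3nU6dECTzSjFSEmMCyRyA",
      "prefix:lower_UPPER-01.23",
    ];

    for (const id of ids) {
      // The provider rejects all of these, yet the API accepts every one.
      console.log(`${id}: provider=${providerPattern.test(id)} api=${apiPattern.test(id)}`);
    }
    ```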

    pokutuna opened on 2023-09-13
  • googleapis/nodejs-datastore

    Unable to connect to emulator running on docker compose with client 7.5.1

    I have development environments and CI that run the Datastore emulator and an application connecting to it on Docker Compose. Those connections are resolved by service name on the overlay network within it, such as datastore:8081. Since client version 7.5.1, these cannot connect to the emulator.

    This was triggered by #1101: when `baseUrl_` does not include a part that indicates the local network, `grpc.credentials.createInsecure` is no longer used. That change was made for custom endpoints, but the endpoint is also given by the `DATASTORE_EMULATOR_HOST` environment variable. As a result, authentication is not skipped when the emulator host is something like datastore:8081.

    To support custom endpoints requiring authentication, how about using another environment variable (like DATASTORE_CUSTOM_HOST) instead of reusing the existing one? I think users who set DATASTORE_EMULATOR_HOST expect development use and do not expect authentication to be required.

    **Workaround**

    By setting network_mode: "host" and exposing the emulator port to join the host network, we can include localhost in the endpoint URL. However, this occupies a port on the host and may need adjustment.

    **Environment details**

    - OS: macOS (host) & Linux (container)
    - Node.js version: v20.2.0
    - npm version: 9.6.6
    - @google-cloud/datastore version: 7.5.1

    **Steps to reproduce**

    1. Using Docker Compose, set up the Datastore emulator and an application container that uses the client library.
    2. You will encounter the error "Could not load the default credentials." during connection.

    This is the reproduction code using two different client versions, including docker-compose.yaml: https://gist.github.com/pokutuna/314248d183f6fbfe60154f63751d3655
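
    The effect of the change can be illustrated with a simplified sketch of such a host check. The `looksLikeLocalEndpoint` helper below is a hypothetical stand-in for the heuristic described above, not the library's actual code:

    ```typescript
    // Hypothetical stand-in for the "is this a local endpoint?" heuristic the
    // client applies since 7.5.1 (NOT the library's actual implementation).
    function looksLikeLocalEndpoint(host: string): boolean {
      return /localhost|127\.0\.0\.1/.test(host);
    }

    // With host networking (the workaround), the emulator is reachable via
    // localhost and the client can fall back to an insecure channel:
    console.log(looksLikeLocalEndpoint("localhost:8081")); // true

    // On a Compose overlay network the emulator is addressed by service name,
    // the check fails, and the client demands real credentials instead:
    console.log(looksLikeLocalEndpoint("datastore:8081")); // false
    ```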

    pokutuna opened on 2023-06-03
  • GoogleCloudPlatform/magic-modules

    Fix id validation for custom service and SLO to match what's actually usable

    Fixes hashicorp/terraform-provider-google#15825

    The validation for the following id fields was too strict, so it has been adjusted to match the formats actually in use (see the issue for specific examples):

    - service_id on google_monitoring_custom_service
    - slo_id on google_monitoring_slo

    I checked this documentation: Types of breaking changes | Magic Modules. This change relaxes the validation so that the new pattern contains the current one, so I believe it is not a breaking change.

    Do I need to add tests for the validation defined in the regex? If so, I'll add them; it would be helpful if you could point me to existing test cases for this kind of change.

    **Release Note Template for Downstream PRs (will be copied)**

    ```
    monitoring: fixed validation of `service_id` on `google_monitoring_custom_service` and `slo_id` on `google_monitoring_slo`
    ```
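
    One way to see why this is not breaking: every character the old class `[a-z0-9\-]` accepts is also in the new class `[a-zA-Z0-9-_:.]`, so any previously valid ID remains valid. A quick sanity check:

    ```typescript
    const relaxedPattern = /^[a-zA-Z0-9-_:.]+$/;

    // All characters permitted by the old, stricter pattern ^[a-z0-9\-]+$ ...
    const oldAlphabet = "abcdefghijklmnopqrstuvwxyz0123456789-";

    // ... are accepted by the relaxed pattern as well, so the old set of valid
    // IDs is a subset of the new one and no existing config can start failing.
    console.log([...oldAlphabet].every((c) => relaxedPattern.test(c))); // true
    ```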

    pokutuna opened on 2023-09-13
  • GoogleCloudPlatform/magic-modules

    Fix broken link in the doc on how to create a PR

    It's a tiny fix for the documentation. I found the following link broken and fixed it:

    https://googlecloudplatform.github.io/magic-modules/contribute/create-pr/#:~:text=you%20get%20stuck.-,Self%2Dreview%20your%20PR,-or%20ask%20someone

    The link URL has a trailing `"`.

    pokutuna opened on 2023-09-13
  • dataform-co/dataform

    Enable formatting for triple-quoted strings

    resolves #1444

    In #1489, I worked on improving the formatter but didn't include support for multiline strings using triple quotes, as it required somewhat larger changes to the sqlx lexer. In this pull request, I have added new states and tokens to the sqlx lexer to handle triple-quoted multiline strings.

    While the changes might look complex, what they do is simple:

    - Define a minimal subset of functionality based on how existing quotes are handled. With triple-quoted strings, there's no need to worry about escaping quotes.
    - During formatting, if it's a multi-line string, don't format within it; just split it by lines, keeping internal line breaks and whitespace intact.
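
    The formatting rule described above can be sketched roughly as follows; this is an illustrative simplification, not the actual sqlx lexer code:

    ```typescript
    // Illustrative sketch (not the actual dataform implementation): a
    // triple-quoted string token is never reformatted internally; it is only
    // split on its existing line breaks so that each line can be emitted
    // verbatim, with indentation and blank lines intact.
    function emitTripleQuotedLines(token: string): string[] {
      return token.split("\n");
    }

    const token = '"""line one\n  indented line\n\nline after blank"""';
    console.log(emitTripleQuotedLines(token));
    ```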

    pokutuna opened on 2023-06-21
  • dataform-co/dataform

    Fix image link in readme.md

    Fix this

    pokutuna opened on 2023-06-21
  • dataform-co/dataform

    Updating sql-formatter will resolve several formatting issues

    Would you like to update sql-formatter? Currently, the dataform CLI depends on version ^2.3.3, which is about 4 years old; the latest version is 12.2.1.

    By updating, we can resolve the following issues related to formatting:

    - #1070: a fix has been merged in sql-formatter-org/sql-formatter#603 (I did it).
    - #1077: the current version of sql-formatter can handle this.
    - #1444: this also requires fixing the lexer of this repo, so updating alone doesn't resolve it.

    **Handling dialects in sql-formatter**

    The current version of sql-formatter has a feature for "dialects" specific to different database products, and it now covers all SQL dialects of the warehouses supported by Dataform: https://github.com/sql-formatter-org/sql-formatter/blob/master/docs/language.md

    By passing a language value corresponding to the warehouse setting in Dataform to the formatter, we can expect more appropriate formatting. If we use the neutral default sql dialect (meaning StandardSQL), formatting for UNNEST and QUALIFY will not work well, so I think the formatter needs to refer to the warehouse value in dataform.json.

    I have also confirmed that the formatting result changes in some cases. The files under examples/formatter/ seem to assume BigQuery, so if we format them as BigQuery:

    - The space after IF, IFNULL disappears.
    - A space is added after UNNEST.
    - A line break is added after ALTER TABLE {name}.

    While updating sql-formatter changes some formatting results, QUALIFY is a frequently used clause, and named arguments are broken by the current formatter; the update makes the format command practical. Could you take a look at the pull request I'm going to send here? You may use only some of the commits.

    pokutuna opened on 2023-05-28
  • dataform-co/dataform

    Update sql-formatter & specify SQL language according to warehouse

    resolves #1489

    pokutuna opened on 2023-05-28
  • dataform-co/dataform

    Separate sqlx build target into sqlx and format

    This change comes from the conversation here: #1490 (comment)

    //sqlx includes two functionalities: a lexer for sqlx files and a formatter. In #1490, I worked on improving the formatter but encountered a circular dependency:

    - //core -> //sqlx (using lexer)
    - //sqlx -> //core (using adapters)

    As @BenBirt suggested, this PR separates the sqlx target into two parts to resolve the circular dependency. With this PR, the dependencies are as follows:

    - //sqlx:format -> //sqlx:sqlx
    - //sqlx:format -> //core:core
    - //core:core -> //sqlx:sqlx

    pokutuna opened on 2023-06-06
  • sql-formatter-org/sql-formatter

    [FORMATTING] Named arguments in BigQuery

    **Input data**

    Which SQL and options did you provide as input?

    ```sql
    SELECT MAKE_INTERVAL(1, day=>2, minute => 3)
    ```

    Expected Output:

    ```sql
    SELECT MAKE_INTERVAL(1, day => 2, minute => 3)
    ```

    Actual Output:

    ```sql
    SELECT MAKE_INTERVAL(1, day = > 2, minute = > 3)
    ```

    **Usage**

    How are you calling / using the library?

    ```console
    $ npx -p sql-formatter@12.2.0 sql-formatter -l bigquery file.sql
    ```

    What SQL language(s) does this apply to? BigQuery

    Which SQL Formatter version are you using? 12.2.0 (latest)

    BigQuery supports named arguments in several functions: https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-reference#named_arguments They are sometimes used in the JSON functions and Geography functions.

    Looking at the PostgreSQL implementation, it is not difficult to simply add `=>` to the operators, so I'll try to send a PR for it. If the operator should only be valid inside function calls, feel free to reject that approach.

    pokutuna opened on 2023-05-20