I have a Cloud Function (Python) that is triggered by HTTP from a web client; it has to calculate something and respond FAST. I would like to save the HTTP request parameters into a database (for analytics). If I just initiate a write to my PostgreSQL database, the function will have to wait for it and will be slower. Using Pub/Sub,
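A minimal sketch of that idea, assuming a 1st-gen HTTP Cloud Function and a hypothetical project/topic (my-project, request-analytics): the function publishes the request parameters to Pub/Sub and responds right away, while a separate subscriber (not shown) writes them to PostgreSQL.

```python
import json
from google.cloud import pubsub_v1

# Hypothetical project and topic names; a separate subscriber would
# persist the published parameters into PostgreSQL.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "request-analytics")

def handle_request(request):
    """HTTP Cloud Function: compute the answer, publish params as a side effect."""
    params = request.args.to_dict()

    # publish() returns a future; result() waits only until Pub/Sub confirms
    # the publish, which is usually much cheaper than a synchronous SQL INSERT.
    future = publisher.publish(topic_path, json.dumps(params).encode("utf-8"))
    future.result()

    answer = do_fast_calculation(params)  # hypothetical calculation step
    return json.dumps({"answer": answer}), 200
```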
Tag: google-cloud-pubsub
Google PubSub – Ack a message using ack_id
I have an architecture made of: a Pub/Sub topic ‘A’; a subscription ‘B’ on topic ‘A’ that pushes messages to an endpoint ‘X’; and a Cloud Function ‘C’ (Python runtime) triggered by the endpoint ‘X’. Every time a new message is published on topic ‘A’, subscription ‘B’ pushes it to endpoint ‘X’, which triggers Cloud Function ‘C’. The problem I’m
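With a push subscription like ‘B’, the function normally acknowledges a message implicitly by returning a 2xx status from the push endpoint; acknowledging explicitly by ack_id is part of the pull API. A minimal sketch of that explicit path, assuming a hypothetical pull subscription name and project:

```python
from google.cloud import pubsub_v1

# Hypothetical project and pull-subscription names; ack_ids are only
# available from a pull response, not from a push delivery.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "B-pull")

with subscriber:
    # Pull up to 10 messages synchronously.
    response = subscriber.pull(
        request={"subscription": subscription_path, "max_messages": 10}
    )

    ack_ids = [msg.ack_id for msg in response.received_messages]
    if ack_ids:
        # Acknowledge explicitly by ack_id so the messages are not redelivered.
        subscriber.acknowledge(
            request={"subscription": subscription_path, "ack_ids": ack_ids}
        )
```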
BigQuery: how to change the mode of columns?
I have a Dataflow pipeline that fetches data from Pub/Sub, prepares it for insertion into BigQuery, and then writes it into the database. It works fine: it can generate the schema automatically and recognise which data type to use. However, the data we are using with it can vary vastly in format. Ex:
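One way to control column modes, rather than relying on auto-detection, is to pass an explicit schema to WriteToBigQuery. A sketch under assumed names (my-project, an events topic, an analytics.events table) and assuming JSON payloads on Pub/Sub:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_message(msg_bytes):
    # Hypothetical parser: the Pub/Sub payload is assumed to be a JSON object
    # whose keys match the column names declared in the schema below.
    return json.loads(msg_bytes.decode("utf-8"))


# Declaring the schema explicitly lets you choose each column's mode
# (NULLABLE, REQUIRED or REPEATED) instead of relying on auto-detection.
table_schema = {
    "fields": [
        {"name": "event_id", "type": "STRING", "mode": "REQUIRED"},
        {"name": "payload", "type": "STRING", "mode": "NULLABLE"},
        {"name": "tags", "type": "STRING", "mode": "REPEATED"},
    ]
}

options = PipelineOptions(streaming=True)  # Pub/Sub reads need a streaming pipeline

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
        | "Parse" >> beam.Map(parse_message)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",
            schema=table_schema,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```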