
Tag: kubernetes

EKS/AKS cluster name convention

I am writing a script that receives a Kubernetes context name as input and outputs the different elements of the cluster. It will support only the three main providers (GKE, EKS, AKS). My question is: what are the different elements of EKS and AKS context names? Answer You need to differentiate between the correct name of the
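A minimal sketch of such a script, assuming the default context names each CLI writes to kubeconfig (gcloud uses gke_&lt;project&gt;_&lt;location&gt;_&lt;cluster&gt;, aws eks update-kubeconfig uses the cluster ARN, az aks get-credentials uses the bare cluster name); any --alias or --context override at credential time would break these heuristics:

```python
# Sketch: split a kubeconfig context name into provider-specific parts.
# Assumes the default context names produced by gcloud, aws and az.

def parse_context(ctx: str) -> dict:
    if ctx.startswith("gke_"):
        # gcloud: gke_<project>_<location>_<cluster>
        _, project, location, cluster = ctx.split("_", 3)
        return {"provider": "gke", "project": project,
                "location": location, "cluster": cluster}
    if ctx.startswith("arn:aws:eks:"):
        # aws eks update-kubeconfig: arn:aws:eks:<region>:<account>:cluster/<name>
        _, _, _, region, account, resource = ctx.split(":", 5)
        return {"provider": "eks", "region": region, "account": account,
                "cluster": resource.split("/", 1)[1]}
    # az aks get-credentials: the context is simply the cluster name
    return {"provider": "aks", "cluster": ctx}


if __name__ == "__main__":
    print(parse_context("gke_my-project_europe-west1-b_demo"))
    print(parse_context("arn:aws:eks:eu-west-1:123456789012:cluster/demo"))
    print(parse_context("demo-aks"))
```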

Airflow 2.2.2 remote worker logging getting 403 Forbidden

I have a setup where Airflow is running in Kubernetes (EKS) and a remote worker is running in docker-compose on a VM behind a firewall in a different location. Problem: the Airflow web server in EKS gets a 403 Forbidden error when trying to fetch logs from the remote worker. Build versions: Airflow 2.2.2, OS Linux Ubuntu 20.04 LTS, Kubernetes 1.22
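A common cause of this 403 is that the webserver signs its log-fetch requests with [webserver] secret_key, so the EKS webserver and the docker-compose worker must share the same key (and reasonably synchronised clocks). A small sketch, assuming a mismatched secret_key is the cause here:

```python
# Sketch: generate one shared secret_key and print the env var to set on BOTH
# the EKS webserver (Helm values/secret) and the remote worker (.env for
# docker-compose). Clock skew between the two hosts can also trigger 403s.
import secrets

shared_key = secrets.token_hex(16)
print(f"AIRFLOW__WEBSERVER__SECRET_KEY={shared_key}")
```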

airflow on kubernetes: How to install extra python packages?

I installed Airflow using the Bitnami repo. To install extra Python packages I mounted an extra volume: I prepared my requirements.txt file and then created a ConfigMap using kubectl create -n airflow configmap requirements --from-file=requirements.txt, after which I upgraded Airflow using helm upgrade…. But in my DAGs file I'm still getting the error "ModuleNotFoundError: No module named 'yfinance'" Answer Posting
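Besides getting the mounted requirements.txt actually installed into the worker image, one hedged alternative is to install the dependency per task with PythonVirtualenvOperator, so the scheduler and workers don't need the package at all. The DAG id and callable below are illustrative, not from the question:

```python
# Sketch: install yfinance in a per-task virtualenv instead of baking it into
# the image or relying on a requirements ConfigMap.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonVirtualenvOperator


def fetch_prices():
    # Imported inside the callable so it resolves in the task's virtualenv,
    # not in the scheduler/worker interpreter.
    import yfinance as yf
    print(yf.Ticker("MSFT").history(period="1d"))


with DAG("yfinance_dag", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    PythonVirtualenvOperator(
        task_id="fetch_prices",
        python_callable=fetch_prices,
        requirements=["yfinance"],
        system_site_packages=True,  # keep access to packages already installed
    )
```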

Do we still use api endpoint health check in Kubernetes

I want to use API endpoint health checks like /healthz, /livez, or /readyz. I recently heard about them and tried to search, but didn't find anything that tells me how to use them. I see that /healthz is deprecated. Can you tell me whether I have to use it in a pod definition file or we have to use it
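Note that /healthz, /livez and /readyz are endpoints exposed by the Kubernetes API server itself (with /healthz deprecated in favour of the other two); they are not something you declare in a pod spec. For your own pod you expose an endpoint in your application and point a livenessProbe/readinessProbe at it. A minimal sketch of such an endpoint, with illustrative paths and port:

```python
# Sketch: a tiny app-level health endpoint that a livenessProbe or
# readinessProbe (httpGet on port 8080) could hit.
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in ("/healthz", "/readyz"):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```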

How to access a content of file which is passed as input artifact to a script template in argo workflows

I am trying to access the content (JSON data) of a file that is passed as an input artifact to a script template. It is failing with the following error: NameError: name 'inputs' is not defined. Did you mean: 'input'? My artifacts are stored in an AWS S3 bucket. I've also tried using environment variables instead of referring to the artifacts directly
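The NameError suggests the script is using inputs.artifacts... as a Python expression; in an Argo script template the artifact is materialised as a file at the path declared under inputs.artifacts[].path, so the script should simply read that file. A sketch, with /tmp/payload.json as an illustrative path:

```python
# Sketch: read the input artifact from the path declared in the template's
# inputs.artifacts[].path — it is not exposed as a Python variable.
import json

with open("/tmp/payload.json") as fh:
    payload = json.load(fh)

print(payload.keys())
```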

How to write data to Delta Lake from Kubernetes

Our organisation runs Databricks on Azure, which is used by data scientists and analysts primarily for notebooks to do ad-hoc analysis and exploration. We also run Kubernetes clusters for ETL workflows that don't require Spark. We would like to use Delta Lake as our storage layer where both Databricks and Kubernetes are able to read and write as first class
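One way to write Delta from a plain Python pod, without a Spark cluster, is the delta-rs bindings (pip install deltalake pandas). A sketch assuming an ADLS Gen2 table; the abfss URI and the storage_options key names ("account_name", "account_key") are assumptions, so check the deltalake docs for your auth mode (service principal or SAS token also work), and Databricks can read the same table as long as the table protocol versions are compatible:

```python
# Sketch: append a small DataFrame to a Delta table on ADLS Gen2 using delta-rs.
import pandas as pd
from deltalake import write_deltalake

df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})

write_deltalake(
    "abfss://lake@myaccount.dfs.core.windows.net/events",  # illustrative URI
    df,
    mode="append",
    storage_options={"account_name": "myaccount", "account_key": "<key>"},
)
```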

Unable to expand cluster by dask

I am very new to Kubernetes and Dask and am trying to implement a kube cluster. I have created a minikube cluster with some services and want to expand it with flexible Dask functionality. I plan to deploy it to gcloud later, so I am trying to initialize a Dask cluster (scheduler and workers on my minikube cluster) from a pod
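A sketch of launching scheduler and worker pods from inside the cluster, assuming the classic dask-kubernetes KubeCluster (newer releases replace it with the operator-based dask_kubernetes.operator.KubeCluster); the image and worker count are illustrative, and the pod this runs in needs RBAC permission to create pods:

```python
# Sketch: spawn Dask worker pods on minikube from a pod and run a toy task.
from dask.distributed import Client
from dask_kubernetes import KubeCluster, make_pod_spec

pod_spec = make_pod_spec(
    image="daskdev/dask:latest",
    memory_limit="1G",
    cpu_limit=1,
)

cluster = KubeCluster(pod_spec)
cluster.scale(2)  # two worker pods

client = Client(cluster)
print(client.submit(sum, range(10)).result())
```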
