HashiCorp Vault is a tool for managing our credentials. It supports multiple secrets engines. Secrets can be stored or dynamically generated, and, in the case of encryption, keys can be consumed as a service without exposing the underlying key material.
Enabling access to Vault requires the installation…
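To make the "secrets as a service" idea concrete, here is a minimal sketch of reading a KV v2 secret over Vault's HTTP API. The dev-server address, the `secret` mount name, and the example path are assumptions for illustration; the `/v1/<mount>/data/<path>` URL shape is Vault's documented KV v2 read endpoint.

```python
import json
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"  # assumed local dev-server address


def kv2_read_url(vault_addr: str, mount: str, path: str) -> str:
    """Build the KV v2 read URL: /v1/<mount>/data/<path>."""
    return f"{vault_addr}/v1/{mount}/data/{path}"


def read_secret(token: str, mount: str, path: str) -> dict:
    """Fetch a KV v2 secret over Vault's HTTP API."""
    req = urllib.request.Request(
        kv2_read_url(VAULT_ADDR, mount, path),
        headers={"X-Vault-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        # KV v2 nests the key/value pairs under data.data in the response
        return json.load(resp)["data"]["data"]
```

In practice you would use an official client such as `hvac` instead of raw HTTP, but the endpoint shape is the same.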
Let's take a data frame with multiple columns and rows, just like the table above.
Now let's get the vehicles owned by each person into a new data frame. For that we basically need to convert this data frame into a new one: one column will hold the name, and another will hold that person's vehicles as an array.
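The conversion can be sketched with pandas `groupby` plus `apply(list)`; the sample names and vehicles below are made up, standing in for the table above.

```python
import pandas as pd

# Sample data standing in for the table above (names/vehicles are made up)
df = pd.DataFrame({
    "Name": ["John", "Tony", "John", "Tony", "John"],
    "Vehicle": ["Car", "Bike", "Bike", "Car", "Truck"],
})

# Collect each person's vehicles into a list, one row per person
owned = (
    df.groupby("Name")["Vehicle"]
      .apply(list)
      .reset_index(name="Vehicles")
)
print(owned)
```

`groupby` sorts on the key by default and keeps the original row order inside each group, so John's row ends up with `["Car", "Bike", "Truck"]`.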
We had an issue in our AWS ECS cluster while running the CloudWatch agent. In recent versions of the agent, a new iptables rule gets added that prevents the agent from communicating with CloudWatch. To make things work in the ECS cluster, we have to remove this rule.
But the CloudWatch agent starts only after the init script on the EC2 machine has finished (boot waits on its exit code), so a plain sleep in the user data script will not work.
Instead, we ran the fix as a separate job scheduled with the `at now` command. Put the below command in a sh file.
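The shape of that fix can be sketched as follows. The iptables rule spec here is a hypothetical placeholder (the real rule the agent adds must be read from `iptables -S` on the instance), and the 120-second wait is an assumption; only the pattern of writing a script and queueing it via `at now` comes from the post.

```python
import shlex
import subprocess

# Hypothetical rule spec -- read the actual rule the agent adds from
# `iptables -S` on the instance; this value is only an illustration.
RULE_ARGS = "OUTPUT -p tcp --dport 25 -j DROP"


def build_fix_script(rule_args: str) -> str:
    """Shell script that waits for the agent to start, then deletes the rule."""
    return (
        "#!/bin/sh\n"
        "sleep 120\n"                 # assumed delay for the agent to come up
        f"iptables -D {rule_args}\n"  # -D deletes the matching rule
    )


def schedule_with_at(script_path: str) -> None:
    """Queue the script via `at now` so the user data script can exit at once."""
    subprocess.run(
        f"echo sh {shlex.quote(script_path)} | at now",
        shell=True, check=True,
    )
```

Because `at` runs the job in its own detached context, the user data script's exit code is returned immediately and boot is not blocked.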
When we try to create an ECS cluster through Terraform with a capacity provider enabled, in a new account where no ordinary cluster has been created before, we get the error mentioned below.
Error: error creating capacity provider: ClientException: ECS Service Linked Role does not exist. Please create a Service linked role for ECS and try again.
To fix this, we can either create the service-linked role ourselves or create a cluster without a capacity provider and delete it afterwards. Both methods create the role that the capacity provider uses later.
We had a requirement to make our Application Load Balancer highly available and also to remove the dependency on our DNS provider. Let me explain a little further. Our DNS provider is not AWS Route 53, so whenever we run into an incident or DR (Disaster Recovery), we have to reconfigure…
We have a RabbitMQ cluster with 3 nodes attached to it. We usually monitor the cluster with Prometheus and scrape metrics through rabbitmq_exporter. We started moving our orchestration stack from Docker Swarm to the ECS platform. …
At times we need to capture a network trace to find the cause of slowness or timeout issues. If this happens to be your production server, it's even more difficult. We ran into the same situation, where we wanted to know how much time each connection or packet transfer was taking.
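On a production box, a bounded, filtered tcpdump capture is a common way to do this; the sketch below wraps one in Python. The interface, host, port, and file sizes are example values, not from the post, and the capture itself needs root.

```python
import subprocess

def build_tcpdump_cmd(iface: str, pcap_path: str, bpf_filter: str) -> list[str]:
    """tcpdump writing rotating capture files so a long capture cannot
    fill the disk: -C is the file size in MB, -W caps the file count."""
    return ["tcpdump", "-i", iface, "-w", pcap_path,
            "-C", "100", "-W", "5", *bpf_filter.split()]

# e.g. capture only traffic to the slow upstream (host/port are examples)
cmd = build_tcpdump_cmd("eth0", "/tmp/trace.pcap", "host 10.0.0.5 and port 443")
# subprocess.run(cmd, check=True)  # needs root; open the pcap in Wireshark
```

Per-connection and per-packet timings can then be read from the pcap offline (for example with Wireshark's TCP stream statistics) instead of on the production host.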
We have a storage account in a subscription that is encrypted with Key Vault. We migrated the subscription from one tenant to another. We then faced an issue while accessing the storage account data, as the Key Vault was not migrated along with the subscription. So we migrated the Key Vault from one tenant to…
We can find out the PV driver version for all the Windows instances running in your inventory. For that, we first register the instances with Systems Manager. After that, run the AWS-RunPowerShellScript document below.
We found one scenario where deleting a premium Service Bus namespace hits a bug on Azure's end. It keeps giving the error below.
DR config cannot be deleted because replication is in progress. Please failover or break pairing before attempting to delete the DR Config. For more information visit https://aka.ms/eventhubsarmexceptions.