@morningreis
morningreis / proton_opn_wg.md
Created December 16, 2022 21:26
OPNsense + ProtonVPN + Wireguard Configuration Guide

OPNsense + ProtonVPN + Wireguard

Published: 16 December 2022

Reference: https://docs.opnsense.org/manual/how-tos/wireguard-selective-routing.html

Goal: Set up one or more Wireguard connections from ProtonVPN on OPNsense, with policy based routing, and optional Killswitch.

I'm writing this guide first as a reference for my future self for when I inevitably forget how to do this, but also to help others out. There are not many guides on this specific configuration, particularly not with multiple concurrent connections, and some of the steps are not at all obvious. I did begin with the guide in the official OPNsense documentation, but even that was missing information needed to make ProtonVPN work. If you are a pfSense user, it is very similar to OPNsense and you should be able to follow along with some success, but I have not tested it myself.
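
For orientation, a WireGuard configuration downloaded from the ProtonVPN dashboard typically looks like the sketch below; the keys and endpoint are placeholders, and on OPNsense the values are entered into the WireGuard Instance and Peer screens rather than used as a .conf file directly.

[Interface]
# Placeholders: PrivateKey and Address come from the config file ProtonVPN generates for you.
PrivateKey = <client-private-key>
Address = 10.2.0.2/32
DNS = 10.2.0.1

[Peer]
PublicKey = <proton-server-public-key>
AllowedIPs = 0.0.0.0/0
Endpoint = <proton-server-ip>:51820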

@DzeryCZ
DzeryCZ / ReadingHelmResources.md
Last active May 8, 2024 19:31
Decoding Helm3 resources in secrets

Helm 3 stores a description of its releases in Kubernetes secrets. You can find them via

$ kubectl get secrets
NAME                                                TYPE                                  DATA   AGE
sh.helm.release.v1.wordpress.v1                     helm.sh/release.v1                    1      1h

If you want to get more info about the secret, you can describe it:

$ kubectl describe secret sh.helm.release.v1.wordpress.v1
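
To actually read a release, the usual approach is the sketch below: the release field is base64-encoded by Kubernetes, and the payload inside is itself base64-encoded gzipped JSON, hence the double decode (jq is only used here for pretty-printing and is optional).

$ kubectl get secret sh.helm.release.v1.wordpress.v1 -o jsonpath='{.data.release}' \
    | base64 -d | base64 -d | gunzip | jq .
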
import SwiftUI
import Combine

protocol RandomNumberServiceProtocol {
    func makeRandomInt() -> Int
}

final class RandomNumberService: RandomNumberServiceProtocol {
    // MARK: - RandomNumberServiceProtocol
    func makeRandomInt() -> Int {
        // Body cut off in the preview; returning a plain random Int is an assumed implementation.
        return Int.random(in: 0...100)
    }
}
@AlessandraSozzi
AlessandraSozzi / index.html
Last active May 8, 2024 19:26
D3 drag and drop: manually reorder rows and columns of a matrix
<!DOCTYPE html>
<meta charset="utf-8">
<head>
  <style>
    .background {
      fill: #fff;
    }
    rect {
      stroke: #fff;
@brentjanderson
brentjanderson / Howto.md
Created February 20, 2018 17:55
SSH Tunneling with Firefox

Sometimes it is useful to route traffic through a different machine for testing or development. At work, we have a VPN to a remote facility that we haven't bothered to fix for routing, so the only way to access a certain machine over that VPN is via an SSH tunnel to a machine that is reachable over the VPN. Other times, I have used this technique to test internet-facing requests against sites I am developing. It is pretty easy, and if you don't use Firefox regularly, you can treat Firefox as your "proxy" browser while other browsers keep a normal configuration (although you can also configure the entire system to use the proxy; other articles discuss that option).

  1. Open a terminal
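
In that terminal, the core command is a dynamic (SOCKS) forward like the sketch below; the port and jump-host name are placeholders.

# Open a SOCKS proxy on localhost:8123, tunnelled through the reachable machine.
# -D opens the dynamic forward, -C compresses traffic, -N skips running a remote command.
$ ssh -D 8123 -C -N user@jump-host

Then, in Firefox's Settings -> Network Settings, choose "Manual proxy configuration", set the SOCKS Host to localhost with port 8123 (SOCKS v5), and optionally enable "Proxy DNS when using SOCKS v5" so DNS lookups also go through the tunnel.
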
@jjb
jjb / file.md
Last active May 8, 2024 19:22
Using Jemalloc 5 with Ruby.md

For years, people have been using jemalloc with Ruby. There were various benchmarks and discussions. Legend had it that jemalloc 5 didn't work as well as jemalloc 3.

Then, one day, hope appeared on the horizon. @wjordan offered a config for Jemalloc 5.

Ubuntu/Debian

FROM ruby:3.1.2-bullseye
RUN apt-get update ; \
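
As a rough sketch of the general pattern (not @wjordan's exact config): Debian bullseye already packages jemalloc 5 as libjemalloc2, and it can be preloaded into the Ruby process. The library path below is the Debian/Ubuntu amd64 location and may differ elsewhere.

# Install the distro's jemalloc 5 and preload it for Ruby.
$ apt-get update && apt-get install -y libjemalloc2
$ export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

# Quick check that jemalloc is actually loaded: ask it to print its stats at exit.
$ MALLOC_CONF=stats_print:true ruby -e '' 2>&1 | head
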
@aamiaa
aamiaa / CompleteDiscordQuest.md
Last active May 8, 2024 19:19
Complete Recent Discord Quest

Note

This no longer works in browser!

Note

This no longer works if you're alone in a voice channel! Somebody else has to join you!

How to use this script:

  1. Accept the quest under User Settings -> Gift Inventory
@vinicius-stutz
vinicius-stutz / README.md
Last active May 8, 2024 19:19
Mask for phone numbers with 8 or 9 digits (jquery.mask.js)
@TengdaHan
TengdaHan / ddp_notes.md
Last active May 8, 2024 19:15
Multi-node-training on slurm with PyTorch

What's this?

  • A simple note on how to start multi-node training on the slurm scheduler with PyTorch (a minimal launch sketch is included below).
  • Especially useful when the scheduler is so busy that you cannot get multiple GPUs allocated, or when you need more than 4 GPUs for a single job.
  • Requirement: you have to use PyTorch DistributedDataParallel (DDP) for this purpose.
  • Warning: you might need to refactor your own code.
  • Warning: you might be secretly condemned by your colleagues for using too many GPUs.
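
As a minimal sketch of what such a launch can look like (job name, resource counts, port, and train.py are placeholders, not part of the original note): an sbatch script that starts one task per GPU across two nodes and exports the rendezvous address that PyTorch's env:// initialization expects.

#!/bin/bash
# Hypothetical 2-node, 4-GPUs-per-node job; adjust the numbers for your cluster.
#SBATCH --job-name=ddp-train
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
#SBATCH --cpus-per-task=8

# Tell torch.distributed where rank 0 lives; env:// initialization reads these variables.
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=29500

# srun launches one process per task; inside train.py, rank and world size can be
# taken from SLURM_PROCID and SLURM_NTASKS before calling init_process_group.
srun python train.py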