Note
This no longer works in the browser!

Note
This no longer works if you're alone in the VC! Somebody else has to join you!
How to use this script:
- Accept the quest under User Settings -> Gift Inventory
#!/usr/bin/env bash
# Abort sign off on any error
set -e
# Start the benchmark timer
SECONDS=0
# Repository introspection
OWNER=$(gh repo view --json owner --jq .owner.login)
Install VMware Workstation Pro 17 (read it right: PRO!)
These keys might also work with VMware Fusion 13 Pro; just tested it.
Sub to me on youtube pls - PurpleVibe32
If you want more keys, call my bot on Telegram: @purector_bot (THE BOT WON'T REPLY ANYMORE). Or: https://cdn.discordapp.com/attachments/1040615179894935645/1074016373228978277/keys.zip - the password for the zip is 102me.
---
This gist can be taken down at any time.
PLEASE DON'T COPY THIS. IF YOU FORK IT, DON'T EDIT IT.
*If you have a problem, comment and people will try to help you!
*No virus
ffmpeg -re -listen 1 -i rtmp://127.0.0.1:1935 -y -filter:v fps=3 -an -sn -fps_mode drop -f image2pipe -
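The command above decodes an incoming RTMP stream at 3 fps and writes the frames to stdout as a concatenated JPEG byte stream (`image2pipe` with `-` as the output). A minimal sketch of a consumer that splits that stream into individual frames on the JPEG SOI/EOI markers; the function name `split_jpeg_frames` is mine, not part of the gist, and a naive marker scan like this can mis-split if `FFD9` occurs inside entropy-coded data, so production code should parse segment lengths instead:

```python
def split_jpeg_frames(buf: bytes) -> list[bytes]:
    """Split a concatenated JPEG stream into frames on SOI (FFD8) / EOI (FFD9)."""
    frames = []
    start = 0
    while True:
        soi = buf.find(b"\xff\xd8", start)  # start-of-image marker
        if soi == -1:
            break
        eoi = buf.find(b"\xff\xd9", soi + 2)  # end-of-image marker
        if eoi == -1:
            break  # incomplete trailing frame; wait for more data
        frames.append(buf[soi:eoi + 2])
        start = eoi + 2
    return frames
```

Piping the ffmpeg command into a script that reads `sys.stdin.buffer` and feeds the accumulated bytes through this function would yield one `bytes` object per video frame.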
#!/bin/bash
docker ps --format '{{.Names}}' | while read -r container; do
  ip=$(docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$container")
  printf "%-40.40s\t%s\n" "$container" "$ip"
done
# Git
.git
.gitignore
.gitattributes

# CI
.codeclimate.yml
.travis.yml
.taskcluster.yml
# rawhid_code.py -- copy to CIRCUITPY as "code.py"
# don't forget to install rawhid_boot.py as "boot.py" and press reset
# works with report IDs, up to a 63-byte report count
# test with hidapitester like:
# ./hidapitester --usagePage 0xff00 --usage 1 --open -l 64 --send-output 2,3,4,5 --timeout 1000 --read-input 1
# adapted from code presented here:
# https://github.com/libusb/hidapi/issues/478
import time
import usb_hid
This is a full account of the steps I ran to get llama.cpp running on the Nvidia Jetson Nano 2GB. It consolidates multiple fixes and tutorials, whose contributions are referenced at the bottom of this README.

At a high level, the procedure to install llama.cpp on a Jetson Nano consists of 3 steps:
1. Compile the gcc 8.5 compiler from source.

"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np

# data I/O
data = open('input.txt', 'r').read()  # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
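The excerpt cuts off here; in the full min-char-rnn script the very next step builds char-to-index lookup tables so characters can be one-hot encoded. A self-contained sketch of that mapping (the toy `data` string and the `sorted` call are mine, not part of the gist, which keeps `set` ordering as-is):

```python
# Toy stand-in for the input.txt corpus read above.
data = "hello"
chars = sorted(set(data))  # sorted only to make the ordering deterministic
# Bidirectional lookups between characters and integer ids,
# as used throughout the full min-char-rnn script.
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}
```

With these tables, each input character maps to an integer index into the one-hot vectors the RNN consumes, and sampled indices map back to characters for output.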