import os
# To get the directory of the script/file:
current_dir = os.path.dirname(os.path.realpath(__file__))
# To get one directory up from the current file
parent_dir = os.path.abspath(os.path.join(current_dir, ".."))
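The same two paths can also be derived with pathlib (standard library since Python 3.4); a sketch equivalent to the os.path version above:

```python
from pathlib import Path

# resolve() canonicalizes the path, like os.path.realpath.
current_dir = Path(__file__).resolve().parent
# One directory up from the current file.
parent_dir = current_dir.parent
```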
Mobile Legends (ML)
tcp: 5000-5221,5224-5227,5229-5241,5243-5508,5551-5559,5601-5700,9001,9443
tcp: 10003,30000-30300
udp: 4001-4009,5000-5221,5224-5241,5243-5508,5551-5559,5601-5700
udp: 2702,3702,8001,9000-9010,9992,10003,30190,30000-30300

Free Fire (FF)
tcp: 6006,6674,7006,7889,8001-8012,9006,10000-10012,11000-11019,12006,12008,13006
tcp: 39003,39006,39698,39779,39800
udp: 6006,6008,7008,8008,9008,10000-10013,10100,11000-11019,12008,13008
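A tiny helper (my own sketch, not part of the original lists) to check whether a given port falls inside one of these comma-separated range specs, e.g. before writing firewall or QoS rules:

```python
def parse_ranges(spec: str):
    """Parse a spec like '5000-5221,9001' into (low, high) pairs."""
    ranges = []
    for part in spec.split(","):
        low, _, high = part.partition("-")
        ranges.append((int(low), int(high or low)))
    return ranges

def port_matches(port: int, spec: str) -> bool:
    """True if `port` falls inside any range of `spec`."""
    return any(low <= port <= high for low, high in parse_ranges(spec))

ml_tcp = "5000-5221,5224-5227,5229-5241,5243-5508,5551-5559,5601-5700,9001,9443"
print(port_matches(5100, ml_tcp))  # True  (inside 5000-5221)
print(port_matches(5222, ml_tcp))  # False (5222 sits in a gap between ranges)
```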
#!/usr/bin/env bash
## Author: Abidán Brito
## This script builds GNU Emacs 29.1 with support for native elisp compilation,
## tree-sitter, libjansson (C JSON library), pure GTK and mailutils.

# Exit on error and print out commands before executing them.
set -euxo pipefail

# Let's set the number of jobs to something reasonable; keep 2 cores
# free for other tasks.
JOBS=$(nproc --ignore=2)
Code is clean if it can be understood easily, by everyone on the team. Clean code can be read and enhanced by a developer other than its original author. With understandability comes readability, changeability, extensibility and maintainability.
- Follow standard conventions.
- Keep it simple stupid. Simpler is always better. Reduce complexity as much as possible.
- Boy scout rule. Leave the campground cleaner than you found it.
- Always find root cause. Always look for the root cause of a problem.
This worked on 14/May/23. The instructions will probably require updating in the future.
LLaMA is a text prediction model similar to GPT-2, and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions with this (like Alpaca or Vicuna, I think; those versions are more focused on answering questions).
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is possible to run LLaMA 13B with a 6 GB graphics card now (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.
- Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
People
| :bowtie: | 😄 :smile: | 😆 :laughing: |
|---|---|---|
| 😊 :blush: | 😃 :smiley: | ☺️ :relaxed: |
| 😏 :smirk: | 😍 :heart_eyes: | 😘 :kissing_heart: |
| 😚 :kissing_closed_eyes: | 😳 :flushed: | 😌 :relieved: |
| 😆 :satisfied: | 😁 :grin: | 😉 :wink: |
| 😜 :stuck_out_tongue_winking_eye: | 😝 :stuck_out_tongue_closed_eyes: | 😀 :grinning: |
| 😗 :kissing: | 😙 :kissing_smiling_eyes: | 😛 :stuck_out_tongue: |
There are some cases in which an engineer needs to set up more than one GitHub account on the local environment, for example a personal and a professional account. The goal of this post is to guide you through setting up two accounts, the default one (let's say your personal) and an additional one (probably professional), by configuring them in the SSH config file.
Let's assume you already have your personal account set up and working. Now it's time to set up the additional account:
Follow the Generating a new SSH key document to generate a new SSH key on your local machine for your second account.
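Once the second key exists, ~/.ssh/config might end up looking like this (the github-work alias and key file names are my own placeholders, not prescribed names):

```
# Default (personal) account
Host github.com
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519

# Second (professional) account, reached through the alias below
Host github-work
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_work
  IdentitiesOnly yes
```

Repositories for the second account are then cloned through the alias, e.g. `git clone git@github-work:org/repo.git`.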
apiVersion: v1
kind: Pod
metadata:
  name: dind  # a name is required for a valid manifest; pick any
spec:
  # dnsConfig:
  #   options:
  #     - name: ndots
  #       value: "1"
  containers:
    - name: dind
      image: abdennour/docker:19-dind-bash
# https://docs.microsoft.com/en-us/graph/powershell/installation
# https://docs.microsoft.com/en-us/graph/powershell/get-started
# https://developer.microsoft.com/en-us/graph/graph-explorer
# https://tech.nicolonsky.ch/exploring-the-new-microsoft-graph-powershell-modules/

Connect-MgGraph -Scopes "User.Read.All", "GroupMember.Read.All", "Group.Read.All", "Directory.Read.All"

$GROUP_NAME = 'Workspace'
$group = Get-MgGroup -Filter "DisplayName eq '$GROUP_NAME'"
// TypeScript 3.6.x
// Debounce: delay invoking `cb` until `wait` ms have passed since the last call.
function debounce<T extends Function>(cb: T, wait = 20) {
  let h = 0; // handle of the pending timer (browser setTimeout returns a number)
  let callable = (...args: any) => {
    clearTimeout(h); // cancel the previous pending invocation, if any
    h = setTimeout(() => cb(...args), wait);
  };
  // Cast back to T so the debounced function keeps the original call signature.
  return <T>(<any>callable);
}