The res.config.settings model is transient, so its records are never persisted to the database. Any field you define on this model must therefore be stored somewhere else explicitly. There are a few options.
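A minimal sketch of two common options, assuming a recent Odoo version; the field names and the my_module parameter key are hypothetical, not from any real module:

# Sketch only: "my_module.support_email" and the related-field example
# are illustrative names.
from odoo import fields, models

class ResConfigSettings(models.TransientModel):
    _inherit = 'res.config.settings'

    # Option 1: config_parameter= persists the value as an
    # ir.config_parameter key and reloads it when the form opens.
    support_email = fields.Char(
        string="Support Email",
        config_parameter='my_module.support_email',
    )

    # Option 2: related= delegates storage to a real record (here the
    # current company), so the value lives on res.company.
    support_phone = fields.Char(
        related='company_id.phone', readonly=False,
    )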
// bitset_iter.h v1.1.0
// Copyright 2019, Diego Dagum
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
# Exponential backoff in Ruby
# (make_request and RequestError stand in for your own HTTP call.)
retries = 0
max_retries = 5
begin
  make_request
rescue RequestError => e
  if retries <= max_retries
    retries += 1
    sleep 2 ** retries   # wait 2, 4, 8, ... seconds before retrying
    retry
  else
    raise "Timeout: #{e.message}"
  end
end
#
# MIT License
#
# To use, assign keys to the "focus_most_recent_tab_closer" command, e.g.
# {"keys": ["ctrl+k", "ctrl+w"], "command": "focus_most_recent_tab_closer"},
#
import sublime
import sublime_plugin
import time
Note
To activate Office without a crack, just follow https://github.com/WindowsAddict/IDM-Activation-Script;
you will only need to run
irm https://massgrave.dev/ias | iex
Introduction:
Google's Gemini 1.5, with its context window of 1 million tokens, marks a pivotal transformation in artificial intelligence and demands a complete rethinking of how prompts are built. One approach to navigating and exploiting this vast informational expanse is hypergraph prompting: a method that weaves together the spatial, temporal, relational, and executional dimensions of data into a visual and logical fabric of connections, much like the interconnected spirals of a DNA strand.
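The post does not pin down a concrete data structure, so the sketch below is purely illustrative: it models a prompt as chunks (nodes) joined by labeled hyperedges along the four dimensions named above. Every name in it is hypothetical.

# Illustrative sketch only; HyperEdge, HypergraphPrompt, and the
# dimension labels are hypothetical, not a published API.
from dataclasses import dataclass, field

@dataclass
class HyperEdge:
    """One multi-way relation tying several prompt chunks together."""
    dimension: str   # e.g. "spatial", "temporal", "relational", "executional"
    members: tuple   # ids of the chunks this edge connects
    label: str = ""

@dataclass
class HypergraphPrompt:
    chunks: dict = field(default_factory=dict)  # chunk_id -> text in the window
    edges: list = field(default_factory=list)   # hyperedges over those chunks

    def add_chunk(self, chunk_id, text):
        self.chunks[chunk_id] = text

    def link(self, dimension, *chunk_ids, label=""):
        self.edges.append(HyperEdge(dimension, tuple(chunk_ids), label))

    def render(self):
        # Serialize chunks plus their relations into one long prompt string.
        parts = [f"[{cid}] {text}" for cid, text in self.chunks.items()]
        parts += [f"({e.dimension}) {' ~ '.join(e.members)}: {e.label}"
                  for e in self.edges]
        return "\n".join(parts)

For example, linking two report chunks with p.link("temporal", "doc1", "doc2", label="consecutive quarters") records a relation the model can be told to respect, instead of leaving it to infer the connection from position alone.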
Understanding the Scale of a 1 Million Token Context Window:
Imagine a context window of 1 million tokens as a vast library containing hundreds of books, thousands of pages, or hours of multimedia content, all accessible in a single prompt.
# imports
from PIL import Image
import numpy
import pytoshop
from pytoshop.user import nested_layers
from pytoshop import enums
from pytoshop.image_data import ImageData

# Convert a Pillow image to a pytoshop layer (not foolproof by any means).
# Hedged completion of the truncated gist: assumes nested_layers.Image
# accepts these keyword arguments, per pytoshop's nested_layers API.
def pillow_image_to_pyto_image(name, image, x, y, visible, opacity, group_id, blend_mode, metadata, layer_color, color_mode, width=0, height=0):
    if not width or not height:
        width, height = image.size
    arr = numpy.array(image.convert('RGB'))         # drop alpha for simplicity
    channels = {i: arr[:, :, i] for i in range(3)}  # 0=R, 1=G, 2=B
    return nested_layers.Image(
        name=name, visible=visible, opacity=opacity, group_id=group_id,
        blend_mode=blend_mode, top=y, left=x, bottom=y + height,
        right=x + width, channels=channels, metadata=metadata,
        layer_color=layer_color, color_mode=color_mode)
/**
 * Generate all the possible combinations among a set of nested arrays.
 *
 * @param array $data  The entrypoint array container.
 * @param array $all   The final container (used internally).
 * @param array $group The sub container (used internally).
 * @param mixed $value The value to append (used internally).
 * @param int   $i     The key index (used internally).
 */
function generate_combinations(array $data, array &$all = array(), array $group = array(), $value = null, $i = 0)
{
    // Hedged completion of the truncated gist: a standard recursive
    // cartesian product over $data's sub-arrays.
    if (isset($value)) { $group[] = $value; }  // value chosen one level up
    if ($i >= count($data)) {
        $all[] = $group;                       // one full combination collected
    } else {
        foreach ($data[$i] as $v) {
            generate_combinations($data, $all, $group, $v, $i + 1);
        }
    }
    return $all;
}