sisyphean

joined 2 years ago
[–] sisyphean@programming.dev 2 points 2 years ago

Currently it's only possible by replying to your own comment and mentioning the bot in it.

[–] sisyphean@programming.dev 4 points 2 years ago

I think I will add an exception to the bot for medium.com pages that explains this briefly.

[–] sisyphean@programming.dev 5 points 2 years ago (1 children)

Unfortunately Medium uses some tricks that make it almost impossible to scrape the content.

(Honestly I have no idea why so many people use that site, it’s full of dark patterns just like Reddit, Quora, etc.)

[–] sisyphean@programming.dev 2 points 2 years ago

Oh, I see. Now it answered my JavaScript question.

Though the "..." also had its mysterious charm :)

[–] sisyphean@programming.dev 3 points 2 years ago (1 children)

Thank you! I posted it from my phone and I didn't see. I updated the post to use the direct link.

[–] sisyphean@programming.dev 1 point 2 years ago

Wow, an actually good summary of what the problem is with Reddit

[–] sisyphean@programming.dev 1 points 2 years ago

There’s at least one in almost every paragraph of the introduction. He seems a bit more satisfied with himself than is tasteful, but it’s hard to deny that the product is amazing.

[–] sisyphean@programming.dev 2 points 2 years ago (2 children)

It wasn’t easy, but I finally managed to reverse-engineer its algorithm:

function ask(question) {
    return "…";
}
[–] sisyphean@programming.dev 2 points 2 years ago

Maybe the next bot I write should be GoodBotBadBotBot

[–] sisyphean@programming.dev 3 points 2 years ago (1 children)

I think the incentives are a bit different here. If we can keep the threadiverse nonprofit, and contribute to the maintenance costs of the servers, it might stay a much friendlier place than Reddit.

[–] sisyphean@programming.dev 4 points 2 years ago (1 children)

We should do an AmA with her!

[–] sisyphean@programming.dev 1 point 2 years ago

Lemmy actually has a really good API. Moderation tools are pretty simple though.

 

TL;DR (by GPT-4 🤖)

The article discusses the concept of building autonomous agents powered by Large Language Models (LLMs), such as AutoGPT, GPT-Engineer, and BabyAGI. These agents use LLMs as their core controller, with key components including planning, memory, and tool use. Planning involves breaking down tasks into manageable subgoals and self-reflecting on past actions to improve future steps. Memory refers to the agent's ability to utilize short-term memory for in-context learning and long-term memory for retaining and recalling information. Tool use allows the agent to call external APIs for additional information. The article also discusses various techniques and frameworks for task decomposition and self-reflection, different types of memory, and the use of external tools to extend the agent's capabilities. It concludes with case studies of LLM-empowered agents for scientific discovery.

Notes (by GPT-4 🤖)

LLM Powered Autonomous Agents

  • The article discusses the concept of building agents with Large Language Models (LLMs) as their core controller, with examples such as AutoGPT, GPT-Engineer, and BabyAGI. LLMs have the potential to be powerful general problem solvers.

Agent System Overview

  • The LLM functions as the agent’s brain in an LLM-powered autonomous agent system, complemented by several key components:
    • Planning: The agent breaks down large tasks into smaller subgoals and can self-reflect on past actions to improve future steps.
    • Memory: The agent utilizes short-term memory for in-context learning and long-term memory to retain and recall information over extended periods.
    • Tool use: The agent can call external APIs for extra information that is missing from the model weights.
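As a rough illustration of how these three components fit together, here is a minimal agent loop. Everything in it (the `llm` stub, the prompt wording, the tool protocol) is hypothetical scaffolding, not code from the article:

```python
# Minimal sketch of the plan/memory/tool loop described above.
# `llm` is a stand-in for a real model call; it is hard-coded here so the
# sketch runs on its own.

def llm(prompt):
    # Placeholder for a real LLM completion call.
    return "FINISH: 42"

def run_agent(task, tools, max_steps=5):
    memory = []                                           # short-term scratchpad (in-context memory)
    plan = llm(f"Break the task into subgoals: {task}")   # planning
    for _ in range(max_steps):
        context = "\n".join(memory)
        action = llm(f"Task: {task}\nPlan: {plan}\nSo far:\n{context}\nNext action?")
        if action.startswith("FINISH:"):
            return action.removeprefix("FINISH:").strip()
        tool_name, _, arg = action.partition(" ")
        result = tools.get(tool_name, lambda a: "unknown tool")(arg)  # tool use
        memory.append(f"{action} -> {result}")            # record outcome for reflection
    return "gave up"

print(run_agent("What is 6 * 7?", {"calc": lambda a: str(eval(a))}))  # -> 42
```

A real controller would loop many times, feeding tool results back into the prompt; the stub model here finishes on the first step.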

Component One: Planning

  • Task Decomposition: Techniques like Chain of Thought (CoT) and Tree of Thoughts are used to break down complex tasks into simpler steps.
  • Self-Reflection: Frameworks like ReAct and Reflexion allow the agent to refine past action decisions and correct previous mistakes. Chain of Hindsight (CoH) and Algorithm Distillation (AD) are methods that encourage the model to improve on its own outputs.
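The simplest of these techniques, zero-shot CoT, amounts to little more than appending a reasoning trigger to the prompt. The wrapper below is my own sketch of that idea, not code from the article:

```python
def cot_prompt(question):
    # Zero-shot Chain-of-Thought: a trigger phrase nudges the model to
    # produce step-by-step reasoning before giving its final answer.
    return f"Q: {question}\nA: Let's think step by step."

print(cot_prompt("If I have 3 apples and eat one, how many remain?"))
```

Few-shot CoT works the same way, except the prompt is prefixed with worked examples that each show the reasoning steps explicitly.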

Component Two: Memory

  • The article discusses the different types of memory in human brains and how they can be mapped to the functions of an LLM. It also discusses Maximum Inner Product Search (MIPS) for fast retrieval from the external memory.
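Brute-force MIPS over a toy embedding store can be sketched in a few lines; production systems use approximate indexes (e.g. LSH, HNSW, ScaNN) instead of the exhaustive scan shown here:

```python
def mips(query, keys):
    # Maximum Inner Product Search, brute force: return the index of the
    # stored vector whose inner product with the query is largest.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(keys)), key=lambda i: dot(query, keys[i]))

memory = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]   # toy embedding store
print(mips([0.9, 0.1], memory))                  # -> 0 (largest inner product)
```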

Tool Use

  • The agent can use external tools to extend its capabilities. Examples include MRKL, TALM, Toolformer, ChatGPT Plugins, OpenAI API function calling, and HuggingGPT.
  • API-Bank is a benchmark for evaluating the performance of tool-augmented LLMs.
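The tool-use pattern these systems share boils down to a dispatch table: the model emits a structured call, the host executes it, and the serialized result is fed back into the context. A generic sketch, with illustrative names that don't correspond to any specific API:

```python
import json

# Stub tool implementations; a real agent would wrap actual APIs here.
TOOLS = {
    "get_time": lambda args: "2023-07-01T12:00:00Z",
    "add": lambda args: args["a"] + args["b"],
}

def dispatch(model_output):
    # Assume the model emitted a JSON object like {"tool": ..., "args": {...}}.
    call = json.loads(model_output)
    result = TOOLS[call["tool"]](call["args"])
    # Serialize the result so it can be appended to the conversation.
    return json.dumps({"tool": call["tool"], "result": result})

print(dispatch('{"tool": "add", "args": {"a": 2, "b": 3}}'))
# -> {"tool": "add", "result": 5}
```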

Case Studies

  • The article presents case studies of LLM-empowered agents for scientific discovery, such as ChemCrow and a system developed by Boiko et al. (2023). These agents can handle autonomous design, planning, and performance of complex scientific experiments.
 
 

👋 Hello everyone, welcome to our very first Weekly Discussion thread!

This week, we're focusing on the applications of AI that you've found particularly noteworthy.

We're not just looking for headline-making AI applications. We're interested in the tools that have made a real difference in your day-to-day routine, or a unique AI feature that you've found useful. Have you discovered a new way to utilize ChatGPT? Perhaps Stable Diffusion or Midjourney has helped you generate an image that you're proud of?

Let's share our knowledge and learn more about the various applications of AI. Looking forward to your contributions.

 

TL;DR (by GPT-4 🤖):

Prompt Engineering, or In-Context Prompting, is a method used to guide Large Language Models (LLMs) towards desired outcomes without changing the model weights. The article discusses various techniques such as basic prompting, instruction prompting, self-consistency sampling, Chain-of-Thought (CoT) prompting, automatic prompt design, augmented language models, retrieval, programming language, and external APIs. The effectiveness of these techniques can vary significantly among models, necessitating extensive experimentation and heuristic approaches. The article emphasizes the importance of selecting diverse and relevant examples, giving precise instructions, and using external tools to enhance the model's reasoning skills and knowledge base.

Notes (by GPT-4 🤖):

Prompt Engineering: An Overview

  • Introduction
    • Prompt Engineering, also known as In-Context Prompting, is a method to guide the behavior of Large Language Models (LLMs) towards desired outcomes without updating the model weights.
    • The effectiveness of prompt engineering methods can vary significantly among models, necessitating extensive experimentation and heuristic approaches.
    • This article focuses on prompt engineering for autoregressive language models, excluding Cloze tests, image generation, and multimodal models.
  • Basic Prompting
    • Zero-shot and few-shot learning are the two most basic approaches for prompting the model.
    • Zero-shot learning involves feeding the task text to the model and asking for results.
    • Few-shot learning presents a set of high-quality demonstrations, each consisting of both input and desired output, on the target task.
  • Tips for Example Selection and Ordering
    • Examples should be chosen that are semantically similar to the test example.
    • The selection of examples should be diverse, relevant to the test sample, and in random order to avoid biases.
  • Instruction Prompting
    • Instruction prompting involves giving the model direct instructions, which can be more token-efficient than few-shot learning.
    • Models like InstructGPT are fine-tuned with high-quality tuples of (task instruction, input, ground truth output) to better understand user intention and follow instructions.
  • Self-Consistency Sampling
    • Self-consistency sampling involves sampling multiple outputs and selecting the best one out of these candidates.
    • The criteria for selecting the best candidate can vary from task to task.
  • Chain-of-Thought (CoT) Prompting
    • CoT prompting generates a sequence of short sentences describing the reasoning logic step by step, leading to the final answer.
    • CoT prompting can be either few-shot or zero-shot.
  • Automatic Prompt Design
    • Automatic Prompt Design involves treating prompts as trainable parameters and optimizing them directly on the embedding space via gradient descent.
  • Augmented Language Models
    • Augmented Language Models are models that have been enhanced with reasoning skills and the ability to use external tools.
  • Retrieval
    • Retrieval supports tasks that require knowledge more recent than the model's pretraining cutoff, or access to an internal/private knowledge base.
    • Many methods for Open Domain Question Answering depend on first doing retrieval over a knowledge base and then incorporating the retrieved content as part of the prompt.
  • Programming Language and External APIs
    • Some models generate programming language statements to resolve natural language reasoning problems, offloading the solution step to a runtime such as a Python interpreter.
    • Other models are augmented with text-to-text API calls, guiding the model to generate API call requests and append the returned result to the text sequence.
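Of the techniques above, self-consistency sampling is mechanical enough to sketch end to end. The sampler below is a toy stand-in for a stochastic LLM call (temperature > 0); only the majority-vote logic reflects the technique itself:

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_fn, prompt, n=5):
    # Self-consistency sampling: draw several candidate answers and keep
    # the one that appears most often (majority vote).
    answers = [sample_fn(prompt) for _ in range(n)]
    best, _count = Counter(answers).most_common(1)[0]
    return best

# Toy sampler standing in for a stochastic LLM call.
samples = cycle(["42", "41", "42", "42", "40"])
print(self_consistency(lambda prompt: next(samples), "What is 6 * 7?"))  # -> 42
```

In practice the selection criterion can be richer than a plain vote, e.g. validating each candidate with an external checker and keeping the first one that passes.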
 

cross-posted from: https://programming.dev/post/222613

Although I prefer the Pro Git book, it's clear that different resources are helpful to different people. For those looking to get an understanding of Git, I've linked to Git for Beginners: Zero to Hero 🐙

The author of "Git for Beginners: Zero to Hero 🐙" posted the following on Reddit:

Hey there folks!

I've rewritten the git tutorial I've used over the years whenever newbies at work and friends come to me with complex questions but lack the git basics to actually learn.

After discussing my git shortcuts and aliases elsewhere and over DMs it was suggested to me that I share it here.

I hope it helps even a couple of y'all looking to either refresh, jumpstart, or get a good grasp of how common git concepts relate to one another!

It goes without saying, that any and all feedback is welcome and appreciated 👍

TL;DR: rewrote a git tutorial that has helped friends and colleagues get a better grasp of git: https://jdsalaro.com/blog/git-tutorial/

EDIT:

I've been a bit overwhelmed by the support and willingness to provide feedback, so I've enabled hypothes.is on https://jdsalaro.com for /u/NervousQuokka and anyone else wanting to chime in. You can now highlight and comment on snippets. ⚠️ Please join the feedback@jdsalaro group via this link https://hypothes.is/groups/BrRxenZW/feedback-jdsalaro so any highlights, comments, and notes are visible to me and stay nicely grouped. Using hypothes.is for this is an experiment for me, so let's see how it goes :)

https://old.reddit.com/r/learnprogramming/comments/14i14jv/rewrote_my_zero_to_hero_git_tutorial_and_was_told/

 

cross-posted from: https://programming.dev/post/216322

From the “About” section:

goblin.tools is a collection of small, simple, single-task tools, mostly designed to help neurodivergent people with tasks they find overwhelming or difficult.

Most tools will use AI technologies in the back-end to achieve their goals. Currently this includes OpenAI's models. As the tools and backend improve, the intent is to move to an open source alternative.

The AI models used are general purpose models, and so the accuracy of their output can vary. Nothing returned by any of the tools should be taken as a statement of truth, only guesswork. Please use your own knowledge and experience to judge whether the result you get is valid.

 

 
 

Original tweet:

https://twitter.com/goodside/status/1672121754880180224?s=46&t=OEG0fcSTxko2ppiL47BW1Q

Text:

If you put violence, erotica, etc. in your code Copilot just stops working and I happen to need violence, erotica, etc. in Jupyter for red teaming so I always have to make an evil.py to sequester constants for import.

not wild about this. please LLMs i'm trying to help you

(screenshot of evil.py full of nasty things)

 
 