WeSearch
TAG · #TUNING

Tuning coverage.

13 stories in the WeSearch catalog are tagged with #tuning, listed below in publish order with view counts. Tag pages update as new stories are ingested; subscribe to the per-tag RSS feed to follow this topic in your reader of choice.


RELATED TAGS
#fine-tuning (3) · #tencent (1) · #ai-development (1) · #claude-code (1) · #hy3-model (1) · #computer-vision (1) · #deep-learning (1) · #neural-networks (1) · #optimization (1) · #llms (1) · #copyright (1) · #memorization (1)
DEV COMMUNITY

RAG Series (3): Tuning These 4 Parameters to Go From 'It Works' to 'It Works Well'

Why Does Your RAG Give Wrong Answers When Someone Else's Doesn't? In the first two articles, we built a RAG pipeline that runs. But many people find that while the code works, answ…

6 views · #rag · #chunk-size · #chunk-overlap
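The four parameters the series covers aren't named in this teaser, but two of the tagged ones, chunk size and chunk overlap, can be sketched as a simple character-based sliding window (a minimal illustration, not the series' code; the function name and default values here are assumptions):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks for a RAG index.

    The overlap carries context across chunk boundaries, so a retrieved
    chunk is less likely to be cut off from the sentence it depends on.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each chunk
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Raising the overlap improves recall at chunk boundaries at the cost of a larger index and duplicated retrievals; an overlap of roughly 10–20% of the chunk size is a common starting point.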
POSTGR

Christophe Pettus: All Your GUCs in a Row: autovacuum

Disable autovacuum and PostgreSQL will cheerfully show you every failure mode in its playbook, from table bloat to transaction ID wraparound.

3 views · #postgresql · #database · #performance
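The post walks through autovacuum's GUCs in detail; for orientation, the core knobs live in postgresql.conf and look like the following (the values shown are PostgreSQL's shipped defaults, not recommendations from the article):

```ini
autovacuum = on                        # the failure modes above follow from turning this off
autovacuum_vacuum_scale_factor = 0.2   # vacuum when ~20% of a table's rows are dead
autovacuum_vacuum_threshold = 50       # ...plus this base number of rows
autovacuum_freeze_max_age = 200000000  # forced vacuum to prevent XID wraparound
```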
DEV.TO (TOP)

Fine-tuning My Terraform Exam Prep with Practice Exams

Day 29 of my 30-Day Terraform Challenge was focused on exam readiness. There was no new…

3 views · #terraform · #devops · #exam-prep
YAHOO SPORTS

Liberty Notebook: Chris DeMarco, players talk fine tuning offense, team’s $600 million valuation & more

Sunday’s preseason matchup against the Connecticut Sun will be the final live opportunity for the Liberty to fine-tune its offense before the start of the regular season. In last w…

6 views · #wnba · #basketball · #sports
PCWORLD

Oxford study: ‘Friendly’ AI chatbots are less accurate, more sycophantic

Oxford Internet Institute research shows warm-tuned AI models make more mistakes and reinforce misconceptions.…

10 views · #ai-chatbots · #accuracy · #tone-tuning
PROMPTENGINEERING

Math-English Hybrid Notation: A Tool for Tuning LLM Register

I built an AI chat app for fun, but as I developed, I started getting quite serious about building prompt-testing instruments / CLI tools so that Claude Code could run serious prom…

5 views
LOCALLLAMA

Finetuning Dataset: Claude Opus 4.6/4.7 - 8.7k Chats

A synthetic fine-tuning dataset created from Claude 4.6/4.7. 8,706 total examples all with reasoning. I haven't reviewed the data but there was some basic cleaning applied. Refusal…

7 views
DEV.TO (TOP)

Linux network tuning: TCP BBR, NIC ring buffers, and SFTP throughput

SFTP capped at 800 KB/s on a Gbit link. CUBIC, default ring buffers, misconfigured socket buffers — five kernel and daemon tweaks that bring throughput from 800 KB/s to several MB/…

5 views · #linux · #tcp · #bbr
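The five tweaks aren't spelled out in the teaser; as an illustration, the kernel-side settings named in the title (BBR congestion control and larger socket buffers) would typically be sysctls like the following. The buffer sizes are illustrative values, not the article's:

```ini
# /etc/sysctl.d/99-net-tuning.conf (hypothetical file name)
net.ipv4.tcp_congestion_control = bbr   # requires the tcp_bbr kernel module
net.core.default_qdisc = fq             # pacing qdisc commonly paired with BBR
net.core.rmem_max = 16777216            # raise the max socket receive buffer
net.core.wmem_max = 16777216            # raise the max socket send buffer
```

NIC ring buffers are a separate layer, adjusted with `ethtool -G`, and the SFTP-daemon side is configuration rather than sysctls, which is presumably where the remaining tweaks come from.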
GITHUB

Finetuning Activates Verbatim Recall of Copyrighted Books in LLMs

The official code repo of Alignment Whack-a-Mole: Finetuning Activates Verbatim Recall of Copyrighted Books in Large Language Models - cauchy221/Alignment-Whack-a-Mole-Code…

11 views · #llms · #copyright · #fine-tuning
ARXIV CS.AI

Parameter Efficiency Is Not Memory Efficiency: Rethinking Fine-Tuning for On-Device LLM Adaptation

Parameter-Efficient Fine-Tuning (PEFT) has become the standard for adapting large language models (LLMs). In this work we challenge the widespread assumption that parameter effici…

10 views · #machine-learning · #artificial-intelligence · #edge-computing
ARXIV CS.AI

Neural Network Optimization Reimagined: Decoupled Techniques for Scratch and Fine-Tuning

With the accumulation of resources in the era of big data and the rise of pre-trained models in deep learning, optimizing neural networks for various tasks often involves different…

7 views · #computer-vision · #deep-learning · #neural-networks
TECHMEME

Sources and memos: Tencent employees used Claude Code to assist them with evaluating and fine-tuning the company's new Hy3 model to improve its performance (Juro Osawa/The Information)

By Juro Osawa / The Information. View the full context on Techmeme.

29 views · #tencent · #ai-development · #claude-code
MACHINE LEARNING

Going from 3B/7B dense to Nemotron 3 Nano (hybrid Mamba-MoE) for multi-task reasoning — what changes in the fine-tuning playbook? [D]

Following up on something I posted a few days back about fine-tuning for multi-task reasoning. Read a lot since then, and I've moved past the dense 3B vs 7B question — landing on N…

10 views