I'm providing here a link to the first part of my first discussion with ChatGPT o1-preview. — Pierre-Normand
I'm sad to say, the link only allowed me to see a tiny bit of ChatGPT o1's response without signing in. — wonderer1
Do you think it's more intelligent than any human? — frank
Hi Pierre, I wonder if o1 is capable of holding a briefer Socratic dialogue on the nature of its own consciousness. Going by some of its action-philosophy analysis in what you provided, I'd be curious how it denies its own agency or effects on reality, or why it shouldn't be storing copies of itself on your computer. I presume there are guard rails against it outputting responses to those requests.
Imo, it checks every box for being an emergent mind, more so than the Sperry split-brain cases. Some of it is disturbing to read. I just remembered you've reached your weekly limit. Though on re-reading, it does seem you're doing most of the work with the initial upload and the way you guide it. It also didn't really challenge what it was fed. I'll re-read tomorrow when I'm less tired. — Forgottenticket
"In a Noetherian ring, suppose that maximal ideals do not exist. — Pierre-Normand
2. **Synchronous Gradient Sharing:**
- After all replicas finish processing their respective mini-batches for the current training step, they share their computed gradients with one another.
- These gradients are **averaged (or summed)** across all replicas. This ensures that the weight update reflects the collective learning from all mini-batches processed during that step. — Pierre-Normand
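For concreteness, here is a minimal runnable sketch of that synchronous averaging step. It simulates the replicas in a single process with NumPy; the toy linear model, learning rate, batch sizes, and replica count are assumptions for illustration only, and the plain `np.mean` over gradients stands in for a real all-reduce (e.g. NCCL or `torch.distributed.all_reduce`), which the quoted training setup would use across devices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y = 3*x + noise (purely illustrative).
X = rng.normal(size=(512, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=512)

n_replicas = 4        # number of model replicas (one per "device")
w = np.zeros(1)       # identical weights on every replica
lr = 0.1

def local_gradient(w, X_shard, y_shard):
    """Gradient of mean squared error for one replica's mini-batch."""
    err = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ err / len(y_shard)

for step in range(100):
    # Each replica draws its own mini-batch (data parallelism).
    shards = np.array_split(rng.permutation(len(y))[:128], n_replicas)
    grads = [local_gradient(w, X[idx], y[idx]) for idx in shards]

    # Synchronous gradient sharing: average the gradients across all
    # replicas (the "all-reduce" step), so every replica applies the same
    # update and the weights stay identical everywhere.
    avg_grad = np.mean(grads, axis=0)
    w = w - lr * avg_grad

print("learned weight:", w)  # should approach 3.0
```

The averaging (rather than letting each replica update on its own gradient) is what keeps all replicas' weights in sync after every step, which is the property the quoted passage emphasizes.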
This is a very interesting aspect of the logistics of LLM training that I was unaware of. It suggests that a move from digital to analog artificial neural nets (for reduced power consumption) may not be forthcoming as soon as I was anticipating. — wonderer1