Proud globohomo
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    6h ago 95%

    This is an actual term apparently in common circulation, though homophobia wasn't even its original intent: https://glaad.org/globohomo-definition-meaning-anti-lgbt-online-hate/

    > Emerging in 2016, this multi-purpose, right-wing troll invention combines homophobia and anti-Semitism. Researcher David Futrelle’s well-known misogyny tracking site We Hunted The Mammoth offers this summary: “Ostensibly, ‘globohomo’ is short for ‘global homogenization,’ an alleged vast conspiracy to destroy ‘traditional’ culture and values and replace them with a sort of global (naturally) corporate uniculture. But it’s rarely used in this way, at least not exactly. For those who’ve seized upon the term, ‘globo’ means ‘globalist’ and therefore Jews; while ‘homo’ (the suffix) means, well, ‘homo’ (the slur). (Some, evidently worried that ‘globohomo’ isn’t gay-sounding enough, add ‘gayplex’ to it — ‘globohomogayplex.’).” According to the Online Hate Research and Education Project, white nationalists and other hate movements use “globohomo” to allege the existence of a global plot to promote the so-called ‘‘LGBTQ+ agenda,” a similarly minded conspiracy theory (promoted by certain sectors of the Christian religious right) alleging that LGBTQ people aim to surpass the rights of other groups and “groom youth into identifying as part of the community.”

    20
  • Lemmy's gaining popularity, so I thought new people should see this.
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    11h ago 71%

    > Lemmy.world and sh.itjust.works don’t seem to have any noticeable political leanings as far as I can tell.

    ...What?

    I consider myself a raging liberal, at least in the US. A socialist. But lemmy.world is so liberal it makes me feel like a Trumpster.

    I guess I don't feel at risk of getting globally banned like I would for disagreeing with the consensus like on .ml, but claiming .world is neutral is quite a sweeping statement.

    9
  • Zelenskyy said NATO or nuclear and he ain’t wrong.
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    21h ago 100%

    In the UN:

    Russia's delegate is beyond furious. Most everyone has an awkward look. China is getting very annoyed with their vassal's war, and someone on the floor:

    *wringing hands* "Well, technically it's within Ukraine's rights..."

    10
  • The Satanic Temple is taking on the Christian right. It may be effective – it’s definitely fun to watch
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    24h ago 100%

    > why bother?

    They need to go the extra mile to make it as difficult as possible to challenge them legally. Christian hospitals and such aren't going to get investigated or sued for being Christian hospitals, but the Satanic Temple won't have it easy anywhere.

    20
  • Elon Musk’s fake sites and fake texts impersonating the Harris campaign
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 33%

    I hope not. It would be a shame to dump SpaceX in particular for purely political reasons, when (so far) they've been the best contractor by far.

    As the head of NASA said, they don't work with Elon Musk, they work with SpaceX, and the relationship has been good.

    I do hope they leave X though, and just promote a Fediverse instance or something instead. That's more of Elon's pet project/playground now.

    -1
  • The Satanic Temple is taking on the Christian right. It may be effective – it’s definitely fun to watch
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    It would be absolutely hilarious (and sad) if the abortion-as-a-sacrament effort takes off, and clinics in abortion-free states start becoming official Satanic Temple places of worship left and right, complete with minimal decorations and everything, clinic doctors as priests, things like that.

    They'd probably have to win a court challenge first, but that could totally happen.

    Like, what's the bare minimum for something to legally be considered a sacrament? What about a place of worship? What legal avenues can states even approach without running into a strong constitutional conflict? All these edge cases are fascinating to me, especially with how much of a headache they'll give state legislatures.

    30
  • US ready to invite Ukraine to NATO - Le Monde
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    > According to an anonymous European diplomat, if Democratic candidate Kamala Harris wins the US presidential election, it can be assumed that Joe Biden will start working on an invitation to Ukraine during the transition period.

    The implication though... what if she doesn't?

    I guess they can't invite Ukraine in the transition period if Trump would shoot it down? But why can't they just rush the ratification?

    8
  • Leaked U.S. Intelligence Suggests Israel Is Preparing to Strike Iran
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    > If Ehud Barak had gone back to the Israeli people with “You have to give them back their houses and stop encircling/blockading their settlements”, he’d have been assassinated by the Israelis.

    Isn't that the nature of a "winner takes all" knife's edge political system, though? If the opposition were in power, they would have done something like this, and Israel would hate it, but they'd have to take it just like they took what they didn't like over the past decades. Maybe they'd lose the next election (and get assassinated), but the deed would already be done.

    ...Or maybe I'm totally wrong.

    1
  • Zelenskyy said NATO or nuclear and he ain’t wrong.
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 80%

    Apologies for being rude.

    Yeah, Trump's fascination with strongmen is more of a personality quirk than policy, but the attitude of the Republican Party has abruptly shifted from "anti Russia/China" to more universally protectionist and isolationist. If you watch Tucker Carlson (for instance), you'll hear a lot of questioning like "why should we have to pay for all this madness overseas?" and accusations that it's feeding the US military-industrial complex... and there's a nugget of truth there. The oldschool Republicans have been steadily losing power, and this is kinda the tipping point.

    If Trump wins, expect to see a lot of noise about withdrawing from NATO, pulling out of large trade agreements, "abruptly" settling disputes, tariffs. Things like that, basically the exact opposite of the old neoliberal paradigm.

    He also holds vicious grudges, something he did before he even got into politics, so that may color some foreign policy as well. If he's acting strange towards some person in particular on the news, search for "Trump (X) controversy," and something from before 2020 will probably come up.

    3
  • Elon Musk’s fake sites and fake texts impersonating the Harris campaign
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    What's his fate if Harris wins? He's clearly done away with any pretense of being politically neutral.

    And I know Harris isn't a jerk, but man, I'd be salty AF if I were her. He would be right to fear some revenge from Democrats at this point.

    10
  • Former PlayStation boss says games are "seeing a collapse in creativity" as publishers spend more time asking "what's your monetization scheme?"
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    I think there's also a "Netflix effect" where old games are increasingly accessible as an alternative to newer crap, kinda like (from my personal observations) how a lot of young people seem to be really fluent in old movies and TV due to streaming and YT.

    It's going to bite these publishers in the bum.

    34
  • Leaked U.S. Intelligence Suggests Israel Is Preparing to Strike Iran
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    I'm no expert, but I remember Netanyahu’s opposition supporting a two-state solution and other much more reasonable approaches than the horrible status quo. But it never quite hit critical mass, right?

    And Netanyahu sure seems like the person who clung to power and just barely stopped that opposition from ever taking root.

    1
  • Zelenskyy said NATO or nuclear and he ain’t wrong.
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 87%

    Are you kidding? Trump hates Zelensky with a burning passion, because he feels personally wronged by him.

    https://en.wikipedia.org/wiki/Trump–Ukraine_scandal

    The Trump–Ukraine scandal was a political scandal that arose primarily from the discovery of U.S. President Donald Trump's attempts to coerce Ukraine into investigating his political rival Joe Biden and thus potentially damage Biden's campaign for the 2020 Democratic Party presidential nomination.

    He's going to screw Ukraine and offer Russia a favorable capitulation the absolute first second he can. And probably offer Russia Zelensky if he can manage it.

    The Republicans are increasingly turning anti-NATO as well.

    Oldschool Republican lawmakers 100% support Ukraine, maybe even more strongly than Democrats do. Some are still hanging around the Senate, but most are gone or retiring soon (like Mitch McConnell), and they're already gone from the U.S. House and Trump's cabinet.

    edit: Now that I think about it, Mike Johnson (the US House Speaker) did make a surprise decision in support of Ukraine and in defiance of his own party, but his position as speaker is extremely precarious. I don't think that will happen again.

    6
  • Goodbye [System32 Comics]
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    It's unbelievably time-inefficient for... anything.

    And it's incredibly engaging. I burnt through so much time shooting the breeze in hopes of actually finding something interesting, notification spam, checking channels... It's why I deleted it from everywhere. And it left a gaping hole in my life, because it's the only place some niche communities exist now.

    6
  • Leaked U.S. Intelligence Suggests Israel Is Preparing to Strike Iran
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 77%

    Biden: Please no...

    Netanyahu: Waves eyebrows.

    Ugh, what a toxic relationship. The U.S. bends over backwards for Netanyahu (not Israel, Netanyahu), and he's going to snipe Biden/Harris at the last second anyway. He could literally cost Harris the election.

    And, you know, lead to tons of death and destruction and literal genocide. But that's a secondary concern in his quest to stay in power.

    15
  • Zelenskyy said NATO or nuclear and he ain’t wrong.
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    The engineering for a plutonium nuke is not trivial. A U-235 one is dead simple, but they probably have plutonium from reactors, not U-235 from centrifuges.

    And yeah, they undoubtedly have Soviet blueprints under a mattress somewhere.

    3
  • Zelenskyy said NATO or nuclear and he ain’t wrong.
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    > As a former SSR that held nuclear weapons on its territory before 1968, they even oughta be free and clear with respect to the non-proliferation treaty.

    Is that true? If the worst comes to pass, I wonder what the UN will say (not that it matters...)

    2
  • Zelenskyy said NATO or nuclear and he ain’t wrong.
  • "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearBR
    brucethemoose
    1d ago 100%

    The support may be dropping away anyway.

    Imagine a right wing US/EU election sweep from Zelensky's point of view. They're going to force Ukraine to capitulate, and in a very lopsided manner that cripples Ukraine forever, hence this could be an actual option/last resort more than a threat.

    10
  • I see a lot of talk of Ollama here, which I personally don't like because:

    - The quantizations they use tend to be suboptimal
    - It abstracts away llama.cpp in a way that, frankly, leaves a lot of performance and quality on the table.
    - It abstracts away things that you should *really* know for hosting LLMs.
    - I don't like some things about the devs. I won't rant, but I *especially* don't like the hint they're cooking up something commercial.

    So, here's a quick guide to get away from Ollama.

    - First step is to pick your OS. Windows is fine, but if setting up something new, Linux is best. I favor CachyOS in particular, for its great Python performance. If you use Windows, be sure to enable hardware-accelerated scheduling and [disable shared memory.](https://support.cognex.com/docs/deep-learning_330/web/EN/deep-learning/Content/deep-learning-Topics/optimization/gpu-disable-shared.htm?TocPath=Optimization%20Guidelines%7CNVIDIA%C2%AE%20GPU%20Guidelines%7C_____6)
    - Ensure the latest version of CUDA (or ROCm, if using AMD) is installed. Linux is great for this, as many distros package them for you.
    - Install Python 3.11.x or 3.12.x (or at least whatever your distro supports), and git. If on Linux, also install your distro's "build tools" package.

    Now for *actually* installing the runtime. There are a great number of inference engines supporting different quantizations; forgive the Reddit link, but see: https://old.reddit.com/r/LocalLLaMA/comments/1fg3jgr/a_large_table_of_inference_engines_and_supported/

    As far as I am concerned, three matter to "home" hosters on consumer GPUs:

    - Exllama (and by extension TabbyAPI), a very fast, very memory-efficient "GPU only" runtime; supports AMD via ROCm and Nvidia via CUDA: https://github.com/theroyallab/tabbyAPI
    - Aphrodite Engine. While not strictly as VRAM efficient, it's much faster with parallel API calls, reasonably efficient at very short context, and supports just about every quantization under the sun and more exotic models than exllama. AMD/Nvidia only: https://github.com/PygmalionAI/Aphrodite-engine
    - This fork of kobold.cpp, which supports more fine-grained KV cache quantization (we will get to that). It supports CPU offloading and *I think* Apple Metal: https://github.com/Nexesenex/croco.cpp

    Now, there are also reasons I don't like llama.cpp, but one of the big ones is that sometimes its model implementations have... quality-degrading issues, or odd bugs. Hence I would generally recommend TabbyAPI if you have enough VRAM to avoid offloading to CPU, and can figure out how to set it up. So:

    - Open a terminal and run `git clone https://github.com/theroyallab/tabbyAPI.git`
    - `cd tabbyAPI`
    - Follow this guide for setting up a Python venv and installing PyTorch and tabbyAPI: https://github.com/theroyallab/tabbyAPI/wiki/01.-Getting-Started#installing This can go wrong; if anyone gets stuck, I can help with that.
    - *Next*, figure out how much VRAM you have.
    - Figure out how much "context" you want, aka how much text the LLM can ingest. If a model has a context length of, say, "8K", that means it can support 8K tokens as input, or less than 8K words. Not all tokenizers are the same; some, like Qwen 2.5's, can fit nearly a word per token, while others are more in the ballpark of half a word per token or less.
    - Keep in mind that the actual context length of many models is an outright lie, see: https://github.com/hsiehjackson/RULER
    - Exllama has a feature called "KV cache quantization" that can dramatically shrink the VRAM the "context" of an LLM takes up. Unlike llama.cpp's, its Q4 cache is basically lossless, and on a model like Command-R, an 80K+ context can take up less than 4GB! It's essential to enable Q4 or Q6 cache to squeeze as much LLM as you can into your GPU.
    - With that in mind, you can search Hugging Face for your desired model. Since we are using tabbyAPI, we want to search for "exl2" quantizations: https://huggingface.co/models?sort=modified&search=exl2
    - There are all sorts of finetunes... and a lot of straight-up garbage. But I will post some general recommendations based on total VRAM:
      - 4GB: A very small quantization of Qwen 2.5 7B. Or maybe Llama 3B.
      - 6GB: IMO Llama 3.1 8B is best here. There are many finetunes of this depending on what you want (horny chat, tool usage, math, whatever). For coding, I would recommend Qwen 7B coder instead: https://huggingface.co/models?sort=trending&search=qwen+7b+exl2
      - 8GB-12GB: Qwen 2.5 14B is king! Unlike its 7B counterpart, I find the 14B version of the model incredible for its size, and it will squeeze into this VRAM pool (albeit with very short context/tight quantization for the 8GB cards). I would recommend trying Arcee's new distillation in particular: https://huggingface.co/bartowski/SuperNova-Medius-exl2
      - 16GB: Mistral 22B, Mistral Coder 22B, and *very tight* quantizations of Qwen 2.5 34B are possible. Honorable mention goes to InternLM 2.5 20B, which is alright even at 128K context.
      - 20GB-24GB: [Command-R 2024 35B](https://huggingface.co/async0x42/c4ai-command-r-08-2024-exl2_4bpw) is excellent for "in context" work, like asking questions about long documents, continuing long stories, anything involving working "with" the text you feed to an LLM rather than pulling from its internal knowledge pool. It's also quite good at longer contexts, out to 64K-80K more or less, all of which fits in 24GB. Otherwise, stick to Qwen 2.5 34B, which still has a very respectable 32K native context, and a rather mediocre 64K "extended" context via YaRN: https://huggingface.co/DrNicefellow/Qwen2.5-32B-Instruct-4.25bpw-exl2
      - 32GB: same as 24GB, just with a higher-bpw quantization. But this is also the threshold where lower-bpw quantizations of Qwen 2.5 72B (at short context) start to make sense.
      - 48GB: Llama 3.1 70B (for longer context) or Qwen 2.5 72B (for 32K context or less)
    - Again, browse Hugging Face and pick an exl2 quantization that will cleanly fill your VRAM pool + the amount of context you want to specify in TabbyAPI. Many quantizers such as bartowski will list how much space they take up, but you can also just look at the available file size.
    - Now... you have to download the model. Bartowski has instructions [here](https://huggingface.co/bartowski/SuperNova-Medius-exl2#download-instructions), but I prefer to use this nifty standalone tool instead: https://github.com/bodaay/HuggingFaceModelDownloader
    - Put it in your TabbyAPI models folder, and follow the documentation on the wiki.
    - There are a lot of options. Some to keep in mind are chunk_size (higher than 2048 will process long contexts faster but take up lots of VRAM; less will save a little VRAM), cache_mode (use Q4 for long context, Q6/Q8 for short context if you have room), max_seq_len (this is your context length), tensor_parallel (for faster inference with 2 identical GPUs), and max_batch_size (parallel processing if you have multiple users hitting the tabbyAPI server, but more VRAM usage).
    - Now... pick your frontend. The tabbyAPI wiki has a good compilation of community projects, but Open Web UI is very popular right now: https://github.com/open-webui/open-webui I personally use exui: https://github.com/turboderp/exui
    - And *be careful with your sampling settings when using LLMs*. Different models behave differently, but one of the most common mistakes people make is using "old" sampling parameters for new models. In general, keep temperature very low (<0.1, or even zero) and rep penalty low (1.01?) unless you need long, creative responses. If available in your UI, enable DRY sampling to tamp down repetition without "dumbing down" the model with too much temperature or repetition penalty. Always use a MinP of 0.05 or higher and disable other samplers. This is especially important for Chinese models like Qwen, as MinP cuts out "wrong language" answers from the response.
    - Once this is all set up and running, I'd recommend throttling your GPU, as it simply doesn't need its full core speed to maximize its inference speed while generating. For my 3090, I use something like `sudo nvidia-smi -pl 290`, which throttles it down from 420W to 290W.

    Sorry for the wall of text! I can keep going, discussing kobold.cpp/llama.cpp, Aphrodite, exotic quantization and other niches like that if anyone is interested.
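    As a rough sanity check for the "figure out how much VRAM you have" step above, here's a back-of-the-envelope estimator. This is my own sketch, not part of TabbyAPI; the layer/head counts you'd plug in vary per model and have to come from each model's config:

    ```python
    def weights_vram_gb(params_billions: float, bpw: float) -> float:
        """Rough VRAM for the weights alone: params * bits-per-weight / 8."""
        return params_billions * bpw / 8

    def kv_cache_gb(context: int, layers: int, kv_heads: int,
                    head_dim: int, cache_bits: int) -> float:
        """Rough KV cache size: 2 tensors (K and V) per layer, per token."""
        bytes_per_token = 2 * layers * kv_heads * head_dim * cache_bits / 8
        return context * bytes_per_token / 1e9

    # e.g. a 72B model at 4.0bpw needs ~36 GB for the weights alone,
    # before you add any context on top:
    print(weights_vram_gb(72, 4.0))  # → 36.0
    ```

    The second function is where Q4/Q6 cache pays off: halving `cache_bits` halves the context's VRAM footprint, which is exactly why long contexts become practical with cache quantization enabled.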

    316
    83
    qwenlm.github.io

    cross-posted from: https://lemmy.world/post/19925986

    > https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e
    >
    > Qwen 2.5 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B just came out, with some variants in some sizes just for math or coding, and base models too.
    >
    > All Apache licensed, all 128K context, and the 128K seems legit (unlike Mistral).
    >
    > And it's pretty sick, with a tokenizer that's more efficient than Mistral's or Cohere's and benchmark scores even better than llama 3.1 or mistral in similar sizes, especially with newer metrics like MMLU-Pro and GPQA.
    >
    > I am running 34B locally, and it seems super smart!
    >
    > As long as the benchmarks aren't straight up lies/trained, this is massive, and just made a whole bunch of models obsolete.
    >
    > Get usable quants here:
    >
    > GGUF: https://huggingface.co/bartowski?search_models=qwen2.5
    >
    > EXL2: https://huggingface.co/models?sort=modified&search=exl2+qwen2.5

    17
    0
    qwenlm.github.io

    https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e

    Qwen 2.5 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B just came out, with some variants in some sizes just for math or coding, and base models too.

    All Apache licensed, all 128K context, and the 128K seems legit (unlike Mistral).

    And it's pretty sick, with a tokenizer that's more efficient than Mistral's or Cohere's and benchmark scores even better than llama 3.1 or mistral in similar sizes, especially with newer metrics like MMLU-Pro and GPQA.

    I am running 34B locally, and it seems super smart!

    As long as the benchmarks aren't straight up lies/trained, this is massive, and just made a whole bunch of models obsolete.

    Get usable quants here:

    GGUF: https://huggingface.co/bartowski?search_models=qwen2.5

    EXL2: https://huggingface.co/models?sort=modified&search=exl2+qwen2.5

    14
    0

    Obviously there's not a lot of love for OpenAI and other corporate API generative AI here, but how does the community feel about self-hosted models? Especially stuff like the Linux Foundation's Open Model Initiative?

    I feel like a lot of people just don't know there are Apache/CC-BY-NC licensed "AI" they can run on sane desktops, right now, that are *incredible*. I'm thinking of the most recent Command-R, specifically. I can run it on one GPU, and it blows expensive API models away, and it's *mine* to use.

    And there are efforts to kill the power cost of inference and training with stuff like matrix-multiplication-free models, open source and legally licensed datasets, cheap training... and OpenAI and such want to shut down *all of this* because it breaks their monopoly, where they can just outspend everyone scaling, stealing data and destroying the planet. And it's actually a threat to them.

    Again, I feel like corporate social media vs. the Fediverse is a good analogy, where one is kinda destroying the planet and the other, while still niche, problematic and a WIP, kills a lot of the downsides.

    65
    62
    huggingface.co

    cross-posted from: https://lemmy.world/post/19242887

    > I can run the full 131K context with a 3.75bpw quantization, and still a very long one at 4bpw. And it should *barely* be fine-tunable in unsloth as well.
    >
    > It's pretty much perfect! Unlike the last iteration, they're using very aggressive GQA, which makes the context small, and it feels really smart at long context stuff like storytelling, RAG, document analysis and things like that (whereas Gemma 27B and Mistral Code 22B are probably better suited to short chats/code).

    10
    0
    huggingface.co

    I can run the full 131K context with a 3.75bpw quantization, and still a very long one at 4bpw. And it should *barely* be fine-tunable in unsloth as well.

    It's pretty much perfect! Unlike the last iteration, they're using very aggressive GQA, which makes the context small, and it feels really smart at long context stuff like storytelling, RAG, document analysis and things like that (whereas Gemma 27B and Mistral Code 22B are probably better suited to short chats/code).
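    To see why aggressive GQA makes the context so small, here's the back-of-the-envelope KV cache math. The layer/head/dim figures below are illustrative assumptions, not taken from the actual model config:

    ```python
    # KV cache bytes per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/element
    layers, kv_heads, head_dim = 40, 8, 128  # assumed GQA config, for illustration
    q4_bytes = 0.5                           # Q4 cache ≈ half a byte per element
    per_token = 2 * layers * kv_heads * head_dim * q4_bytes

    # With only 8 KV heads (GQA) and Q4 cache, even a huge context stays small:
    print(per_token * 80_000 / 1e9)  # ~3.3 GB for an 80K-token context
    ```

    With full multi-head attention (e.g. 32+ KV heads instead of 8) and an FP16 cache, the same context would be well over an order of magnitude larger, which is the whole story behind "makes the context small."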

    17
    0
    https://www.axios.com/2024/08/14/gaza-ceasefire-hostage-deal-talks-us-pressure

    > Senior U.S., Qatari, Egyptian and Israeli officials will meet on Thursday under intense pressure to reach a breakthrough on the Gaza hostage and ceasefire deal.
    >
    > The heads of the Israeli security and intelligence services told Netanyahu at the meeting on Wednesday that time is running out to reach a deal and emphasized that delay and insistence on certain positions in the negotiations could cost the lives of hostages, a senior Israeli official said.

    29
    6
    https://videocardz.com/newz/alleged-amd-strix-halo-appears-in-the-very-first-benchmark-features-5-36-ghz-clock

    HP is apparently testing these upcoming APUs in a single, 8-core configuration. The Geekbench 5 ST score is around 2100, which is crazy... but not what I really care about.

    Strix Halo will have a 256-bit memory bus and 40 CUs, which will make it a monster for local LLM inference. I am praying AMD sells these things in embedded motherboards with a 128GB+ memory config. Especially in an 8-core config, as I'd rather not burn money and TDP on a 16-core version.
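    For a rough sense of why the 256-bit bus is the headline number: LLM token generation is mostly memory-bandwidth-bound, so peak bandwidth divided by model size gives a ceiling on tokens/sec. The 8000 MT/s memory speed below is my own assumption for illustration, not from the leak:

    ```python
    def bandwidth_gb_s(bus_bits: int, mt_s: int) -> float:
        """Peak memory bandwidth: (bus width in bytes) * transfers per second."""
        return bus_bits / 8 * mt_s * 1e6 / 1e9

    bw = bandwidth_gb_s(256, 8000)  # → 256.0 GB/s on a 256-bit bus at 8000 MT/s

    # Decode reads roughly all the weights once per token, so for a ~35 GB
    # model (e.g. 70B at ~4bpw) the ceiling is about bw / 35 ≈ 7 tokens/sec.
    print(bw / 35)
    ```

    That's why a wide bus plus a huge unified memory pool is so appealing: it's the bandwidth, not the core count, that sets the inference speed.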

    36
    2
    https://www.axios.com/2024/06/17/paramount-shari-redstone-skydance

    cross-posted from: https://lemmy.world/post/16629163

    > Supposedly for petty personal reasons:
    >
    > > The woman who controls the company, Shari Redstone, snatched defeat from the jaws of victory last week as she scuttled a planned merger with David Ellison's Skydance Media.
    > >
    > > Redstone had spent six months negotiating a complicated deal that would have given control of Paramount to Ellison and RedBird Capital, only to call it off as it neared the finish line.
    > >
    > > The chief reason for her decision: Her reluctance to let go of a family heirloom she fought very hard to get.

    I cross-posted this from c/Avatar, but I am a Trekkie too and don't like this one bit. FYI previous articles seemed to imply the Sony deal is dead.

    12
    6
    https://www.axios.com/2024/06/17/paramount-shari-redstone-skydance

    Supposedly for petty personal reasons:

    > The woman who controls the company, Shari Redstone, snatched defeat from the jaws of victory last week as she scuttled a planned merger with David Ellison's Skydance Media.
    >
    > Redstone had spent six months negotiating a complicated deal that would have given control of Paramount to Ellison and RedBird Capital, only to call it off as it neared the finish line.
    >
    > The chief reason for her decision: Her reluctance to let go of a family heirloom she fought very hard to get.

    The fandom doesn't want to talk about it, but the Avatar franchise is in trouble.

    16
    5
    deadline.com

    Avatar Studios seems to be part of Paramount Media, aka the "pay television channels" that I assume Sony is not interested in: https://en.wikipedia.org/wiki/Paramount_Global

    And in light of this article: https://deadline.com/2024/05/paramount-sale-hollywood-studio-takeover-history-lessons-1235910245/

    That doesn't look good for Avatar Studios. If they are left behind in a Sony sale, it seems the probability of them getting shut down (or just going down with whatever is left of Paramount) is very high.

    7
    0
    https://www.axios.com/2024/05/06/paramount-sale-reopens-sony-apollo

    The article is a very fast read because it's Axios, but in a nutshell, either:

    - Skydance gets Paramount intact, but possibly with financial trouble and selling some IP.
    - Sony gets Paramount, but restructures the company and also possibly sells some parts.
    - Nothing happens... and Paramount continues its downward spiral, probably accelerated by a failed sale.

    The can of worms opened today, as now Paramount is officially open to a buyout from Sony. I don't like this at all. Avatar is a high-budget IP, animesque fantasy, and not historically, provably profitable like Star Trek/Spongebob. Avatar Studios is a real candidate to be chopped off.

    11
    3

    As the title says. This includes any visual media, including all 7 Books and other stuff. What kind of screen do you watch it on? What sound setup? What source?

    Screen poll: https://strawpoll.com/e6Z28M9aqnN

    Source poll: https://strawpoll.com/Q0ZpRmzaVnM

    I'm asking this because:

    A: I'm curious how this fandom generally consumes the shows
    B: I theorize this may have an impact on the experience. Avatar is an audiovisual feast, and I find I get caught up in the art/music more than many viewers seem to. LoK in particular is like a totally different show with high-bitrate HD vs. a bad stream.

    8
    4