• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: October 8th, 2023

  • What you have heard about is a feature called “Recall”, which has not actually rolled out yet and will only be coming to PCs with specific neural processing units. Other Windows users will not be affected (although of course that will change over time as old devices are replaced with new ones).

    Is it possible? Yes, of course it’s possible. You could say that about pretty much any operating system - including Linux distros - if the functionality turns out to be popular.

    However, to be 100% clear, this is functionality that the user can disable (either entirely, or on an app-by-app basis). And data is never sent to the cloud or shared with Microsoft. What’s on the device does not leave the device. It’s also really not in Microsoft’s own interest at all to try taking on that responsibility… How would they know if you paid for an app/game/song or not, even if they wanted to?

    But back to your question: yes, of course it is possible. This type of technology has already been prototyped in different ways (e.g. Apple did work on identifying CSAM on the iPhone, although it was never implemented).

    Yes, Linux gives you a lot more control. If you were to make the switch, I would list a hundred other reasons that are far more compelling than this storm in a teacup.

    That said, there’s absolutely no reason a Linux distro couldn’t also bring the same functionality, if there is consumer appetite for it.

    If you are looking to truly make it “impossible”, you need to air-gap your machine and not connect to the internet anymore.


  • In defence of the author, there is absolutely nothing about the term “AI” that means only “LLM” in an informed context (which is what Wired purports to be). And then the words “machine learning” are literally front and centre in the subtitle.

    I don’t see how anyone could misunderstand this unless it was a deliberate misreading… Or else just not attempting to read it at all…

    (That said, yes, I do hate the fact that product managers now love to talk about how every single feature is “AI” regardless of what it actually is/does)


  • It stems from an old proverb: “there is naught so queer as folk”, essentially meaning “people are strange”. The meaning of “queer” has shifted and narrowed over time to refer to sexuality, but it kept its ties to this idiom, resulting in the TV show “Queer as Folk” and the generic phrase “queer folk”.

    There is nothing especially pretentious or mythical about the word. That may just be your own assumptions/interpretations of it. Far more people have an issue with the word “queer” than with “folk”. If you don’t like it, don’t use it, but you should also aim to shake the stigma from it, as that stigma is not what 99.9% of people mean when they use the word.


  • So, while this is a “general” question, it seems likely that most people will gravitate towards themes of porn and sexual violence when thinking about it. Let me discuss it from that perspective.

    To be clear, I am not an expert, but it is something I have thought a lot about in the context of my field in technology (noting how generative AI can be used to create very graphic images depicting non-consensual activities).

    The short answer: we don’t know for certain. There is an argument that giving people an “outlet” means they can satisfy an urge without endangering themselves in real life. There is also an argument that repeated exposure can dilute/dull the sense of social caution and normalise the fetishised behaviour.

    I am very sympathetic to the former argument where it applies to acts between otherwise informed/consenting individuals. For example, a gay person in a foreign country with anti-gay laws; being able to explore their sexuality through the medium of ‘normal’ gay pornography seems entirely reasonable to me (but might seem disgusting by other cultural standards).

    When it comes to non-consensual acts, I think there is a lot more room for speculation and concern. I would recommend reading this study as an example; it explored dangerous attitudes towards women that were shaped by pornography.

    Some key takeaways:

    1. It’s never as simple as saying “porn caused it”. There are a multitude of factors.
    2. Regardless, there is a seemingly strong anecdotal connection between violent pornography and violent attitudes in real life.
    3. It likely depends heavily on the individual and their own beliefs/perceptions/experiences before this development.

    And a final noteworthy line:

    The view that pornography played a role in their clients’ harmful attitudes and/or behaviours was undisputed; what was harder for them to articulate was the strength of the contribution of pornography, given the complexities of the other contributing factors in their clients’ lives.




  • If you are taking an existing publication and just tweaking details (e.g.: character names, locations, dialogue), that’s not fanfic at all; at best that’s an adaptation. If you’re creating a parody (and provide proper citations/attributions to the originating work) it may be fair use. More likely, it’s still considered plagiarism if the concepts, structure and inspiration are recognisably present but you do not have the author’s permission.

    There is no exact percentage for plagiarism, and that is by design in most countries’ legal systems. It is about concepts and ideas, and whether a “reasonable person” could make the connection.

    Proper fanfic is where you take existing characters and locations, but put them into an entirely new story / scene / context that never happened in the original work, so the result is considered “original” in that sense.


  • Funding/resourcing is obviously challenging, but I think there are things that can support it:

    1. State it publicly as a proud position. Other platforms are too eager to promote “free speech” at all costs, when in fact they are private companies that can impose whatever rules they want. Stating a firm position doesn’t cost anything at all, whilst also playing a role in attracting a certain kind of user and giving them confidence to report things that are dodgy.

    2. Leverage AI. LLMs and other types of AI tools can be used to detect bots and deepfakes and to apply sentiment analysis to written posts. Obviously it’s not perfect and will require human oversight, but it can be an enormous help in surfacing things for staff that they might otherwise miss (there’s a rough sketch of what that triage could look like at the end of this comment).

    3. Punish offenders. Acknowledging the complexities of enforcing this consistently, there are still things you can do to remove the most egregious bad actors from the platform and send a signal to others.

    4. Price it in. If you know that you need humans to enforce the rules, then build it into your advertising fees (or other revenue streams) and sell it as a feature (e.g.: companies pay extra so they don’t have to worry about reputational damage when their product appears next to racists etc). The workforce you need isn’t that large compared to the revenue these platforms can potentially generate.

    I don’t mean to suggest it’s easy or failsafe. But it’s what I would do.
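
    On point 2, here is a very rough sketch of what that kind of automated triage could look like. It’s a hedged example, not a recipe: it assumes Python and the Hugging Face transformers library, the model choice, the 0.95 threshold and the triage() helper are all illustrative placeholders, and a real system would use purpose-built toxicity and bot-detection models with humans making the final call.

    ```python
    # Sketch: flag posts an off-the-shelf classifier is highly confident are negative,
    # so human moderators can look at them first. Model, threshold and helper names
    # are illustrative assumptions, not a production design.
    from transformers import pipeline

    # Default sentiment model; a real deployment would swap in a toxicity/abuse model.
    classifier = pipeline("sentiment-analysis")

    def triage(posts, threshold=0.95):
        """Return (post, score) pairs the model thinks are strongly negative."""
        flagged = []
        for post in posts:
            result = classifier(post[:512])[0]  # truncate long posts to keep it cheap
            if result["label"] == "NEGATIVE" and result["score"] >= threshold:
                flagged.append((post, result["score"]))
        return flagged

    if __name__ == "__main__":
        sample = [
            "Thanks for the thoughtful write-up, this helped a lot!",
            "You people are subhuman and should be banned from existing.",
        ]
        for post, score in triage(sample):
            print(f"[{score:.2f}] needs human review: {post}")
    ```

    The point of a setup like this is prioritisation, not automated punishment: nothing gets removed by the model alone, it just pushes the most likely problems to the top of a human moderator’s queue.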


  • For anyone who’s willing to spend ~15 mins on this, I’d encourage you to play Techdirt’s simulator game Trust & Safety Tycoon.

    While it’s hardly comprehensive, it’s a fun way of thinking about the balance between needing to remain profitable/solvent whilst also choosing what social values to promote.

    It’s really easy to say “they should do [x]”, but sometimes that’s not what your investors want, or it takes a toll in other ways.

    Personally, I want to see more action on disinformation. In my mind, that is the single biggest vulnerability that can be exploited with almost no repercussions, and the world is facing some important public decisions (e.g. elections). I don’t pretend to know the specific solution, but it’s an area that needs way more investment and recognition than it currently gets.


  • windows does not have any built in way to take screenshots with the mouse cursor

    Whilst this comment isn’t really related to the popup itself, why couldn’t you use the native screenshot capability (e.g. Snipping Tool)? It’s entirely navigable by mouse cursor if you want, and available to every Win10/11 user. I’m not sure what other type of problem / limitation you’re trying to describe here…




  • That’s an appealing ‘conspiracy’ angle, and I understand why it might seem juicy and tantalising to onlookers, but that idea doesn’t hold up to any real scrutiny whatsoever.

    Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

    None of that makes any sense whatsoever from a strategic, corporate “planned” perspective. They are all actions of people who are reacting to things in the heat of the moment and are panicking because they don’t know how it will end.