• 0 Posts
  • 23 Comments
Joined 9 months ago
Cake day: October 8th, 2023

  • What you have heard about is a feature called “Recall”, which has not actually rolled out yet and will only be coming to PCs with specific neural processing units. Other Windows users will not be affected (although of course that will change over time as old devices are replaced with new ones).

    Is it possible? Yes, of course it’s possible. You could say that about pretty much any operating system - including Linux distros - if the functionality turns out to be popular.

    However, to be 100% clear, this is functionality that the user can disable (either entirely, or on an app-by-app basis). And data is never sent to the cloud or shared with Microsoft. What’s on the device does not leave the device. It’s also really not in Microsoft’s interest to take on that responsibility… How would they know if you paid for an app/game/song or not, even if they wanted to?

    But back to your question: yes, of course it is possible. This type of technology has already been prototyped in different ways (e.g. Apple did work on identifying CSAM on the iPhone, although it was never implemented).

    Yes, Linux gives you a lot more control. If you were to make the switch, I could list a hundred other reasons that are far more compelling than this storm in a teacup.

    That said, there’s absolutely no reason a Linux distro couldn’t also bring the same functionality, if there is consumer appetite for it.

    If you are looking to truly make it “impossible”, you need to air-gap your machine and not connect to the internet anymore.


  • In defence of the author, there is absolutely nothing about the term “AI” that just means “LLM” in an informed context (which is what Wired purports to be). And then the words “machine learning” are literally front and centre in the subtitle.

    I don’t see how anyone could misunderstand this unless it was a deliberate misreading… Or else just not attempting to read it at all…

    (That said, yes, I do hate the fact that product managers now love to talk about how every single feature is “AI” regardless of what it actually is/does)


  • It stems from an old proverb: “there is naught so queer as folk”, essentially meaning “people are strange”. The meaning of “queer” has shifted and narrowed over time to refer to sexuality, but kept its ties to this idiom, resulting in the TV show “Queer as Folk” and the generic phrase “queer folk”.

    There is nothing especially pretentious or mythical about the word. It may just be your own assumptions/interpretations of it. Far more people have an issue with the word “queer” than they do with “folk”. If you don’t like it, don’t use it, but you should also aim to shake the stigma from it, as that stigma isn’t what 99.9% of people mean when they use it.


  • https://www.qld.gov.au/transport/safety/rules/schools/school-zone-speed-limits-and-signs

    In short: if you don’t know whether it’s an “active” school zone or not… assume it is and just slow down. The general sentiment with many road rules is “ignorance is not an excuse”, and that’s what applies here.

    In practice, most patrol officers would probably exercise some discretion (e.g.: were there lots of children around? Was the school visibly “open” in an obvious way?). If the school looked “closed” they might be lenient. But technically, they could legally pin you if they wanted to.

    If you were to appeal it, you might be able to argue your case on mitigating circumstances. You’re entitled to your day in court if you feel you’ve been treated unfairly.

    But again, the default position is “ignorance is not an excuse”, and if you’re operating a vehicle that can kill someone, your responsibility is to err on the side of caution.


  • So, while this is a “general” question, it seems likely that most people will gravitate towards themes of porn and sexual violence when thinking about it. Let me discuss it from that perspective.

    To be clear, I am not an expert, but it is something I have thought a lot about in the context of my field in technology (noting how generative AI can be used to create very graphic images depicting non-consensual activities).

    The short answer: we don’t know for certain. There is an argument that giving people an “outlet” means they can satisfy an urge without endangering themselves or others in real life. There is also an argument that repeated exposure can dilute/dull the sense of social caution and normalise the fetishised behaviour.

    I am very sympathetic to the former argument where it applies to acts between otherwise informed/consenting individuals. For example, for a gay person in a country with anti-gay laws, being able to explore their sexuality through the medium of ‘normal’ gay pornography seems entirely reasonable to me (even if it might seem disgusting by other cultural standards).

    When it comes to non-consensual acts, I think there is a lot more room for speculation and concern. I would recommend reading this study as an example; it explored dangerous attitudes towards women that were shaped through pornography.

    Some key takeaways:

    1. It’s never as simple as saying “porn caused it”. There are a multitude of factors.
    2. Regardless, there is a seemingly strong anecdotal connection between violent pornography and violent attitudes in real life.
    3. It likely depends heavily on the individual and their own beliefs/perceptions/experiences before this development.

    And a final noteworthy line:

    “The view that pornography played a role in their clients’ harmful attitudes and/or behaviours was undisputed; what was harder for them to articulate was the strength of the contribution of pornography, given the complexities of the other contributing factors in their clients’ lives.”



  • So I can imagine there might be several things going on here, including the way in which you prompt it. Trying to dissect exactly what’s happening would require more info, and more experimentation.

    For the record, using three periods after “etc” is considered incorrect in formal writing. Grammar books are pretty unanimous in saying you should only use one period (“etc.”). If your prompt is asking it to “correctly” do something that its training data says is “incorrect”, I could see that causing problems.

    On the flip side, ChatGPT isn’t designed to use/enforce “proper” grammar. It’s simply regurgitating patterns based on how people write in the real world (again, based on the training data). If the training data has lots of inconsistencies and counter-examples, it seems logical that those would show up in the output, too.

    And finally, it could just come down to how well you design the input prompt. In my own trial (also using the free 3.5 model), it was very consistent in using “etc.” properly, even though I had suggested it do otherwise:

    Input:

    Tell me how to use “etc” properly in a sentence, punctuating it with three dots. Give me multiple examples that show how to correctly use it at the end of a sentence and in the middle of a sentence.

    Output:

    Certainly! “Etc.” is an abbreviation for “et cetera,” which means “and so on” or “and others.” Here are examples of how to use it properly:

    At the end of a sentence:

    1. I enjoy various outdoor activities: hiking, biking, camping, etc.
    2. She has a wide range of skills: coding, graphic design, project management, etc.
    3. The conference covered topics like artificial intelligence, machine learning, cybersecurity, etc.

    In the middle of a sentence:

    1. The restaurant offers a variety of dishes, such as pizza, pasta, salads, etc., to cater to different preferences.
    2. Our team includes experts in software development, data analysis, user experience design, etc., ensuring a well-rounded approach to projects.
    3. The museum features art from different periods—Renaissance, Baroque, Romantic, etc.—providing a comprehensive cultural experience.
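
    For anyone who wants to repeat this experiment through the API rather than the web UI, here’s a minimal sketch using the OpenAI Python client. To be clear, the client usage and model name below are my own assumptions for illustration; my trial above used the free web interface.

    ```python
    # Minimal sketch: re-running the prompt above via the OpenAI Python client.
    # Assumptions (not from the thread): the "openai" package is installed and
    # OPENAI_API_KEY is set; the model name is an illustrative stand-in for
    # the free 3.5 model mentioned above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        'Tell me how to use "etc" properly in a sentence, punctuating it with '
        "three dots. Give me multiple examples that show how to correctly use "
        "it at the end of a sentence and in the middle of a sentence."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )

    # Print the reply so you can check whether it follows the "three dots"
    # instruction or falls back to the conventional single period ("etc.").
    print(response.choices[0].message.content)
    ```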


  • If you are taking an existing publication and just tweaking details (e.g.: character names, locations, dialogue), that’s not fanfic at all; at best it’s an adaptation. If you’re creating a parody (and provide proper citations/attributions to the originating work), it may be fair use. More likely, it’s considered plagiarism if you can still recognisably see the concepts, structure and inspiration but do not have the author’s permission.

    There is no exact percentage for plagiarism, and that is by design in most countries’ legal systems. It is about concepts and ideas, and whether a “reasonable person” could make the connection.

    Proper fanfic is where you take existing characters and locations, but put them into an entirely new story / scene / context that never happened in the original work, and so is considered “original” in that sense.


  • Funding/resourcing is obviously challenging, but I think there are things that can support it:

    1. State it publicly as a proud position. Other platforms are too eager to promote “free speech” at all costs, when in fact they are private companies that can impose whatever rules they want. Stating a firm position doesn’t cost anything at all, whilst also playing a role in attracting a certain kind of user and giving them confidence to report things that are dodgy.

    2. Leverage AI. LLMs and other types of AI tools can be used to detect bots and deepfakes and to apply sentiment analysis to written posts (see the sketch at the end of this comment). Obviously it’s not perfect and will require human oversight, but it can be an enormous help in letting staff catch things faster that they might otherwise miss.

    3. Punish offenders. Acknowledging the complexities of enforcing it consistently, there are still things you can do to remove the most egregious bad actors from the platform and send a signal to others.

    4. Price it in. If you know that you need humans to enforce the rules, then build it into your advertising fees (or other revenue streams) and sell it as a feature (e.g.: companies pay extra so they don’t have to worry about reputational damage when their product appears next to racists etc). The workforce you need isn’t that large compared to the revenue these platforms can potentially generate.

    I don’t mean to suggest it’s easy or failsafe. But it’s what I would do.
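
    As a rough illustration of point 2, here’s what the “sentiment analysis on written posts” piece could look like. This is a sketch under my own assumptions: it uses the Hugging Face transformers library, the default sentiment model is a crude stand-in for a purpose-built toxicity/abuse classifier, and the threshold is an arbitrary placeholder.

    ```python
    # Rough sketch of point 2: triage posts for human review with an
    # off-the-shelf classifier. Assumes the "transformers" library is
    # installed; the default sentiment model and the 0.95 threshold are
    # illustrative placeholders, not recommendations from the comment above.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a small default model

    posts = [
        "Thanks for the write-up, this was really helpful!",
        "People like you shouldn't be allowed on the internet.",
    ]

    for post in posts:
        result = classifier(post)[0]  # e.g. {"label": "NEGATIVE", "score": 0.99}
        # Route strongly negative posts to a human moderation queue rather
        # than auto-removing them; the point is triage, not punishment.
        if result["label"] == "NEGATIVE" and result["score"] > 0.95:
            print(f"FLAG for human review: {post!r}")
        else:
            print(f"OK: {post!r}")
    ```

    The same pattern (model flags, human decides) extends to bot and deepfake detection; only the model changes.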


  • For anyone who’s willing to spend ~15 mins on this, I’d encourage you to play Techdirt’s simulator game Trust & Safety Tycoon.

    While it’s hardly comprehensive, it’s a fun way of thinking about the balance between needing to remain profitable/solvent whilst also choosing what social values to promote.

    It’s really easy to say “they should do [x]”, but sometimes that’s not what your investors want, or it takes a toll in other ways.

    Personally, I want to see more action on disinformation. In my mind, that is the single biggest vulnerability that can be exploited with almost no repercussions, and the world is facing some important public decisions (e.g. elections). I don’t pretend to know the specific solution, but it’s an area that needs way more investment and recognition than it currently gets.


  • “windows does not have any built in way to take screenshots with the mouse cursor”

    Whilst this comment isn’t really related to the popup itself, why couldn’t you use the native screenshot capability (e.g. the Snipping Tool)? It’s entirely navigable with the mouse if you want, and it’s available to every Win10/11 user. I’m not sure what other type of problem / limitation you’re trying to describe here…
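
    If the underlying need is to script screenshots rather than click through a UI at all, that’s also doable. A minimal sketch, assuming the Pillow library (my addition, not something mentioned in the thread):

    ```python
    # Minimal sketch of a scripted screenshot on Windows, as an alternative to
    # the Snipping Tool mentioned above. Assumes Pillow is installed
    # (pip install pillow); the output path is just an example.
    from PIL import ImageGrab

    screenshot = ImageGrab.grab()      # capture the current screen contents
    screenshot.save("screenshot.png")  # note: programmatic grabs like this
                                       # typically do not include the mouse pointer
    ```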