• 0 Posts
  • 6 Comments
Joined 11 months ago
Cake day: August 2nd, 2023


  • MrConfusion@lemmy.world to Greentext@sh.itjust.works · Anon is a physicist · 3 months ago

    Hi. Physicist here. You are absolutely wrong. The mass of an object does not affect the magnitude of the air-resistance force acting on it as it falls. But the acceleration the object ends up with is given by Newton’s second law: net force divided by mass. So a heavy and a light ball with the same shape experience the same air resistance, but the heavy ball will “care less” about that force and thus fall faster.
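
    A rough numeric illustration of that last point (the masses and the 1 N drag force below are made-up numbers, just to show how the same drag force translates into different accelerations):

    ```python
    # Net downward acceleration of a falling ball: a = g - F_drag / m
    g = 9.81         # gravitational acceleration, m/s^2
    F_drag = 1.0     # assumed instantaneous drag force, N (same for both balls)

    for m in (0.2, 2.0):          # light ball vs. heavy ball, kg
        a = g - F_drag / m        # Newton's second law: a = F_net / m
        print(f"m = {m} kg -> a = {a:.2f} m/s^2")

    # m = 0.2 kg -> a = 4.81 m/s^2  (light ball is slowed a lot by the same drag)
    # m = 2.0 kg -> a = 9.31 m/s^2  (heavy ball "cares less" and falls faster)
    ```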


  • Volvo was not “offered half of Norway’s oil”. But there was indeed a large collaboration in the works: Norway would give Volvo cash and the rights to three unprospected regions of the North Sea, and in return would receive 40% of Volvo’s shares.

    The deal was declined by the Volvo general assembly. Even if the assembly had approved it, it would also have needed approval from the Norwegian Parliament afterwards, and it is not at all certain that would have happened.

    Here is one article on the matter. It is a bit confusing, because the main proponent of the deal (Volvo’s CEO at the time) says it would have been worth $85 billion, while the main opponent thinks Volvo made the right call because only one of the three regions had gas and none of them had oil. Both sources are biased, though, so it is hard to know how accurate these statements are.

    https://www.businessinsider.com/sweden-made-85-billion-mistake-2016-6?r=DE&IR=T

    So it’s true there was a major deal in the works that would have traded rights to natural resources for Volvo shares. But it was a much more technical deal than simply “half of the oil for half of Volvo”.


  • MrConfusion@lemmy.world to Microblog Memes@lemmy.world · Or they go to adtech · 4 months ago

    Well, this is simply incorrect. And confidently incorrect at that.

    Vision transformers (ViT) are an important branch of computer vision models that apply transformers to image analysis and detection tasks, and they perform very well. The main idea is the same: by splitting the input image into smaller patches and treating them as tokens, you can apply the same attention mechanism as in NLP transformer models.
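
    A minimal sketch of the patch-tokenization idea, assuming a 224x224 RGB input and 16x16 patches as in the paper (the random projection below stands in for the learned linear embedding of a real ViT):

    ```python
    import numpy as np

    image = np.random.rand(224, 224, 3)      # toy RGB image

    patch = 16                               # 16x16 pixel patches
    n = 224 // patch                         # 14 patches per side -> 196 tokens
    # Split into patches and flatten each one into a vector of 16*16*3 = 768 values.
    patches = image.reshape(n, patch, n, patch, 3).transpose(0, 2, 1, 3, 4)
    tokens = patches.reshape(n * n, patch * patch * 3)     # (196, 768)

    # Project each flattened patch to the embedding dimension (learned in a real ViT,
    # random here). The result is a sequence of "visual words" for the attention layers.
    W = np.random.rand(patch * patch * 3, 768)
    embeddings = tokens @ W                  # (196, 768)
    print(embeddings.shape)
    ```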

    ViT models were introduced in 2020 by Dosovitskiy et al. in the landmark paper “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” (https://arxiv.org/abs/2010.11929), a work that has received almost 30,000 academic citations since its publication.

    So claiming transformers only improve natural language output, and not vision, is straight up wrong. They are widely used in visual analysis, including classification and detection.