How is a spectrum supposed to not have a total ordering? To me, saying something is a spectrum always invokes an image of being able to map/represent the property as an interval (bounded or unbounded), which should always give it a total ordering, right?
Thanks for the detailed explanation of the SysRq keys and when and how to use them for unlocking a frozen system :D. Also for the systemctl bit, because I wasn't even sure what to do if I had gotten to a console lol.
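For anyone finding this later: on most distros you can enable the SysRq keys persistently with a sysctl drop-in (a sketch; the filename is arbitrary, and `kernel.sysrq` is a bitmask where 1 enables all functions):

```
# /etc/sysctl.d/90-sysrq.conf  (hypothetical filename)
# kernel.sysrq is a bitmask; 1 enables all SysRq functions.
# Apply without rebooting: sudo sysctl --system
kernel.sysrq = 1
```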
It’s fixed in 6.8.10 and 6.9 if you have the ability to upgrade to those.
Honestly, I don't know how I'd even begin to do that lol, and I'd also maybe rather not start my first week of Linux use by immediately trying to change the kernel version on my own XD (either down or up). I did hear about an issue with RDR2 and kernel 6.8.9 from a Reddit post, which I found through someone writing about problems with the game on its ProtonDB page. But I thought I was fine, as my game worked normally until I encountered the crash, and because the Reddit and ProtonDB posts say it's solved by enabling ReBAR, which (IIRC) I already have.
However, I don't know if that Reddit post's issue is the same as/related to the one you linked. Since the rest of the game and my system seem to be mostly fine, I think I'll either just not play the game or specifically avoid the cutscene when I do (it's in an optional quest, luckily). And then I'll maybe return to it after the updated kernel arrives on Fedora to see whether it solves the crash.
Thanks for the link! I managed to set up SysRq with it, which might have saved me from reinstalling Steam when the crash occurred the second time (see the update in my post).
I feel like this isn't quite fair to math; most of these can apply to school math (when taught in a very bad way), but not even always there, imo.
It's true that math notation generally doesn't give things very descriptive names, but most of the time, depending on where and what you are learning, the symbols chosen for variables/functions do hint at what the object is supposed to be.
E.g.: when working in linear algebra, capital letters (especially A, B, C, D, as well as M) are generally matrices; v, w, u are usually vectors; and V, W are vector spaces. There are also conventions largely independent of the specific math you are doing, like n, m, k usually being integers, i or j being indices, f and g being functions, and x, y, z being unknowns.
Also, math statements should be given comments too. But usually this function is served by the text around the equations or the commentary given alongside them, so it's not a direct part of the symbolic writing itself (unlike comments, which are a direct part of source code). And when a long symbolic expression isn't broken up or given much commentary, that is usually an implicit sign that it should be easy/quick for the reader to understand/derive based on previously learned material.
Finally, there's also the problem of having to manipulate the symbols. In code you just write it once and then the computer deals with it (and it doesn't care how verbose you made a variable name). But in math you are generally expected to work with your symbolic expressions and manipulate them, and it's very cumbersome to keep rewriting multi-letter names every time you manipulate an expression. Additionally, math is still generally worked out on paper first and transferred into a digital/printed format second, so you can't just copy and paste or rely on autocompletion to move long variable names around, like you might when coding.
Where is this from?
I love seeing conspiracy/crank types do anything with math.
I’m still curious as to whether there’s an easier way to do this than simply updating the global position, by script, to match the pose’s.
I’m not sure if this is what you want but maybe you could use the BoneAttachment3D Node?
Wow, thanks for the quick and detailed answer :D. I've used a variation of the autoload script you provided and it works great.
on my machine it corresponds to the window order in the taskbar.
I tried that at first too, but it's not very reliable on my machine, and it seems they can be in any order on my taskbar.
(I actually went back and checked now that I can easily tell which one it is, and it is often, but not always, the last one in my taskbar (out of 3 open instances). ~30–50% of the time it can also be the second to last one, and I recall it even being the first one in the taskbar at least once.)
For anyone who wants more, there is [email protected] and [email protected]
I'm also not quite sure how it works yet, but at least the first part is correct, I think. The full link worked for you because it's to the instance your account is on. When I use that link (on the desktop website) I get redirected to that site, but I don't have an account there, so I can't interact with it on this account. Similarly: if I link https://lemmy.blahaj.zone/c/[email protected] it will work for me without problems, but you should see a website where you aren't logged in (at least using the website; mobile apps might handle it differently, I think).
(Although I have no idea why the exclamation-mark link didn't work for you; it did work for me at least. Maybe it's the app you are using? I remember that, for example, some old Jerboa versions had problems with the exclamation-mark links, where the app would just crash when you tried to use them.)
Ok, so it seems like they don't commute? I asked the question in part because I wanted to do something like:

```gdscript
const base_transform : Transform3D = <some transform>

func get_base_transform(node : Node3D) -> Transform3D:
	return node.transform * base_transform

func set_base_transform(node : Node3D, transform : Transform3D) -> void:
	# set node.transform so that get_base_transform() returns `transform`
	node.transform = base_transform.affine_inverse() * transform
```

and I wanted to be sure that if I do `set_base_transform(some_node, some_transform)`, I'd be guaranteed to get `get_base_transform(some_node) == some_transform` afterwards.

But when I tried it, the above code did not work out; at least I didn't get the result I expected. But when I flipped it so that `set_base_transform` did `node.transform = transform * base_transform.affine_inverse()` instead, it did work out.
It's still not hard proof though; maybe something else was messed up the first time, or it only looks like it works now and I'll discover the transform still isn't what I wanted it to be. Or they do commute, but only under some constraint (like no scale on any axis or something) and I just happened to fulfill it with all the transforms I used in my test.

So it would still be good to know for sure whether/when `Transform3D`s commute.

EDIT: I accidentally wrote the first line wrong; it said that they do commute. Actually, the experience of it working only after both functions did their multiplications in a compatible order should indicate that they don't commute.
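The non-commuting behavior can be sketched outside Godot with plain matrices (a minimal illustration, not Godot code; 2x2 matrices stand in for `Transform3D`, since transform composition is matrix multiplication):

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two simple transforms: a 90-degree rotation and a shear.
rot = [[0, -1],
       [1,  0]]
shear = [[1, 1],
         [0, 1]]

print(matmul(rot, shear))  # [[0, -1], [1, 1]]
print(matmul(shear, rot))  # [[1, -1], [1, 0]]  -> different: they don't commute

# Inverse of the shear (shearing by -1 undoes shearing by +1):
shear_inv = [[1, -1],
             [0,  1]]

# If get(node) = node * base, then set must do node = transform * base^-1
# (inverse applied on the SAME side base was multiplied on), not base^-1 * transform.
transform = matmul(rot, shear)           # some desired combined transform
node = matmul(transform, shear_inv)      # set_base_transform, compatible order
print(matmul(node, shear) == transform)  # True: round trip works
```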
I can kind of see what you mean. Maybe this more “natural”/less staged picture is less fake looking? (source)
So for me what you want sounds either like magic or like nonsense.
maybe XD.
Ray casting is part of physics processing and therefore needs a physics body. Using a physics body should actually reduce the performance impact, since you can simplify its geometry, making the calculation less expensive than when you use the full mesh geometry with all the details.
Yeah, I know, but my problem was that almost all meshes visible to the player would need a collision to properly place decals on. Additionally, for the decals to look right, they need to be placed close to the mesh, so I need the collision shape to match the mesh very closely, which doesn't allow me to simplify the collision a whole lot.
So effectively I have to duplicate most meshes in the scene as collision objects while barely (or not at all) being able to simplify the collision compared to the mesh. Also, most of those collision objects would have no use besides being intersected by a handful of rays per second to place decals: while most meshes that are visible to the player need bullet holes on them when hit, most also don't need to interact with the bullet/shooting in any other way.
My fear was that Areas or StaticBodies can't be optimized well for something like this, where I have large quantities of them with complex shapes that are rarely used. Ideally I would still have a way to directly intersect the visual meshes, but I can see how that might be performance-intensive even if built into the engine, so it makes sense that it doesn't exist.
So to me, just creating an auto-generated physics body for ray pickability or using the actual mesh geometry sounds like the same thing. But maybe I'm missing something here.
You aren't really missing anything. I'm attempting that solution now, since it seems like the only feasible one, and I'm just crossing my fingers and hoping that my fears about Godot not being able to handle that many areas without performance issues were unfounded. (Although in the current state of my scenes I suspect it is going to be fine either way; I'm just worried it might not scale well if I eventually use larger quantities of more complex meshes.)
Well yeah, physically it's sort of right, but for one (though I'm not sure if this is just a difference of opinion or if it's not clear from the picture), imo the stretched hole does seem abnormally large. But also, if I use any decal more complicated than just this black circle, it would seem off: since the bullet-hole texture would be modeled off a "clean" 90-degree impact, it would look wrong stretched like that.
Here is a simple example that makes it a bit clearer, where I put a white X inside the bullet hole.
Decals that make sense for (near) 90-degree impacts would look bad (or at least very different) when stretched like this.
Oh, I guess this is exactly what I was asking for XD. Although I'm probably not going to try it, since always needing to place the relevant meshes in the same places relative to the collider does seem like a pretty strict requirement, like you said.
You can use the mesh’s AABB to approximate the size so that it always hits the mesh […] But I think the better approach is to fiddle with the decals in a way that the end result is satisfying.
I've been trying this approach for a while now (at least if I understood you correctly), though I'm approximating the size of the decal by finding the location where the raycast for the bullet exits the collider, instead of using a bounding box. I'm currently struggling a bit with sizing and positioning the decal correctly for it to show up properly 😅. But I'm also starting to think this approach isn't going to be satisfying when I do get it to work.
The problem is that I am not sure along which direction to project the decal. If the decal projects along the path of the bullet (which would make the most physical sense, I think), it can end up incredibly stretched when it encounters the mesh at a very shallow angle. E.g.:
So I'm back to the original problem of not being able to determine where/which face of the "visual" mesh the bullet would hit and what its normal is, at least not without making a lot of unnecessary colliders that match it more closely. And if I use the one normal I do have access to (the one from the collider), I cannot guarantee that it is similar enough to the normal of the mesh face the decal will be projected onto to avoid the stretching anyway.
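For a rough sense of how bad the stretching gets: projecting along the bullet path onto a surface whose normal is at an angle to that path scales the decal by about 1/cos(angle) along the slope (my own back-of-the-envelope model, not Godot's actual decal math):

```python
import math

# Rough model (a sketch, not Godot's Decal projection code): a decal
# projected along the bullet's travel direction onto a flat surface is
# stretched by about 1 / cos(angle) along the slope, where `angle` is
# between the bullet direction and the surface normal
# (0 degrees = clean 90-degree impact, no stretching).
def stretch_factor(angle_deg: float) -> float:
    return 1.0 / math.cos(math.radians(angle_deg))

for angle in (0, 45, 75, 85):
    print(f"{angle:2d} deg off the normal -> stretched {stretch_factor(angle):.1f}x")
```

So at the shallow grazing angles in the screenshot, the hole can easily end up several times longer than it is wide.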
If you really want to use different decals for different materials you can put them on different layers and spawn multiple decals on those layers.
Ooh, yeah, that might be an option; I'll keep it in mind when I get there, thanks :D.
I might be wrong, but decals don't really care about collisions, so you can just spawn a single decal regardless of what it hits.
I've already been using a single decal to begin with, and I'm having problems that being able to get ray intersections with meshes could solve. The decals need to be close to the mesh they are supposed to be projected on to look right; otherwise they seem to fade with distance.
For example here:
The bullet is colliding with a triangle-shaped collider that goes over these small stairs, since that is more convenient and simpler for collisions with the player. But because the collision is simplified compared to the stairs' actual mesh, the decal can get placed far away from the mesh and not show up, or only show up in some places.
If I can only get ray intersections with colliders, I'd have to create a much more complex collider that matches the stair mesh much more closely in order to place the decals nearer to the mesh. But creating that many colliders that have no other use besides this one seems like a waste. If I could get ray intersections with meshes directly, I could just use that to determine where the decal should be located.
Yeah, this, although I should maybe clarify that the "bullet" is not an object/scene that has a position; I just perform one raycast to determine where it lands.
Also, it's not just about areas that can't be reached, but very small "detail" things as well. E.g.: I might want to have bullet holes appear on the leaves of a plant, but other than placing the hole on them, the leaves don't need to interact with the bullet or the physics system in any way. So creating an area or physics body just for this purpose seems like overkill.
I guess that's where most of my confusion comes from; to me, saying something is a "spectrum" always evokes something along the lines of
gay <--------------------> straight
(i.e. one-dimensional) with things mapping into this interval. But I guess if you also include more than one axis in your meaning of "spectrum", there wouldn't be as straightforward an ordering for any given "spectrum". Plus, like @[email protected] said, technically even the one-dimensional spectrum can have more than one order, and the "obvious" one is just obvious because we are used to it from another context, not because it's specifically relevant to this situation.
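A minimal sketch of the multi-axis point (illustrative Python; the axes and numbers are made up): with two independent axes, the componentwise comparison is only a partial order, and any total order you pick on top of it is an extra, arbitrary choice:

```python
def leq(p, q):
    """Product (componentwise) order on 2D points: p <= q on BOTH axes."""
    return p[0] <= q[0] and p[1] <= q[1]

a = (0.9, 0.1)  # mostly along the first axis
b = (0.1, 0.9)  # mostly along the second axis

# Neither point is <= the other, so they are incomparable: no total
# ordering falls out of the axes themselves.
print(leq(a, b), leq(b, a))  # False False

# You can still impose total orders, but different, equally "valid"
# choices disagree about which point comes first:
print(sorted([a, b]))                         # lexicographic by first axis
print(sorted([a, b], key=lambda p: p[::-1]))  # lexicographic by second axis
```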