• 0 Posts
  • 95 Comments
Joined 2 years ago
Cake day: June 13th, 2023


  • It sounds like she’s constructed two competing versions of you in her mind—an idealized version that always understands and sympathizes with her, and a second version constructed from all the times you’ve failed to live up to those expectations.

    If you can’t be her idealized version of yourself, you can demonstrate that you’re not the second version, either. Focus on proactively doing things for her when she’s not expecting you to—everything you do that doesn’t match what her mental model of you predicts you’ll do will weaken that model in her head.


  • “The monkey about whose ability to see my ears I’m wondering”.

    Part of the issue is that the thing you’re wondering about needs to be a noun, but the verb “can” doesn’t have an infinitive or gerund form (that is, there’s no purely grammatical way to convert it to a noun, like *“to can” or *“canning”). We generally substitute some form of “to be able to”, but it’s not something our brain does automatically.

    Also, there’s an implied pragmatic context that some of the other comments seem to be overlooking:

    • The speaker is apparently replying to a question asking them to indicate one monkey out of several possibilities

    • The other party is already aware of the speaker’s doubts about a particular monkey’s ear-seeing ability

    • The reason this doubt is being mentioned now is to identify the monkey, not to express the doubt.


    • I don’t think it’s useful for a lot of what it’s being promoted for—its pushers are exploiting the common conception of software as a process whose behavior is rigidly constrained and can be trusted to operate within those constraints, but this isn’t generally true for machine learning.

    • I think it sheds some new light on human brain functioning, but only reproduces a specific aspect of the brain—namely, the salience network (i.e., the part of our brain that builds a predictive model of our environment and alerts us when the unexpected happens). This can be useful for picking up on subtle correlations our conscious brains would miss—but those who think it can be incrementally enhanced into reproducing the entire brain (or even the part of the brain we would properly call consciousness) are mistaken.

    • Building on the above, I think generative models imitate the part of our subconscious that tries to “fill in the blanks” when we see or hear something ambiguous, not the part that deliberately creates meaningful things from scratch. So I don’t think they’re a real threat to the creative professions. I think they should be prevented from generating works that would be considered infringing if they were produced by humans, but not from training on copyrighted works that a human would be permitted to see or hear and be affected by.

    • I think the parties claiming that AI needs to be prevented from falling into “the wrong hands” are themselves the most likely parties to abuse it. I think it’s safest when it’s open, accessible, and unconcentrated.



  • This isn’t great, but it’s what I ended up resorting to for my mom who refused to use any service, browser setting, or saved file:

    • Make a “master” password containing upper-case letters and digits (e.g., M45T3R). Memorize it or write it down.

    • Interleave its characters with those of the domain the password is for (e.g., for google.com: gMo4o5gTl3eR). She can type the master password first, then put the cursor at the start and type each letter of the domain name, pressing the right arrow after each one.

    As long as she remembered the master password, she could reconstruct the others on the fly. A human could still look at the result and figure out the pattern, but at least it protected her from automated tools.
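    For anyone curious, the interleaving scheme above can be sketched in a few lines of Python. This is just a demonstration of the pattern described, not a tool she actually used; the function name `derive_password` is my own:

    ```python
    from itertools import zip_longest

    def derive_password(master: str, domain: str) -> str:
        """Interleave the site name's letters with the master password,
        site letter first, as described above."""
        name = domain.split(".")[0]  # "google.com" -> "google"
        # zip_longest handles site names longer or shorter than the master
        pairs = zip_longest(name, master, fillvalue="")
        return "".join(site_ch + master_ch for site_ch, master_ch in pairs)

    print(derive_password("M45T3R", "google.com"))  # -> gMo4o5gTl3eR
    ```

    Note that because the derived password is a deterministic function of the domain, anyone who recovers one password and spots the pattern can derive the rest — which is the trade-off acknowledged above.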