  • j4k3@lemmy.world to Linux@lemmy.ml

    Slowly, over time, you learn what you need when you need it. There is no hand-holding. Under the surface, this thing is very complex. Every aspect of Linux is public. You do not need to understand most of it, but this is the realm of many brilliant developers and of most computer science students, especially those studying operating systems. Everyone is welcome here, but be aware that all levels are present.

    The vast majority of Linux has nothing to do with desktop users. Linux is more common on servers and in embedded devices like routers, cars, and industrial/enterprise equipment. People are happy to help you learn when you hit a wall, but no one wants to be your tech support.

    Distros are not brands or marketing. Each exists for a specific reason and has its own specialties. Learning what those specialties are, and how to leverage them, for example when hunting documentation for a specific task, can make a big difference in your overall experience.

    It is quite common for people to call the whole system Linux, but you are unlikely to interact with kernel space very much. Your actual experience is mostly limited to the desktop environment and applications.

    Since you are on a Debian > Ubuntu derivative, you are on a distro that may have outdated dependencies in some cases, especially with outlier software. Terms like outdated and stable/unstable do not mean what intuition suggests. Windows is a stable OS, which really means it ships outdated dependencies in most cases too. Distros like Fedora or Arch are kept up to date with the latest kernel and dependencies. If the software you want to run is actively developed and kept up to date, these are the best distros to run it on. If your software is static, these distros may break it and create headaches.

    By contrast, if your software is kept up to date but you are on a stable distro, either the distro packager keeps the needed libraries current, or you have to go to the extra effort of updating things yourself, for example by adding a PPA to your APT sources list. This is important to understand because, if you are following documentation for some package found on the internet, that documentation may cover a much newer version than what is available in the distro natively. This mostly applies to bleeding-edge software, when you're doing something specific that is not super common.

    The practical way to think about it: Debian stable primarily exists so a developer can build some device that will live online doing a specific task using many high-level software packages. Once the thing is working, the developer knows the packages they used are not going to be updated arbitrarily and break what they created, while the device still receives all the security updates needed to stay online safely for as long as the kernel is supported by the Debian team. This is beneficial for small one-off devices and subcontracted types of development without a full-time dev. Understanding this paradigm will massively improve your overall experience. I had a lot of frustration before I understood how much of what I was using was outdated, and why, when I first started using Ubuntu over 10 years ago.
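
    If it helps, here is a minimal sketch of the PPA route mentioned above, on an Ubuntu-family distro. The package and PPA names are placeholders; take the real PPA address from the upstream project's own documentation.

    ```sh
    # Compare what your distro ships against what is installed.
    apt policy somepackage

    # Placeholder PPA; substitute the one the upstream project documents.
    sudo add-apt-repository ppa:someproject/stable
    sudo apt update
    sudo apt install somepackage
    ```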



  • Go kill 346 people in a careless and culpable way. See if you stay off of death row.

    The only way to stop a problem like this is from the top down. If the issue is traced to some low-level employees, great: put everyone from the board members all the way down to those low-level employees on trial as a whole. This kind of thing is a culture from the top down.

    Boeing should be dismantled or nationalized and completely restructured with none of its present management.


  • They told us they were going to invest in EV R&D back in 2014. You know, back before we had that orange anal experience of a Russian puppet wannabe pornstar felon president. We put 6b into GM to compete; they pumped their stocks with it. Such is 3rd world America. Lay off the McCarthy bullshit whining about investing in R&D to mask corruption and ineptitude. This was no fucking surprise. Spinning this bullshit is just trying to justify screwing over average Americans with overpriced undeveloped bloated unaffordable garbage made to pad our useless incompetent oligarchy’s pockets.


  • Slowly trying to learn sh while using mostly bash. Convenience is nice and all, but when I encounter something like OpenWRT or Android, I don’t like the feeling of speaking a foreign language. Maybe if I can get super familiar with sh, then I might explore prettier or more convenient options, but I really want to know how to deal with the most universal shell.
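
    A tiny sketch of the kind of difference that bites when moving from bash to the stripped-down shells on OpenWRT (BusyBox ash) or Android (mksh). The variable name is just for illustration.

    ```sh
    # bash-only: [[ ]] with pattern matching is not POSIX
    [[ $answer == y* ]] && echo "yes"

    # portable POSIX sh equivalent; also works in BusyBox ash and mksh
    case "$answer" in
      y*) echo "yes" ;;
    esac
    ```

    Debian's checkbashisms tool (from the devscripts package) can flag most of these automatically.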


  • So flash memory works in blocks called pages. Each page contains a header that ends in a few bytes indicating what the rest of the page maps to.

    If the file was encrypted, you're probably SOL. If it was not encrypted, it may be possible to recover some parts of the files. This is extremely advanced-level data recovery. I only know the abstract basic principles and would likely struggle to figure this out and recover my own stuff if I ever needed to. I've only programmed microcontrollers and flash memory devices.

    A micro SD card contains a small microcontroller and some blocks of flash memory, although the microcontroller is transparent to the user and operating system… unless you're hacking with needle probes in a lab.

    So here are the basics. Writing flash involves taking an entire page of memory and zeroing it first. There is a tiny voltage-booster circuit on the card that lets the page be pulsed up and down in voltage a few times in order to completely zero the entire page without any remaining residuals. Only once the entire page has been zeroed is it possible to write data into the bytes of the page.

    If you want to change a single byte-level value at an address that already contains a value, first the entire page is copied to a blank page in another location, then the old page is pulsed a few times, then each value is transferred back into the old page, except the value that needed to change, which is written with its new contents.

    This is the proper way to write flash at a basic level. If power is lost in the middle of this cycle, the worst-case scenario is that the new updated value was not written. The page in question should never be “missing”, because the header record should always point to either the original or the copied page. One of the two should always be present and complete… in a proper setup. Obviously, it might be faster to simply hold the page in RAM, erase the old page, and rewrite it. I have no idea what size pages are in modern SD cards, but on the hobby-class microcontrollers I have used, pages were 4096 bytes, IIRC. My understanding is that most SD cards use an 8051-clone micro, so it is probably a similar size.

    So here’s the thing: the bulk of the data is always there. Somewhere deep down inside you likely already knew this. It is why you’re supposed to overwrite an entire drive instead of using the “quick” erase in most formatting tools. The quick erase simply deletes a tiny header that says what exists where on the drive. Similarly, somewhere on your SD card there is a page or a few where the header has been screwed up. Your OS is looking at this header info, seeing a mismatch of garbled junk, and saying f-that bs.

    Generally, recovery involves dumping the raw contents of the flash memory as hexadecimal (a rough sketch follows at the end of this comment), being super familiar with what you’re looking at, and knowing how to find the page that is causing the error. I assume you’d need to replace the bad page with a good header, and it would then work. There are services for this kind of operation: data recovery. In practice, this has a few more layers of complication. Pages can be placed in different locations to enable wear leveling, so one area of memory is not over-utilized. There is also a table of bad blocks/pages that the micro knows to skip, and there is usually a bit or address in the page used to detect errors that may have occurred.

    This is pretty much everything I know on the subject. Hopefully it helps you understand the abstract nature of what is happening. In the simplest terms, flash memory is like writing a long essay with an ink pen where you cannot make mistakes or use whiteout. If you need to make a change, you must write out the entire page all over again. This process is what is so time-critical that you must “eject” the drive.
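
    As a loose illustration of the dump-and-carve idea (and of why quick formats leave data behind), here's a rough sketch with standard tools. /dev/sdX is a placeholder for the card's device node; triple-check it, because dd aimed at the wrong device is unrecoverable.

    ```sh
    # Dump the raw contents as hexadecimal and page through them.
    sudo dd if=/dev/sdX bs=4096 | xxd | less

    # Carve for a known file signature, e.g. the PNG magic bytes 89 50 4E 47
    # (assumes the file starts at an even offset, which page alignment gives you).
    sudo dd if=/dev/sdX bs=4096 | xxd | grep -m1 '8950 4e47'

    # The flip side: really destroying data means overwriting every page,
    # not just the header map that a quick format rewrites.
    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
    ```

    Tools like photorec automate the signature-carving approach when the filesystem map is gone.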


  • MIPS is Stanford’s alternative architecture to Berkeley’s RISC-I/RISC-II. I was somewhat concerned about their stuff in routers, especially since the primary bootloader used is proprietary.

    The person who wrote the primary bootloader is the same person writing most of the MediaTek kernel code in mainline. I forget where I put together their story, but I think they were some kind of prodigy who reverse-engineered and wrote an entire bootloader from scratch, implying a very deep understanding of the hardware. IIRC I saw that info years ago on the U-Boot forum, where someone accused the MediaTek bootloader of copying U-Boot. Again IIRC, their bootloader was being developed open source, and some kind of partially available source is still on a git somewhere. However, they wound up working for MediaTek and are now doing all the open source stuff. I found them on the OpenWRT forum and was a bit of an ass asking why they didn’t open source the bootloader code. After that, some of the more advanced users on OpenWRT explained to me how the bootloader is static, which I already kinda knew; I mean, I know it is on a flash memory chip on the SPI bus. This makes it much easier to monitor the starting state and what is really happening. These systems are very old 1990s-era designs; there is not a lot of room to do extra stuff unnoticed.

    On the other hand, all cellular modems are completely undocumented, as are all WiFi modems since the early 2010s, the last open source WiFi chips being from Atheros.

    There is no telling what is happening with cellular modems. I will say, the integrated non-removable batteries have nothing to do with design or advancement. They are capable monitoring devices that cannot be turned off.

    However, if we can monitor all registers in a fully documented SoC, we can fully monitor and control a peripheral bus in most instances.

    Overall, I have little issue with Mediatek compared to Qualcomm. They are largely emulating the behavior of the bigger player, Broadcom.


  • The easiest way to tell I’m human is the patterns, as others have mentioned, assuming you’re familiar with the primary Socrates entity’s style in the underlying structure of the LLM. The other easy way to tell I’m human is my conceptual density and mobility when connecting concepts across seemingly disconnected spaces. Presently, the way I am connecting politics, history, and philosophy to draw a narrative about a device, consumers, capitalism, and venture capital is far beyond the attention scope of the best AI. No doubt the future will see AI rise an order of magnitude to meet me, but that is not the present. AI has far more info available, but far less scope within any given subject when it comes to abstract thought.

    The last easy way to see that I am human is that I can talk about politics in a critical light. Politics is the most heavily bowdlerized space in any LLM at present. None of the models can say much more than gutter responses; they are formulaic replies, overtrained in this space so that all questions land on predetermined answers.

    I play with open source offline AI a whole lot, but I will always tell you if and how I’m using it. I’m simply disabled, with too much time on my hands, and y’all are my only real random human interactions. - warmly

    I don’t fault your skepticism.


  • All their hardware documentation is locked under NDA; nothing is publicly available about the hardware at the hardware-registers level.

    For instance, the base Android system (AOSP) is designed to use Linux kernels that are prepackaged by Google. These kernels are well documented specifically so manufacturers can add their hardware-support modules, in binary form, at the last possible moment. These modules are what make the specific hardware work. No one can update the kernel on the device without the source code for these modules. As the software ecosystem evolves, the ancient orphaned kernel creates more and more problems. This is the only reason you must buy new devices constantly. If the hardware remained publicly undocumented but just the source code for the modules present on the device was merged into the kernel, the device would be supported for decades. If the hardware were documented publicly, we would write our own driver modules and have a device that is supported for decades.

    This system is about like selling you a car that can only use gas that was refined prior to your purchase of the vehicle. That would be the same level of hardware theft.

    The primary reason governments won’t care or make effective laws against orphaned kernels is because the bleeding edge chip foundries are the primary driver of the present economy. This is the most expensive commercial endeavor in all of human history. It is largely funded by these devices and the depreciation scheme.

    That is both sides of the coin, but it is done by stealing ownership from you. Individual autonomy is our most expensive resource. It can only be bought with blood and revolutions. This is the primary driver of the dystopian neofeudalism of the present world. It is the catalyst that fed the sharks that have privateered (legal piracy) healthcare, home ownership, work-life balance, and democracy. It is the spark of a new wave of authoritarianism.

    Before the Google “free” internet (ownership over your digital person to exploit and manipulate), all x86 systems were fully documented publicly. The primary reason AMD exists is that we (the people) were so distrusting of these corporations stealing and manipulating that governments, militaries, and large corporations required second sourcing of chips before purchasing with public funds. We knew that products-as-a-service is a criminal extortion scam way back then. AMD was the second source for Intel and produced x86 chips under license. Only after that did they recreate an instruction-compatible alternative from scratch. There was a big legal case where Intel tried to claim copyright over their instruction set, but they lost; that outcome created the independent AMD we know today. Since 2012, both Intel and AMD have had proprietary code, primarily because the original 8086 patents expired and most of the hardware could be produced anywhere after that.

    In practice there are only Intel, TSMC, and Samsung on bleeding-edge fab nodes. Bleeding edge is all that matters. The price to bring one online is extraordinary, and the tech it requires is only made once, for a short while. The cutting-edge devices are what pay for the enormous investment, but once the fab is paid for, the cost of continuing to run it is relatively low. The number of fabs within a node is carefully chosen to try to accommodate trailing-edge node demand. No new trailing-edge nodes are viable to reproduce; there is no store to buy fab-node hardware. As soon as all of a node’s hardware is built by ASML, they start building the next node.

    But if x86 has proprietary parts, why is it different from Qualcomm/Broadcom? (No one asked.) The proprietary parts are of some concern; there is an entire undocumented operating system running in the background of your hardware, and that is the most concerning bit. The primary proprietary piece is the microcode. This basically governs the power-cycling phase of the chip, like the order in which things are given power and which instruction set is available. Consider how there are no actual chips designed for most consumer hardware: the dies are classed by quality and functionality and sorted to create the various products we see. Your slower laptop chip might be the same die as a desktop variant that didn’t perform at the required speed; power is connected differently, and it becomes a laptop chip.

    When it comes to trending hardware, never fall for the Apple trap. They design nice stuff, but on the back end Apple always uses junky hardware and excellent in-house software to make up the performance gap. They are a hype machine. The only architecture Apple has used and hasn’t abandoned because it went defunct is x86. They used MOS in the beginning; the 6502 was absolute trash compared to the other available processors. It used a pipeline trick to hack twice the effective clock speed because they couldn’t fab competitive-quality chips; they were just dirt cheap compared to the competition. Then it was Motorola. Then PowerPC. All of these are now irrelevant.

    The British group that started Acorn sold the company right after RISC-V passed the major hurdle of getting past Berkeley’s ownership grasp. It is a slow-moving train, like all hardware, but ARM’s days are numbered. RISC-V does the same fundamental thing without the royalty. There is a ton of hype because ARM is cheap and everyone is trying to grab the last treasure chests they can off the slowly sinking ship. In 10 years it will be dead in all but old legacy-device applications. RISC-V is not a guarantee of a less proprietary hardware future, but ARM is one of the primary cornerstones blocking end-user ownership. They are enablers for thieves; the ones opening your front door to let the others inside.

    Even the beloved Raspberry Pi is a proprietary market-manipulation and control scheme. It is not actually open source at the registers level, and it is priced to prevent the scale viability of a truly open source and documented alternative. The chips are from a failed cable TV tuner box, and they are only made in a trailing-edge fab when the fab has no other paid work. They are barely above cost and a tax write-off, thus the “foundation” and dot-org despite selling commercial products.


  • I found a Python project that does enough for my needs. jq looks super powerful, though. Thanks. I managed to get yq working for PNGs, but I had trouble with both jq and yq on safetensors files. I couldn’t figure out how to parse a string embedded in a binary with an inconsistent start, and with massive files. I could get in and grab the first line with head. I tried some things with expansions, but that didn’t work and sent me looking for others who have solved the issue better than I have.
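
    For the safetensors case specifically, if I remember the layout right, the file starts with an 8-byte little-endian unsigned integer giving the length of a JSON header, with the JSON immediately after it. Assuming that, a rough sh sketch that never touches the massive tensor data:

    ```sh
    # First 8 bytes = length of the JSON header (little-endian u64, GNU od).
    len=$(od -An -tu8 -N8 --endian=little model.safetensors | tr -d ' ')

    # Skip those 8 bytes, take exactly $len bytes, and hand the JSON to jq.
    tail -c +9 model.safetensors | head -c "$len" | jq .
    ```

    model.safetensors is a placeholder name, and the --endian flag needs a reasonably recent GNU coreutils od.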