[C/D/Rust/etc] Enjoyably writing performant, cross-platform native code...?
Mark Oates
Member #1,146
March 2001
This is a good read too, by Mike Acton: I have to admit, it's a bit of a mindshift. I like the direction it goes, though. OOP does have a tendency to factor problems off into different domains, which then need their own set of solutions. What I like about the data-oriented approach is that there's practical, logistical reasoning behind it (speed, cache, the actual system), as opposed to an arbitrary data model whose architecture has to be pontificated over in each attempt to use it (dogma, as Acton calls it).

And that's so true. It's tough as a developer to navigate everybody's dogmas. It's impossible to write code that fulfills everybody's expectations about how good code should be written. At least with this approach there's a practical purpose behind it, rather than an attempt to fulfill somebody else's expectations.

--
bamccaig
Member #7,536
July 2006
It's a challenge to find the time since I'm playing through FFVII again right now, but I will say that I am amused that he is creating slides by writing on paper/sticky notes and photographing them. A part of me imagines this was easier than the software available to do it...

On that note, I've begun learning LaTeX, and I have grand dreams for the future of my own documentation and/or writing.

-- acc.js | al4anim - Allegro 4 Animation library | Allegro 5 VS/NuGet Guide | Allegro.cc Mockup | Allegro.cc <code> Tag | Allegro 4 Timer Example (w/ Semaphores) | Allegro 5 "Winpkg" (MSVC readme) | Bambot | Blog | C++ STL Container Flowchart | Castopulence Software | Check Return Values | Derail? | Is This A Discussion? Flow Chart | Filesystem Hierarchy Standard | Clean Code Talks - Global State and Singletons | How To Use Header Files | GNU/Linux (Debian, Fedora, Gentoo) | rot (rot13, rot47, rotN) | Streaming
Mark Oates
Member #1,146
March 2001
Hey Aaron, how do you support a 3D model in that architecture? Do you have a large VBO and allocate n vertices every time you create an entity? Since models don't have a fixed number of vertices (or even the same vertex format), how do you typically store and manage that?

--
Erin Maus
Member #7,537
July 2006
Rendering is distinct from an ECS. You don't store resources in an ECS, because then you're coupling graphics and data. Most often you'll have a system send some request to your renderer. Your renderer would be responsible for resource management, and modern rendering techniques (outside of lower-level APIs like DirectX 12; I can't speak from experience with them until I can possibly upgrade my GPU and Vulkan is released, which I patiently await) depend on the GPU driver managing resource storage. Even animated model VBOs or particles can be managed entirely using GPU features, thanks to feedback buffers (and specialized render targets), so no CPU -> GPU transfer has to be done when skinning a model or updating particle positions/traits.

Keeping your ECS decoupled from rendering is another important requirement, especially if you want framerate-independent logic and rendering. You can have logic run at 60 FPS and the renderer spit out interpolated 140 FPS graphics, for example. You'd store the previous two logic updates and render some interpolated value between them based on rendering speed. This means rendering is 1-2 frames behind, depending on how you implement it, though.

This also means you can have rendering and logic on two threads to ease performance issues. The rendering thread can be doing work while the logic thread is doing its tasks. Think of it this way: in a normal game loop, you have to split ~16.6 ms between logic and rendering on a single core, assuming a 60 FPS target (goodness forbid you want to permit higher rendering rates, like 140+ FPS). Any stalls on the CPU side when emitting graphics commands limit how much time can be spent performing logic, and vice versa. V-sync waits are often a big one. And unlike multithreaded game logic, separating logic and rendering in this way is much easier and provides a great performance boost if rendering or logic is a bottleneck in some way.

---
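A minimal sketch of the fixed-timestep-plus-interpolation scheme described above, assuming a made-up `State` snapshot, `update` step, and a `render` that blends the two most recent snapshots (the real thing would poll input, exit cleanly, and live on separate threads):

```rust
use std::time::{Duration, Instant};

// One snapshot of logic state; just a position here, for illustration.
#[derive(Clone, Copy)]
struct State {
    x: f32,
}

fn update(state: &State) -> State {
    // Fixed-step logic: move one unit per tick.
    State { x: state.x + 1.0 }
}

fn render(prev: &State, curr: &State, alpha: f32) {
    // Blend the two most recent logic states; the renderer never sees
    // anything newer than the last completed tick.
    let x = prev.x + (curr.x - prev.x) * alpha;
    println!("drawing at x = {:.2}", x);
}

fn main() {
    let tick = Duration::from_micros(16_667); // ~60 Hz logic
    let mut prev = State { x: 0.0 };
    let mut curr = prev;
    let mut last_tick = Instant::now();

    loop {
        // Run as many fixed logic steps as have elapsed since the last tick.
        while last_tick.elapsed() >= tick {
            prev = curr;
            curr = update(&curr);
            last_tick += tick;
        }
        // Render as fast as we like, interpolating between the last two ticks.
        let alpha = last_tick.elapsed().as_secs_f32() / tick.as_secs_f32();
        render(&prev, &curr, alpha);
    }
}
```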
bamccaig
Member #7,536
July 2006
Chris Katko said: This talk opened my eyes:
Sounds legitimate, but I think he's preaching to the wrong group of people. I don't imagine what he's proposing would work well for all software. Certainly for software that needs to be fast (i.e., games), but a lot of software doesn't need to be fast, and being easy to understand and change is more important... I'm skeptical that what he proposes is more understandable or maintainable. It's probably better for reasoning about how it fits into the machine, but business software doesn't really work that way... You can just throw more machine at it if necessary. The programmer's time is always worth more than the machine's.

The games industry is weird in the sense that the client is entirely disconnected from the developer. The developer is forced to make the solution work within a finite set of hardware because there is no relationship with the client to negotiate that. And the games industry would enslave 100 programmers with poor working conditions and insane deadlines to make it fit within the constraints of the system. In the business world, that wouldn't fly, and buying more hardware would probably be more cost effective than engineering every last byte of data in the software...

I'm still open to exploring this technique, but I think you need an intimate understanding of CPU architectures to really do it effectively.

-- acc.js | al4anim - Allegro 4 Animation library | Allegro 5 VS/NuGet Guide | Allegro.cc Mockup | Allegro.cc <code> Tag | Allegro 4 Timer Example (w/ Semaphores) | Allegro 5 "Winpkg" (MSVC readme) | Bambot | Blog | C++ STL Container Flowchart | Castopulence Software | Check Return Values | Derail? | Is This A Discussion? Flow Chart | Filesystem Hierarchy Standard | Clean Code Talks - Global State and Singletons | How To Use Header Files | GNU/Linux (Debian, Fedora, Gentoo) | rot (rot13, rot47, rotN) | Streaming
Peter Hull
Member #1,136
March 2001
Matthew Leverton said: Once in a while bambam says something that isn't 100% ridiculous, and I have to check his IP to make sure his account wasn't hacked.
Matthew! He's doing it again!

[edit] I didn't like the way he went through that guy's code in the post-it presentation. I accept that he's probably forgotten more than I'll ever know about machine-level optimisation, but there's no need to take that tone.
Mark Oates
Member #1,146
March 2001
I was thinking the same thing. Yeah, bam, I totally agree. I'll still work towards programming proficiently using data-oriented design, though. I think that would be a valuable skill in any company that needed to scale.

--
Erin Maus
Member #7,537
July 2006
bamccaig said: Certainly for software that needs to be fast (i.e., games), but a lot of software doesn't need to be fast and being easy to understand and change is more important...

Fast software doesn't mean it's hard to understand or maintain. It also doesn't mean general-purpose software has to be slow or poorly optimized. I mean, Facebook's mobile app has an enormous binary with over 18,000 classes. Do you think that's maintainable? Do you think requiring a high-end phone for what should be a lightweight application is reasonable?

For example, if you ran Windows 3.1 on compatible modern-day hardware, you'd be amazed at how fast it runs. Windows 3.1 wasn't slow because it was poorly optimized; it was slow because of hardware limitations. But there are some software design decisions that will keep modern software sluggish even on hardware from the future.

And for the record, better tools (namely better programming languages) could make writing efficient code easier. Your typical high-level OOP language doesn't necessarily enable writing more maintainable code (often it enables writing crappier code). However, it does have mechanisms that are contrary to how the hardware has worked (at a high level) for decades and how hardware works (at a lower level) currently and for the foreseeable future. Fragmented memory, extensive virtual calls, and unpredictable branching all make it much harder for your code not only to scale on better hardware but also to run well on inferior hardware.

The same goes for existing APIs and so on. While OpenGL and DirectX may be easy to use, they are far from how the hardware actually works. For example, compiling the shaders you upload to the GPU from GLSL source or HLSL binaries is incredibly slow. But did you know shaders have to be recompiled when changing framebuffer targets? There's no indication of this if you simply know OpenGL or DirectX, so you can inadvertently write pretty poor code without even knowing it. Abstractions (at either the language or API level) with bizarre side effects are considered poorly designed. And yes, "running slowly" is a bizarre side effect.

---
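A small illustration of the memory-layout point, with made-up particle data: a flat, contiguous layout that a loop can stream through versus one heap allocation and one virtual call per element.

```rust
// Contiguous plain data: one flat allocation per field, predictable access,
// no per-element indirection or virtual dispatch.
struct Particles {
    x: Vec<f32>,
    y: Vec<f32>,
    vx: Vec<f32>,
    vy: Vec<f32>,
}

impl Particles {
    fn update(&mut self, dt: f32) {
        for i in 0..self.x.len() {
            self.x[i] += self.vx[i] * dt;
            self.y[i] += self.vy[i] * dt;
        }
    }
}

// The "typical OOP" shape: every particle is a separate heap object,
// reached through a pointer and updated through a virtual call.
trait Entity {
    fn update(&mut self, dt: f32);
}

struct Particle {
    x: f32,
    y: f32,
    vx: f32,
    vy: f32,
}

impl Entity for Particle {
    fn update(&mut self, dt: f32) {
        self.x += self.vx * dt;
        self.y += self.vy * dt;
    }
}

fn main() {
    let n = 100_000;
    let mut flat = Particles {
        x: vec![0.0; n],
        y: vec![0.0; n],
        vx: vec![1.0; n],
        vy: vec![1.0; n],
    };
    let mut boxed: Vec<Box<dyn Entity>> = (0..n)
        .map(|_| Box::new(Particle { x: 0.0, y: 0.0, vx: 1.0, vy: 1.0 }) as Box<dyn Entity>)
        .collect();

    flat.update(1.0 / 60.0); // walks four contiguous arrays
    for e in boxed.iter_mut() {
        e.update(1.0 / 60.0); // pointer chase plus virtual call per element
    }
}
```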
Thomas Fjellstrom
Member #476
June 2000
Aaron Bolyard said: For example, if you ran Windows 3.1 on compatible modern-day hardware, you'd be amazed at how fast it runs.

It'd actually be comparatively slow. Modern PC hardware requires A LOT of detection, initialization, and software handling of various events.

--
Erin Maus
Member #7,537
July 2006
Thomas Fjellstrom said: It'd actually be comparatively slow

I don't think it's fair, or surprising, that an ancient operating system that doesn't have drivers (or support) for modern hardware will not take advantage of modern hardware. That wasn't my point. I said Windows 3.1 runs surprisingly fast on modern hardware. This is true. It was slow due to poor hardware (extremely slow HDDs, tiny amounts of RAM necessitating swapping to said slow HDD, just-above-double-digit-megahertz processors). This is also true.

My point was: Windows 3.1 scales very well with hardware. Much better than a lot of the modern crap Big Soft Corps crap out every day. This matters because modern software will potentially always be slow or inefficient, especially as we approach clock-speed limitations (due to heat), which massively shrink the jump in single-core processing power compared to the leap from the 1990s to now. Crysis (the original) will forever run inefficiently, especially compared to later releases of the series and competing games, because of its terrible DirectX 9/10 hybrid API usage hacks. Throw an Nvidia 980 Ti at it and you'll end up with worse graphics and worse performance than a modern Unreal game.

There will come a point where you can't throw hardware at a problem due to physical limitations. An algorithm that only scales with single-threaded processing power can't be sped up by adding cores or auxiliary hardware. Single-purpose programs that eat up most available RAM don't play friendly with multitasking (I'm referring to Chrome). And on and on...

But still, many opine that performance doesn't matter, developer time is more valuable, and on and on. Short-term thinking is a terrible side effect of the consumer culture that's been created, and it affects everything, it seems, from business decisions to programmers' mindsets...

---
SiegeLord
Member #7,827
October 2006
I kind of think that ECS makes sense from both the performance angle and the productivity angle. When I write ECS systems I typically don't go nuts with the cache optimizations; I just use a simple array of all entities, each of which simply contains every possible piece of state (see the sketch below). If performance starts mattering, I can easily optimize it since I've decoupled data from behavior, but even ignoring that, I find that it makes it easy to add new code without modifying some inheritance ziggurat. I think data-driven development makes sense from every perspective.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
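A rough sketch of that "simple array of entities" approach, with made-up component names: every entity carries every possible component (unused ones are None), and a system is just a plain function over the array.

```rust
// Every entity carries every possible component; unused ones are None.
#[derive(Default)]
struct Entity {
    position: Option<(f32, f32)>,
    velocity: Option<(f32, f32)>,
    health: Option<i32>,
}

// A "system" is just a function over the flat array of entities.
fn movement_system(entities: &mut [Entity], dt: f32) {
    for e in entities.iter_mut() {
        if let (Some(pos), Some(vel)) = (e.position.as_mut(), e.velocity) {
            pos.0 += vel.0 * dt;
            pos.1 += vel.1 * dt;
        }
    }
}

fn main() {
    let mut entities = vec![
        Entity { position: Some((0.0, 0.0)), velocity: Some((1.0, 0.0)), ..Default::default() },
        Entity { position: Some((5.0, 5.0)), health: Some(100), ..Default::default() },
    ];

    movement_system(&mut entities, 1.0 / 60.0);

    for e in &entities {
        println!("{:?}", e.position);
    }
}
```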
Thomas Fjellstrom
Member #476
June 2000
Aaron Bolyard said: My point was: Windows 3.1 scales very well with hardware. Much better than a lot of modern crap Big Soft Corps crap out every day.

I know what you're trying to say, but I think it's inaccurate. Windows 3.1 is not scalable. There is a very good reason why Linux isn't capable of running (without a lot of work) on a 286 with 1MB or less of RAM. These days you're lucky if you can get it running in less than 16MB of RAM, let alone 4 or 1. That's BECAUSE it is scalable (up and down) across multiple orders of magnitude of hardware power. Windows 3.1 was written the way it was because it had to be. It would not be possible to properly scale it up to take advantage of modern hardware. There are certain things in the hardware that, in order to take advantage of them, require a complete re-architecting of your system.

I do however agree with your core point. Some software is written with no eye for performance. These days, though, it often doesn't matter. In Chrome's case, it was written for pure speed originally, which is why it uses so much RAM. They sacrificed RAM for speed. Sadly, later updates also made it slower and more bloated, so it isn't quite as nice to use as it was originally. The same thing happened to Firefox.

--
Chris Katko
Member #1,881
January 2002
I'm not really sure what the argument is at this point. Linux could absolutely be adapted to do anything Windows 3.1 could do. Once you rip out all the features that Linux has over Windows 3.1 (Bluetooth, firewall, USB, all that jazz) you'd still end up with a much cleaner system, and a much more advanced one. But that should be a given, because we, as a programming community, have learned so much more than we used to know. All that said, I would say that the bloat in Linux (and Windows) is MUCH higher outside the kernels than in them, except for rare situations.

One of the biggest reasons that software is bloated today is that everything is designed (the libraries, the compilers, and our end-user programs) on top of modern hardware. People only optimize when they see a problem. "Close enough" is the rule of thumb. So people use less efficient means, protocols, algorithms, and solutions because they don't directly see any problems. So, while a developer should use a GREAT machine to minimize productivity losses while developing, the software should always then be tested on an old piece of junk, so that the inefficiencies become very apparent.

Profiling be damned... if a user doesn't see a problem, it's not a problem. And if the profiler doesn't show a problem but the user sees a problem, it is a problem.

-----sig:
Erin Maus
Member #7,537
July 2006
Ugh. I'm not going to continue discussing the merits of writing fast code. I wasn't asking for opinions on whether I should develop an efficient library or use some bloated language with zero interop capabilities. Neither was I asking for a masochistic-but-portable language. And entity-component systems aren't really relevant, either. I don't expect a topic to stay on-topic on Allegro.cc (and I don't think that's necessarily a bad thing). But I'll just outline the direction I'm going in, because I did open this topic for general advice and comments on choosing a direction.

So I installed FreeBSD on my primary computer and am almost done setting it up as a development environment. The gist of why I settled on FreeBSD is that the other options are poor choices for my needs.

I don't like how I can't control Windows at the level I want, mostly theming and customization, but also integrated quirks related to "user friendliness" at the cost of an efficient workflow. It doesn't help that Windows 8.x and 10 are heading in a volatile direction, with integrated ads and unauditable or unverifiable telemetry data. The political climate against encryption and security doesn't sit well with me either, and Windows is closed-source, so it would be very difficult to audit if said FUD results in legislation.

I didn't choose some Linux distribution because of the toxic GPL license. In spirit, I agree copyrights and patents are immoral; they make knowledge and creativity a commodity, which is simply absurd--but forcing someone to release their contributions, and forcing future persons to release their contributions, under this philosophy isn't very wise, because free doesn't pay bills. Providing support for a fee isn't always an option, or at least a livable one, and working for a company that doesn't is more often a dream than a reality. Also, I do hold that Linux distributions are very amateurish (in regards to consistency and documentation) compared to the BSDs or commercial operating systems (yes, this includes Windows and Mac OS X). The die-hard Linux community (which is, from anecdotal experience, a large portion of desktop Linux users, for better or worse) is often pants-on-head spiteful and toxic.

FreeBSD doesn't have the blackbox security issues found in Windows. It doesn't have advertising integrated into the operating system. I have an encrypted filesystem protected with a lengthy passphrase, so I can be comfortable knowing that bypassing said encryption would require technology unavailable to anyone but the NSA (and perhaps not even them--but who knows). Future laws won't suddenly make my installation less secure should I choose to update. I can customize it exceedingly well--I'm even going to work on embedding an OpenGL scene as the desktop so I can have simple animated backgrounds, and I'll theme Xfce accordingly. And because of the consistency and documentation, I'll run into bizarre situations much less often than on Linux distributions. I also won't have to deal with said Linux community.

So shortly I'll begin developing my vector graphics library in Rust, unless I run into unforeseen problems with the language (though I consider this unlikely thus far). Said library will spit out data tuned for various hardware-accelerated rendering targets. My initial target will be hardware supporting at least a forward-compatible OpenGL 3 context.

The library has to scale down to mobile platforms and up to high-end dedicated platforms, and it has to run reasonably well to support real-time situations, such as rendering entire games at favorable speeds (i.e., comparable to whatever said user can run with mid-range graphics settings at their preferred resolution). This can't be done with existing open-source solutions (especially since none of them even compare to the algorithms and techniques I used in the Algae.Canvas proof-of-concept and will use in this new library), and proprietary solutions aren't viable and/or aren't cross-platform.

And I'm choosing Rust over C or C++ because I don't want to be stressed by poor language features. I also don't want to be stressed by poor tools. I'm trying to combat my mental illness, and stress makes it worse. So regardless of whatever other options compared to Rust exist, none seem suitable given my requirements.

So there we go.

edit: Clarified and fixed all the things... Hopefully.

---
Thomas Fjellstrom
Member #476
June 2000
I don't really agree with all of your points, but I'm glad you picked a language and platform, and I wish you luck.

--
Chris Katko
Member #1,881
January 2002
I, for one, am angry that he found something he likes.

-----sig:
Peter Hull
Member #1,136
March 2001
I'm happy that Chris Katko is angry.

Seriously though, for a moment - a note of caution - I don't think that any of the BSDs are 'primary platforms' for Rust. There is definitely work going on to support it on BSD, but you might spend more of your time fighting bugs in Rust than you'd like. Would definitely be interested to hear all about your progress.

[edit] Current builds are all failing for BSD
Erin Maus
Member #7,537
July 2006
FreeBSD has Rust 1.3.0 and Cargo 0.4 in ports, which is a version old by now. However, I just finished compiling Rust 1.4.0 and Cargo 0.6. Checked them out via git and compiled the aforementioned targets. It went smoothly except for one tiny bit where gmake used make when executing child makefiles, but that was an easy fix: run gmake MAKE=gmake. I've tested cargo and rustc on a few open-source Rust projects of varying complexity so far, with no problems building or running.

*edit:* Accidentally posted before I finished writing, ugh.

---
SiegeLord
Member #7,827
October 2006
Woohoo! Hopefully this stint with Rust won't make your other language skills rusty.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
Peter Hull
Member #1,136
March 2001
Interesting article on BSD:
Erin Maus
Member #7,537
July 2006
FreeBSD is pretty awesome indeed. That's where I had a pretty decent idea.

I'm probably going to get a few hardware upgrades over the next few months. I'll need a new SSD, more RAM, and a new GPU... And then I'll redo my setup: install Xen with FreeBSD as the host OS, and install Windows 10 as a guest. Keep my Nvidia GPU (which doesn't support pass-through; only Nvidia's workstation/professional cards do) for FreeBSD stuff, but have an AMD GPU as a pass-through card for Windows 10 Pro. Windows 10 would simply be for gaming and cross-platform testing.

I could dual-boot, but dual-booting is a pain, especially because my FreeBSD volume is encrypted with a 50+ character passphrase. Booting Windows in Xen should be faster than restarting the computer anyway, I'd think.

edit: Oh, it seems FreeBSD doesn't play well with PCI pass-through at the moment. In the future, then!

edit 2: But FreeBSD has bhyve (a BSD alternative to Xen, it seems), which could work...

---