Almael's Prestige-class TM & bits

Sat 17. Jan 2015, 23:53


  • It's about half written. It's been quite a long time since I worked on it, so I have some doubts about finishing it. This project started around the time Enterprise aired; it's the latest of my TM projects.

    The Prestige class was meant to be set around 2410-20, or about the time of the E-F or E-G. Now that Star Trek Online has an E-F, it has overtaken my material, so I'm not really sure about finishing it. It probably still has some stuff not seen in the "wild" yet, or in any novels etc...

    Oh, well.

    TurbolistShaft0.png

    This design is meant to improve support in emergency situations. No matter where someone is stuck inside the tube or where it is damaged, there's always some option to move around. It also offers enough space at the doors for even pregnant women to take shelter from "falling" or moving turbolift cabins.
    Almael



  • I like this turbolift shaft design :)
    If you're not laughing, you're not living ~ Carlos Mencia
    Capt. Kaiser



  • Thank you.
    The TV design had bothered me, so I thought I would do a safer one. Of course, I realize the TV version is meant for drama and heroism. :lol:
    Almael




  • Let's talk a bit about Federation computers. It's one of the major sections, for obvious reasons, and I want to give some insight into the train of thought behind it.

    But before I start the discussion, let's look at some quotes from the ST:TNG TM and a bit of my ranting first, to show the recurring trouble fans have.

    ST:TNG TM, page 49:
    The heart of the main computer system is a set of three redundant main processing cores. Any of these three cores is able to handle the primary operational computing load of the entire vessel.

    Each main core incorporates a series of miniature subspace field generators, which creates a symmetrical (nonpropulsive) field distortion of 3350 millicochranes within the faster-than-light (FTL) core elements.

    Core elements are based on FTL nanoprocessor units arranged into optical transtator clusters of 1,024 segments. In turn, clusters are grouped into processing modules composed of 256 clusters controlled by a bank of sixteen isolinear chips. Each core comprises seven primary and three upper levels, each level containing an average of four modules.

    Memory storage for main core usage is provided by 2,048 dedicated modules of 144 isolinear optical storage chips. Under LCARS software control, these modules provide average dynamic access to memory at 4,600 kiloquads/sec. Total storage capacity of each module is about 630,000 kiloquads, depending on software configuration.


    What I want to rant about is that "The Computers of Star Trek" book and other people wonder whether the 630,000 kiloquads is for one computer core or for one module.
    Like with other issues, if only people actually read the TM and thought it through, they would know the answer, and that it's right there.
    For one, the TM says a core is made up of 10 levels (=decks) with 4 modules each. Each module has 256 clusters (=cabinets), which are controlled by 16 isolinear chips (=interconnect controllers). Each cluster consists of 1,024 segments (FTL nanoprocessor units = nodes). Segments are what we would call SoCs (systems on a chip). As you see, I have translated things for the laymen among us.
    So a module has 262,144 nanoprocessors, and people are wondering whether 294,912 storage chips are for a core or a module?
    And then people wonder whether it's RAM (working memory) or a hard drive (long-term storage). Really, if you have uniform memory technology it doesn't matter, because it can obviously work in all possible modes or functions.
    The exact dedication is just a technicality, an organising issue. Of course the division is necessary, but we have had dynamic memory management since DOS was left behind two decades ago.
    From the looks of it, it's all working memory. Btw, the 1701-D blueprints show dedicated memory banks (long-term storage).
    I really wonder if the authors actually know anything about computers, because it doesn't look like it.
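
    Here is that breakdown as a quick back-of-the-envelope sketch, just using Python as a calculator; the per-level module count of 4 is the TM's stated average, so the per-core totals are approximate:

[code]
# ST:TNG TM core layout, taken at face value
levels_per_core      = 7 + 3   # seven primary + three upper levels
modules_per_level    = 4       # "an average of four modules"
clusters_per_module  = 256
segments_per_cluster = 1024    # FTL nanoprocessor units, i.e. the "SoCs"

nanoprocessors_per_module = clusters_per_module * segments_per_cluster
nanoprocessors_per_core   = nanoprocessors_per_module * modules_per_level * levels_per_core

storage_modules_per_core  = 2048   # dedicated memory modules
chips_per_storage_module  = 144    # isolinear optical storage chips
storage_chips_per_core    = storage_modules_per_core * chips_per_storage_module

print(nanoprocessors_per_module)   # 262144
print(nanoprocessors_per_core)     # 10485760
print(storage_chips_per_core)      # 294912
[/code]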
    Almael



  • Now, let's get into the real discussion.
    How fast are Federation computers, actually? Well, there are no really good canon facts around, and the one given for Voyager is meh.
    As with most engineering, you work backwards to solve the problem. I'm going to use real-world terms and "facts" to stake out the needs or requirements and the limits. The world of computers changes every year, so these facts aren't necessarily absolute; they are subject to change. Needless to say, this section in the Prestige TM is also extensive, but without the discussion or insights laid out here.

    Let's look at memory requirements for a given operation speed.

    Let's start from a novice's perspective...
    Obviously, you would think the computer works best, or smoothly, if the amount of memory on hand matches the amount of data it can process and transfer with its "calculating" power.
    Take a 64-bit Broadwell with 211.2 GFLOPS DP and 35-45 MB of total cache.
    This gives 1689.6 GB of data being transferred around per second. Obviously, the majority of data is cached and processed in the execution and data caches.
    This gives a ratio of about 1/1000 for the cache and 16/1689.6 = 0.00947 for the RAM. With 1000x the amount of RAM for storage I would be happy for 5+ years.
    Edit: oops, I was tempted by an easy number.
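
    For reference, the same back-of-the-envelope in Python (the 211.2 GFLOPS and 16 GB figures are just the example values above):

[code]
# Operand traffic vs. RAM for the Broadwell example above
flops        = 211.2e9               # double-precision operations per second
bytes_per_op = 8                     # 64-bit operands
traffic      = flops * bytes_per_op  # bytes "being transferred around" per second

ram = 16e9                           # 16 GB of main memory

print(traffic / 1e9)                 # 1689.6 GB/s
print(ram / traffic)                 # ~0.00947, the RAM ratio quoted above
[/code]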

    [hr]-----------------------------------------------------[/hr]
    Let's look at supercomputers.
    The K computer:
    8.162 petaflops, with 80,000 2.0 GHz 8-core SPARC64 VIIIfx processors carrying 16 GB each.
    This gives 2^3*10,000*2^37 ~ 2^44 bit of total working memory
    and a ratio of
    2^44 / (8.162*10^15) ~ 0.0022 bit of memory per operation.
    Way too efficient!!!

    Cray XK7:
    theoretical peak performance of 27.1 petaflops;
    18,688 XK7 nodes, each containing an Opteron 6274 CPU with 32 GB of memory and a K20X GPU with 6 GB.
    18,688*2^38 ~ 2^53 bit
    and a ratio of
    2^53 / (27.1*10^15) ~ 0.332 bit of memory per operation.

    Tianhe-2:
    33.86 petaflops.
    Each of the 16,000 nodes has 88 gigabytes of memory.
    This gives 2^4*1,000*2^36*11 ~ 2^54 bit of total working memory for the nodes
    and a ratio of
    2^54 / (33.86*10^15) ~ 0.532 bit of memory per operation per node.
    The total CPU plus coprocessor memory is 1,375 TB,
    giving a ratio of
    1,375*2^43 / (33.86*10^15) ~ 0.357 bit of memory per operation.

    So we get 0.36 and 0.54 as the lower and upper limits for the working memory per operation.
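
    A small sketch that reproduces the ratios above; note it keeps the round-up-to-the-next-power-of-two step used in this post, which is why the Cray and Tianhe-2 node figures come out a bit higher than the exact division would give:

[code]
# Bits of working memory per peak operation for the machines quoted above
from math import ceil, log2

def bits_per_op(total_bits, peak_flops, round_up=True):
    if round_up:                                  # round the total up to a power of two,
        total_bits = 2 ** ceil(log2(total_bits))  # as done in the post
    return total_bits / peak_flops

cray_bits    = 18688 * 2**38           # 18,688 nodes x 32 GB CPU memory
tianhe_bits  = 16000 * 88 * 2**30 * 8  # 16,000 nodes x 88 GB
tianhe_total = 1375 * 2**43            # 1,375 TB incl. coprocessor memory

print(bits_per_op(cray_bits, 27.1e15))                      # ~0.33
print(bits_per_op(tianhe_bits, 33.86e15))                   # ~0.53
print(bits_per_op(tianhe_total, 33.86e15, round_up=False))  # ~0.36
[/code]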
    Almael



  • Let's look at the speed requirements and the amount of data stored.

    Let's look at real-world sensor data processing complexity and storage.
    Signal processing can be quite complex, depending on the type of sensors and other requirements.
    However, whether it's a neural net or any sort of signal generator, they all use the same basic computational operations (mostly Fourier transforms or linear transforms).

    Allen Telescope Array:
    15 PByte is stored over the 5 years, while the max. sensor data rate is 8 GByte/s.
    15*2^50 / (2^33*(365*24*3600)) ~ 0.062 of the data is actually saved long-term.
    Now consider a TR-590 tricorder with 315 sensors scanning only 1 (500 kHz) channel at a time.
    The calculative complexity would be
    5*10^5 * (315*log2(1)*10 + (315*(315+1)/2)*4*80) = 199.61088*10^12 operations per second for one readout.


    Square Kilometre Array:
    50 GB/s and 70 PB/y.
    70*2^50 / (50*2^30*(365*24*3600)) ~ 0.047

    So about 4.7 - 6.3% is stored long-term.
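
    The stored-fraction estimate above as a sketch; as in the post, the archive size is compared against one year of data at the peak instrument rate:

[code]
# Fraction of raw sensor data that ends up stored long-term
YEAR = 365 * 24 * 3600  # seconds

def stored_fraction(archive_bytes, peak_rate_bytes_per_s):
    return archive_bytes / (peak_rate_bytes_per_s * YEAR)

print(stored_fraction(15 * 2**50, 2**33))       # Allen Telescope Array, ~0.062
print(stored_fraction(70 * 2**50, 50 * 2**30))  # Square Kilometre Array, ~0.047
[/code]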


    [hr]-----------------------------------------------------[/hr]
    So let's say a Galaxy-class main computer can do 10^33 (~2^110) instructions per second and is a 256-bit (=2^8) system (=data packet width) (ST:DS9 TM). This would require ~2^108 bit of cache memory, and between ~2^116 and ~2^117 bit of working memory.
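
    Putting the earlier ratios together for that figure (the 10^33 ops/s, the 256-bit width, the 1/1000 cache ratio and the 0.36-0.54 working-memory ratios are all the assumptions from the posts above):

[code]
# Galaxy-class memory sizing from the ratios derived above
from math import log2

ops     = 1e33         # assumed instructions per second
width   = 256          # data packet width in bits
traffic = ops * width  # bits moved per second

cache   = traffic / 1000  # cache ratio from the Broadwell example
work_lo = traffic * 0.36  # lower working-memory ratio
work_hi = traffic * 0.54  # upper working-memory ratio

print(round(log2(cache)))    # ~108
print(round(log2(work_lo)))  # ~116
print(round(log2(work_hi)))  # ~117
[/code]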

    How much sensor data does a Galaxy class need to take care of?
    A navigational sensor package has 9 different sensors; the Galaxy has the main group and the saucer group, and the saucer group is virtually inactive. Then there are a total of 284 sensor pallets with 20 sensor types in 6-pack pallet groups, for a total of 947 sensors. Not impressive yet, compared to a geological survey aircraft with 200-3,000 sensor input streams.

    If each of the 956 sensors scans only 1,000 channels at a time, on a band of just 5 GHz,
    the calculative complexity would be
    5*10^9 * (956*log2(1000)*10 + (956*(956+1)/2)*4*80)
    = 5*10^9 * (95,272.9 + 146,382,720) ~ 732.39*10^15 operations per second (per reading)

    If each of the 956 sensors scans 2,000,000 channels at a time on average, on a band of 100 THz,
    the calculative complexity would be
    10^14 * (956*log2(2000000)*10 + (956*(956+1)/2)*4*80)
    = 10^14 * (200,105.8 + 146,382,720) ~ 14.658*10^21 operations per second (per reading)

    If 10,000 readings are normal and 30,530,000 are required at warp 9.9, and each is a 2^8-bit sample data package generated per sensor per second, then
    this gives
    14.658*10^21 * 30,530,000 ~ 447.5*10^27 operations per second
    and a transfer rate of
    956 * 2*10^6 * 3,053*10^4 * 2^8 ~ 1.49*10^19 ~ 2^64 bit of sensor data/s,
    requiring
    2^64 * 365*24*3600 * 0.063 ~ 3.66*10^25 ~ 2^85 bit of long-term storage each year
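
    The same sensor-load estimate as a sketch; the complexity formula is the one used for the readout figures above, with the warp 9.9 reading rate and the ~6.3% long-term storage fraction as assumed in this post:

[code]
# Galaxy-class sensor load, using the readout-complexity estimate above
from math import log2

def readout_ops(bandwidth_hz, sensors, channels):
    # per-sensor channel term + pairwise correlation term
    return bandwidth_hz * (sensors * log2(channels) * 10
                           + sensors * (sensors + 1) / 2 * 4 * 80)

sensors  = 956
per_read = readout_ops(1e14, sensors, 2e6)  # 100 THz band, 2e6 channels: ~1.47e22 ops
readings = 30_530_000                       # readings per second at warp 9.9
total    = per_read * readings              # operations per second

rate_bits = sensors * 2e6 * readings * 256       # sensor data generated per second
stored    = rate_bits * 365 * 24 * 3600 * 0.063  # ~6.3% kept long-term

print(f"{total:.3e}")      # ~4.475e+29
print(f"{rate_bits:.3e}")  # ~1.494e+19 (~2^64 bit/s)
print(f"{stored:.3e}")     # ~2.97e+25  (~2^85 bit per year)
[/code]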


    [hr]-----------------------------------------------------[/hr]
    We will pass on internal sensors, warp and other systems requiring the computer core's services.

    Now the question is how much is assigned for personal use. For simplicity, let's say 1/10^6 of the computer power, or ~2^90 ops/s, and that most of it is for the holodecks.
    Then the crew's holodeck addiction requires
    37.88*2^90*2^8
    ~2^104 bit of storage for 5 years.

    The minimum speed requirement for the Galaxy is hence 10^29 to 10^30 operations per second.

    So that's about where we start shaping up a ship's computer system.
    Almael



  • Looking back, there was actually a TNG episode from which the computer power can be derived: "The Chase".
    I do remember a similar episode being asked about here
    http://scifi.stackexchange.com/question ... took-hours
    but I'm positive it's a different episode.

    Anyway, Picard wants a simulation of the Milky Way galaxy over 4 billion years to track down the origin of humanoid life.
    This is a typical n-body problem, which happens to have been "solved" two years before the show aired. Coincidence! Yeah, right.
    How many stars do we need to simulate? Frankly, all of them, because on average stars move at about 100 km/s, or roughly 2*10^6 ly in 4 billion years.
    And since we assume the Federation doesn't know much better how to solve it, we will assume a reasonable algorithm with
    O(N log(N)) per time frame, with N being the number of stars.
    With each frame we get a set of patterns from the relics. We need to check the movement pattern of each frame against all the patterns seen before, so we have n sets of patterns to check against an unknown number of known patterns, with n being the number of frames. For simplicity, let's say there are a million patterns to check against and the simulation is fine-grained enough with one frame per year. We also double the n-body part to account for collisions and close encounters.
    2*(200*10^9 * log2(200*10^9) * 4*10^9) + 4*10^9*10^6
    ~ 60.066*10^21 operations

    This is within Voyager's computer core's speed (575*10^21 ops/s), solving it in about 1/10 of a second. Too slow! :P

    Edit:
    Or maybe Crusher is incompetent and programmed it as
    O(N^2);
    in that case it would be 2^5*10^31 = 3.2*10^32 operations.
    It could take a while, depending on how much power is assigned to the task.
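
    The operation-count estimate for that simulation as a sketch (star count, frame count, pattern count and Voyager's core speed as assumed above):

[code]
# Operation count for the "The Chase" galaxy simulation discussed above
from math import log2

stars    = 200e9  # stars in the Milky Way
frames   = 4e9    # one frame per year over 4 billion years
patterns = 1e6    # relic patterns checked per frame

nlogn = 2 * stars * log2(stars) * frames + frames * patterns  # reasonable algorithm
n_sq  = 2 * stars**2 * frames                                 # the O(N^2) version

print(f"{nlogn:.3e}")  # ~6.007e+22 operations
print(nlogn / 575e21)  # ~0.10 s at Voyager's quoted core speed
print(f"{n_sq:.3e}")   # ~3.2e+32 operations
[/code]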
    Almael



  • Thanks for this scientific and mathematical insight, Almael. Interesting stuff, but I have to be honest that I didn't understand everything.
    deif



  • I'm sorry you got confused by all the numbers; that was unavoidable. If you don't understand something, just ask.

    What I did is part of a larger process called "sizing" in engineering, e.g. aircraft sizing.
    First you do studies and analyses to see what kind of requirements must be fulfilled for what you want. From there you draw up specifications, or review and adjust existing ones. Usually you end up with some formulas or graphs to help out later.

    Here, I did a short and "simple" version by drawing up
    - the relations between computer processing power and memory requirements
    - the relationship between the amount of sensor signal data, processing power, and storage requirements; this is the most important part, aside from the holodeck!

    Then I drew up the needs of the ship and the needs of the crew using the above findings and the information from the ST:TNG TM. Sure, a lot is based on assumptions and some of them are arbitrary, but it's quite accurate compared to typical fandom.

    [hr]----------------------------------------[/hr]
    And well, I did hold back a bit on signal processing, and I ignored the different cases. For example, everything I did refers to radio telescopy, which is limited to the 0.000001-100 THz band. You have to imagine that for the other sensors the 100 THz represents their own band, which ranges from 1 PHz to infinity. And as far as real surveys go, 2 million channels is nothing; the SKA will have a 100x larger scanning size in the next decade(s). Sensors are also limited by their refresh time, so they cannot scan faster than 100 THz. Some, like lasers and such, can do faster. So there is a bit of headroom for sci-fi.
    I also ignored "double sampling": since the previous signal voltage is still there when the next reading is performed, two readings must be made, and the difference in voltage is the actual signal. So I simply halved the requirements.

    I didn't go into holodeck simulation requirements because that's obviously more difficult. From a simple standpoint, every atom needs to be simulated with all its dimensions; then the ship could only simulate about a ton of mass. Things could be simplified by reducing the "resolution" to the size of a hair; then a million tons wouldn't be a problem... maybe.

    This is all just to show why I had to come up with faster-than-FTL interconnects for the Prestige project. And of course it's there to give other fans an idea.
    The funny thing is that, as seen in the TNG episode "The Chase", the writers consider scanning a simple thing compared to running a simulation. But I bet 99.99% of the Federation's computer time is used purely for processing sensor data. Even if you don't use the sensors to scan anything, there are still the ship's internal sensors and machinery sensors. The black box of an aircraft, for example, records 10,000 streams of sensor data.
    Almael



  • Almael, you are a phenomenon to me. Seeing what you have written down here... this is very detailed stuff. Wow, very cool! :o
    Kmpr´rak



  • Thanks.
    I don't do this often, though. But I admit, topics with math come easily to me. On the other hand, topics without math usually bring the most insight into other things, often even far-reaching ones.
    Almael
