
Code-signing and notarization for Mac

Hi everyone! Have any of you packaged Unreal projects for Mac distribution, either through or outside of the App Store? I have what I think is going to be a problem with a very simple solution, but I've reached the limits of my knowledge with this sort of thing. I'm an exhibit/experiential designer trying to build interactive 3D environments with Unreal Engine. Besides being more technically literate than most designers, I don't have very much development knowledge at all. Blueprints really make a lot possible that I would have never otherwise attempted!
My main development machine runs Windows, and that's gone off without a hitch.
For the Mac build, I've packaged my .app through UE and it's running great. The part that I can't seem to wrap my head around is code-signing and notarizing. I believe I've gone through the process correctly but I still get the Gatekeeper dialog box rejecting my app as from an "unidentified developer" when I run it on a different Mac or download it onto my development Mac and try to run it.
I'm using a "Developer ID Application" certificate installed to Keychain, and ran codesign with that certificate as shown below on every single binary and .dylib file in the package:
codesign --deep -f -v -s "Developer ID Application: My Name (IDCODE)" --entitlements "/entitlements.xml" --options runtime --all-architectures --timestamp "each-individual-file" 
I have then compressed the app into a DMG image and uploaded it for notarization like so:
xcrun altool --notarize-app --primary-bundle-id "com.thebundleID" --file "thearchive.dmg" --username "myappleid" --password "password" 
After many attempts I did eventually get this to return with a success. I then ran
xcrun stapler staple "thearchive.dmg" 
as well as tried to extract the app from the dmg and ran
xcrun stapler staple "theapp.app" 
and despite
spctl -vvv --assess --type exec "theapp.app" 
coming up "accepted" with a "Notarized Developer ID" matching my own, when I transfer the app to another computer it still won't open, showing the same "unidentified developer" message as if I hadn't signed the code at all.
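In case it helps anyone spot my mistake, here is the whole sequence collected into one dry-run script. It only echoes each command (remove the echo wrapper to actually run them), and the identity, bundle ID, and file names are placeholders standing in for my real values:

```shell
#!/bin/sh
# Dry run: the run() wrapper prints each command instead of executing it.
run() { echo "+ $*"; }

APP="MyGame.app"
DMG="MyGame.dmg"
IDENTITY="Developer ID Application: My Name (IDCODE)"

# 1. Sign the bundle (I ran this on every nested binary/.dylib first).
run codesign --force --timestamp --options runtime \
    --entitlements entitlements.xml --sign "$IDENTITY" "$APP"

# 2. Package the app into a DMG and upload it for notarization.
run hdiutil create -volname "MyGame" -srcfolder "$APP" -ov -format UDZO "$DMG"
run xcrun altool --notarize-app --primary-bundle-id "com.thebundleID" \
    --username "myappleid" --password "@keychain:AC_PASSWORD" --file "$DMG"

# 3. After Apple approves, staple the ticket to both artifacts and verify.
run xcrun stapler staple "$DMG"
run xcrun stapler staple "$APP"
run spctl -vvv --assess --type exec "$APP"
```

(Here I'm using altool's `@keychain:AC_PASSWORD` convention, i.e. an app-specific password stored in Keychain, rather than pasting the password into the command.)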
Has anyone here gone through this process and found a way to make it work? Have I missed something? I'm happy to share a download link for you to try launching as well.
Thanks!
submitted by ancienttreestump to unrealengine

Factorio Multi Assembler

What do you want this factory to produce? Yes.
Multi Assembler in current multiplayer session

tl;dr

I wanted to tinker around with the microcontroller mod, and I "hate" the pre-robotics gameplay when it comes to non-bulk recipes (laser turrets, production buildings, specialized ammo...): handcrafting is slow and automation is tedious. So I engineered a factory design to produce virtually any recipe dynamically.

Demo Video

The production queue can be seen on the right with Q being the number of recipes queued at the moment.
https://streamable.com/ygnvs0

How does it work?

This screenshot provides an overview of the mostly vanilla proof of concept, only the microcontroller mod and the recipe combinator mod are required here.
Subsystem Overview
Resource provider
Source of raw resources (Iron, Wood...)
Multi Assembler
Dynamic assemblers with one microcontroller and two recipe combinators each: one combinator reads the assembler's status, the other sets the recipe delivered by the microcontroller, which in turn gets the recipe from the "wanted recipes" red signal network connecting the different subsystems.
Multi Assembler Microcontroller Code explained
  • See linked factorio forum post
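For those who don't want to click through, here is roughly what each assembler's microcontroller does, sketched as Python pseudocode. All names here are invented for illustration; the real code (and the actual signal sorting) is in the linked forum post:

```python
class Assembler:
    """Stand-in for one assembler plus its two recipe combinators."""
    def __init__(self):
        self.recipe = None

    def read_status(self):
        # Combinator #1: is the assembler currently working on something?
        return "idle" if self.recipe is None else "working"

    def set_recipe(self, recipe):
        # Combinator #2: the microcontroller writes the chosen recipe here.
        self.recipe = recipe

def tick(assembler, wanted):
    """One control cycle.

    `wanted` maps recipe -> signal strength on the red "wanted recipes"
    network; an idle assembler grabs the strongest signal.
    """
    if assembler.read_status() == "idle" and wanted:
        assembler.set_recipe(max(wanted, key=wanted.get))

a = Assembler()
tick(a, {"laser-turret": 45, "stone-wall": 5})
print(a.recipe)  # laser-turret
```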
Possible improvements / features
  • Avoid the "180 tick do while" and react to events instead, e.g. inserter read hand content
  • Invert the sorting logic, removing the "set 2000" part in the code and making the red assembler network semantically more logical: "the higher the signal, the more I want this recipe"
Quirks and remarks
  • A mostly vanilla build as shown in the PoC above is not feasible for larger quantities, but should be possible if combined with techniques like sushi belting and increasing the initial delay of the "do while". This is not covered in the demo map, as I am using the warehouse mod to work around this.
Recipe Logic
Defines which recipes can be produced based on the given resources and the recipes configured in the "production targets" constant combinators. In essence, this subsystem emits a constant signal of "1" to the red multi assembler network for each recipe that a) should and b) can be produced.
At the moment this subsystem is rather basic and can be improved upon (see quirks and remarks).
Recipe Logic Microcontroller Code (TOP) explained
  • See linked factorio forum post
Possible improvements / features
  • Add configurable recipe priorities aka "I want laser turrets before walls, and belts before everything else"
  • Better recipe priorities based on recipe complexity / production targets: "I want 5 assemblers to produce cables needed in bulk for circuits, while I only want one assembler at most producing power armor"
  • Possible solution: calculate the priority based on the distance to the production target. The higher the difference between production target and in-stock items, the lower the signal to the red multi assembler network.
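The distance-to-target idea from the last bullet, sketched in Python (illustration only: recipe names are made up, and I use the inverted convention from the improvements above, where a stronger signal means "more wanted"):

```python
def priorities(production_targets, in_stock):
    """Signal per recipe: the further below its target, the stronger.

    Recipes at or above target emit 0 and drop out of the queue;
    everything else emits its deficit, so the red network naturally
    ranks recipes by how badly they are needed.
    """
    return {
        recipe: max(target - in_stock.get(recipe, 0), 0)
        for recipe, target in production_targets.items()
    }

# Walls are nearly stocked, laser turrets are not:
print(priorities({"laser-turret": 50, "stone-wall": 200},
                 {"laser-turret": 5, "stone-wall": 195}))
# {'laser-turret': 45, 'stone-wall': 5}
```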
Quirks and remarks
  • If intermediate products go missing or cannot be produced (say you manually provide blue circuits, and remove them again after a recipe using blue circuits was added to the production queue), the recipe will be stuck indefinitely in the production queue. To solve this, simply reset the cache combinator of this subsystem.
  • Items with large stack sizes may lead to problems if the steel chest contains less than (number of assemblers * item stack size + 1). That's because the assemblers will "eat up" all the resources of the steel chest, which in turn leads to the system thinking no resources of this type are available, and thus aborting the production.
  • Slow raw resource input or slow intermediate recipe production will lead to a slowly flipping binary state of "I can produce this higher tier recipe" and "I no longer have enough resources for this recipe". Ultimately this is a resource input problem, but it could be handled more gracefully for other queued recipes.
  • Depending on the setup, production targets are not hit exactly because of a production-target evaluation delay when checking whether the recipe should still be produced; in some cases this leads to overproduction.
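The stack-size quirk above boils down to a simple threshold check; a tiny sketch (the assembler count and stack size are just example numbers):

```python
def chest_is_safe(chest_count, assemblers, stack_size):
    # The assemblers can collectively buffer up to assemblers * stack_size
    # items. If the chest holds fewer than that plus one, they can drain
    # it completely, the system reads "no resources", and production of
    # that recipe is aborted.
    return chest_count >= assemblers * stack_size + 1

print(chest_is_safe(800, 8, 100))  # False: 8 assemblers can swallow all 800
print(chest_is_safe(801, 8, 100))  # True
```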
Production Target Constant Combinators
Add the recipes you want the Multi Assembler to produce here. The quantity defines the production target.
Missing Resource Indicator
Will flash red if any resources required to produce a recipe are missing from the steel chest of the multi assembler. The missing resources are shown as positive values in the combinator to the right of the flashing light.
Production Queue Visualizer
Optional component, simply visualizes the amount of the currently queued recipes.

Download & Blueprints

See my post at https://forums.factorio.com/viewtopic.php?f=8&t=85141
I am new to reddit and couldn't figure out a way to post them here without adding way too many lines to this post; maybe someone can enlighten me if there is some kind of "single line code" option?
PS: I am not a native speaker, if you need clarification on some parts feel free to ask.
submitted by heximal2A to factorio

I've compiled a list of all the confirmed FACTS we know currently for all you guys.

- The Sim is highly customizable, with lots of options for the end user such as graphics and performance settings.
- Sloped runways
- Cockpits and aircraft were built using 3D scanning equipment, then hand-corrected.
- Default aircraft are made using blueprints and performance data from the aircraft manufacturers themselves (you can’t get more real than this)
- Offline mode, which doesn't affect the experience on a massive scale (tiny details like farmhouses will be the only things missing)
- Hybrid mode (pre-download or pre-cached)
- Always online mode (all three modes have the exact same physics performance though)
- Aerodynamics (claimed to be better than XPlane’s blade element theory: the aircraft surface is divided into 1000 little surfaces and the physics and aerodynamics are calculated for each surface individually)
- Rainbow created genuinely by atmospheric conditions
- Sids/Stars
- Augmented Bing Maps created "on the fly"
- Runway condition affecting tire friction (friction depends on wet, gravel, concrete etc)
- Big and small jets (not many details shared beyond this)
- Native Flight planning
- Cold and dark mode
- Interactive Checklists
- Click on map and start where you want directly in cruise (drag and drop aircraft position feature like in XPlane)
- They want to implement helis and fighter jets as defaults, either at or after the initial release
- Cloud view distance is around 600km
- Ground view distance is up to the horizon
- Autogen extrapolated from Bing satellite imagery where photogrammetry isn't available.
- Can change the time of day with a slider in sim (lights and weather change dynamically)
- No VR at launch (biggest wtf and heavy protests by the community at the moment)
- No helicopters at launch (second biggest wtf after no VR)
- 60 layers of cloud in real-time weather
- 20 layers of other weather (like windshear and jet streams)
- DirectX 11 based
- No ray-tracing
- Presented to the previewers on September 30 in Seattle using ‘upper but not top-end computers’ equipped with a 2080 Ti and an internet connection tested at 25 Mb.
- Multi-core support is confirmed
- Orange glow on clouds at night around cities due to light pollution.
- Satellite imagery depicted everywhere (up to 3 cm in certain cities where photogrammetry was performed)
- Aerial photography performed on some cities at much higher resolution.
- Photogrammetry for 1000 or so environments, 460 cities, down to 3 cm.
- The graphics engine supports seasons on the fly; the feature is not currently implemented due to priorities, but will be prioritized based on what people request the most.
- VR has been the top request, and seasons have come in second in the entire community.
- They are targeting low-end machines, but the graphics are scalable. They are confident that they can optimize for additional FPS and detail, and the sim is highly configurable.
- SDK will be released with the tech alpha or soon after.
- 3rd party content, including freeware, can be installed outside their marketplace. They are encouraging it. Incredible!
- They did not remove or lock down user modifications (such as text file configs related to the airplanes), and in fact are improving the ability to make them. What was closed, such as binaries, is now open.
- Exposed more variables (allowing for more customizable options for developers and users)
- Dynamic rolling cache, so you don't duplicate downloads. (helps reduce scenery download size)
- Configurable bandwidth allocation.
- Free updates throughout the sim's lifetime.
- Plan to add helicopters and fighter jets in the future, depending on demand from users.
Edit: Multiplayer support is also confirmed (not shared cockpit; that is currently not on their schedule)
Edit: Animals (as seen in the original E3 video)
Edit: Turbulence inside clouds
Edit: Winds will affect sea waves height and direction
(This information is everything we know about the sim as of 30 September 2019 as per what was shared by the Dev team and the visitors from Seattle. Information courtesy of Avsim forums)
Edit 2: Thanks for the gold, silver and Platinum. Just doing my part to contribute to our passionate hobby of Flight simming. Blue Skies.
submitted by rwy27 to flightsim

When Science Found God

I’ve never much cared for religion. I mean, it’s interesting and all, the old parables and philosophic insights from people two millennia removed from the present. I particularly enjoy the books of the Apocrypha, and the Bible’s magnum opus of Revelation, if for nothing else than the interesting stories. Even some of the tenets, like an emphasis on strong family bonds and moral stature, I can resonate with, but in terms of a giant omnipresent entity that created everything yet loves us unconditionally, watching our every move from unseen planes... yeah, I don’t know about that.
I still don’t subscribe to a singular religious doctrine, but knowing what I know now… well, let’s just say the title of atheist would be a little disingenuous. Staking my flag in that camp would contradict all my principles of the scientific method and firsthand observation. Try as I may, I cannot in good faith deny or refute what I myself witnessed. Calling whatever we discovered ‘god’ may prove a bit reductive or inaccurate, but there is no denying it: we found something.
Science has at times become this sort of monolithic and infallible institution. One that suffers from the ostracization of fringe concepts that fail to breach the egotistic blockade. It is all too often wielded as a trump card to negate all that doesn’t assimilate to the prevailing narrative. Too often outlandish claims are torn asunder because no metrics exist to properly digest them.
For all the good it has brought, science is still not and will never be an absolute. Nothing is. Absence of proof, is not proof of absence. And what happened out there, in that lab deep below the frozen streets of Stockholm now stands as a testament in my life, to all the ventures humanity has yet to embark upon. It serves as an anchor, and if ever I find myself drifting away into the blissful seas of cognitive dissonance, it is there to remind me how small and naïve I truly am.
I graduated from UCLA with a Bachelor’s in physics, and an incredible opportunity landed in my lap. One of my professors had put in a good word for me, and I was contacted by a lab out of Stockholm and offered an internship. They were apparently impressed with my thesis, which delved into string theory and its mathematical application to universal processes. I of course accepted the offer without a moment’s hesitation.
From there I uprooted my Californian lifestyle to move halfway around the world to the frigid north of Sweden. I was not prepared for the cold. Most of my summers were spent in a bikini, frolicking on the sandy beaches of Santa Monica and lounging in the sun. Sweden might as well have been another planet. Temperatures would plummet to a bone-chilling negative 30 in the winter. Luckily for me though, I had a marvelous host family who helped me acclimate myself and integrate into Valhalla.
I was brought on to the team and slowly began the arduous process of melding into the group. They were all incredibly kind and welcoming, but still the feeling of being woefully outclassed by my colleagues was thick as tar pitch. The project consisted of over fifty men and women, all of them among the best in their fields. They hailed from Germany, Japan, Poland, Hong Kong, South Korea and many other sovereign states. It was a melting pot of some of the greatest minds the world had to offer. Seeing them in their element, and marveling at the way their minds hurdled asinine topics to delve straight to the cortex of reality, was altogether incredible, and more than a little intimidating.
The expressed goal of the coalition was to study the behaviors of particles and the subatomic realm to further decode the complex world of theoretic energy matrices. By extension, the group also allotted resources to develop tools for observing and decoding quantum entanglement hypotheses and the aforementioned string theory. These principles were still in their infancy at the time, and none of us could have ever imagined the enormous magnitude of the things that were to come.
The lab had its very own particle accelerator, which I pretty much obsessed over from Day 1. Most of the concrete data, however, was relayed from the lab in Geneva, home of the Large Hadron Collider. I even got to see the magnificent machine in person on a few occasions. One thing that has always staggered me is what incredible achievements are possible when the pursuit of knowledge guides the way. However, the complete polar opposite is also true, as curiosity without empathy all too often yields crimes against humanity.
As you may already know, the Large Hadron Collider was the first machine capable of synthesizing the particle known as the Higgs-Boson. The machine is a particle accelerator built in a 27-kilometer loop. It uses a state of perpetual vacuum and temperatures colder than those of outer space to accelerate particles to 99 percent of the speed of light. The particles collide with one another, creating spectacular outbursts of radiation and results theorized to be similar to those of the big bang on a much smaller scale. It is also through this process that the infamous Higgs-Boson can be synthesized.
Some call it the ‘God Particle’, but many physicists are not fond of the omnipotent moniker. It is in a way suitable though, as it is ubiquitous and can spontaneously manifest or dematerialize through processes which are not yet entirely understood. It is a sort of bridge between matter and antimatter. The entity that binds the ethereal with the corporeal. It is the place between light and dark, hard to define, as once light ends shadow begins and vice versa. The exact moment of intersection is difficult to pinpoint, but there is a definitive moment, and that moment is the Higgs-Boson.
It was once thought that matter could only exist in one place at a time, however the particle slit test of our progenitors proved otherwise. A particle accelerator was used to eject protons between one of two microscopic slits. They naturally assumed the protons would pass through either slit A or slit B, and when directly observed their premise was corroborated.
However, when an imprint background was installed to bypass direct observation, they noticed a peculiar detail. The particles produced what is known as a wave, or interference pattern, on the imprint, like ripples in a pond. This meant that the particles were interfering with themselves while simultaneously passing through both and neither of the slits. It was at first thought to be a false negative and an outright impossibility, but thousands of repeated experiments all reached the same conclusion. There was no denying it anymore. Matter can exist in more than one place at a time, and reality is altered simply by perceiving it.
The world of particle physics is a strange one, and one whose majesty we have only just begun to glimpse. At times it may even require us to suspend our own limited human understanding, to contemplate things beyond our minds’ comprehension. It was this idea that was the tabernacle of all the group was trying to achieve: to unravel the mysteries of the subatomic universe, and better understand reality itself.
The group was funded exorbitantly, and state-of-the-art equipment was provided by lavish donors from all around the world. My contemporaries and I began to study the processes again from square one. This consisted primarily of monitoring the nature of protons and testing the same process over and over ad nauseam. Progress was slow, and many failures and errors were soon under our belts, but you can’t build a house without chopping down a few trees.
It took years to decode part of the formula, but eventually we learned that the behavior of these particles could be predicted under certain pretenses. They could also, to a certain extent, be directed. Programmed to inhabit separate locations at the same time, giving them the perceived ability to exist in two places at once. In reality, though, it was more akin to a transfer of locale via microscopic slits in the Higgs-Boson. We realized it was not a matter of travelling to, but of travelling through. Through the fabric of space itself.
With electrical stimuli and coordinate-based geo-synchronization, one could manipulate these particles to transfer locations faster than the blink of an eye. The machine used was primitive compared to later iterations, but its true potential was not lost on us for a moment.
Time went on, and the technique was further refined, most notably in the distance over which particles could be transposed. It started as only a few nanometers, but eventually we could transfer particles several feet.
It was through this process, that blueprints for an entirely new type of machine were first devised. It was to be a machine unlike any before it. Instead of electrical stimuli sent through circuits and wires, it was transferred directly from one location to another. Wireless energy transposed through space. This greatly improved computing capabilities and allowed the machine to act much quicker than anything ever seen before.
Initial appraisals of the machine were skeptical at best, but as time went on the real significance of its potential became apparent. When combined with a suitable processor and digital interface, it soon began decoding encryption and translating mathematical ciphers in a fraction of the time of anything seen before it. It didn’t stop there, though.
With a binary convertor, it wasn’t long before human physiology itself was deciphered and converted into convenient little anagrams and simplistic formulas. This soon gave the machine the ability to replicate human tissue and organs from fetal stem cells. When given raw biomass it could manufacture a duplicate heart or a lung. One which was genetically indistinguishable from that of the donor’s DNA.
On one occasion, the machine even managed to regrow the arm of an amputee war veteran. Most of us thought it couldn’t possibly work, that the nerve endings on the man’s arms would be unable to be resuscitated after so long. But after seventeen hours in surgery, when I saw the vet move his new fingers for the first time after transplant and cell resuscitation, I knew we had discovered something special.
Disease and deformities were also unlocked, able to be observed on a molecular level and eradicated before gestation. A virus or bacterial strain could be genetically reprogrammed to attack and destroy itself rather than the host. HPV, AIDS, the black death, the common cold, strep throat, gonorrhea, none of them stood a snowball’s chance in hell against the unrivaled power of the machine.
It could even reprogram human DNA to desired proportions, eliminating extra chromosomes and restoring neural pathways to reverse entropic cognitive illness like Dementia and Parkinson’s. Even pre-birth conditions like cerebral palsy and microcephaly were in the process of being all but eradicated.
It wasn’t just organic material either. The machine could take a block of carbon and alter its isotopes to create carbon-14 and elicit radioactivity. This proved interesting for further power possibilities, as the machine demonstrated the potential of creating its own fuel source, but there was another, more pertinent discovery. By rearranging the number of protons in the atomic nucleus, the given element’s atomic number was altered, thereby turning it into another element altogether. The machine held the power to change the very building blocks of the universe itself. It could turn copper into gold, bromine into iodine.
I think it was then that we first realized the scope of what it was that we had created. The applications for the machine seemed endless. It could write books, clone living organisms and alter the very elements beneath our feet. It was the philosopher’s stone, the holy grail and the all-seeing eye in one convenient little package. The Deus ex Machina. The world’s very first quantum computer was born.
One important distinction I would like to make, despite prevailing rumors, the quantum computer was not in fact an AI. It had computing power which trumped almost everything else on earth a thousand times over, and the ability to perform almost any task given to it provided the necessary accommodations were implemented. For this reason, it was not allowed to make decisions for itself. Many of my colleagues were justifiably nervous at the prospect of an artificial intelligence somehow gaining sentience and going rampant with the power of quantum manipulation. We really had no idea where our experimentation would lead us, and so the decision was made early on, to prevent it from thinking on its own and going all Skynet on us. The computer was a beast of burden, happily doing any task given to it, but it was us that held the reins.
That was when the bureaucratic troubles first began. A lot of donors for the project, and even a few of my fellow team members, had their own ideas on how best to utilize the machine. Every nation involved wanted it for itself and had its own vision of how best to implement its capabilities.
Several members of the coalition ended up leaving the project or being outright dismissed, promising to return with a battalion of lawyers at their backs. One man was even caught attempting to smuggle data from the lab, and was detained to await prosecution. The reigning project overseer was also relieved of duty. In his place, Dr. Henryk Lundgren assumed the role of director of operations.
Dr. Lundgren is a dear friend, and a brilliant mind. That’s what makes his fate lie so heavily on my heart. It’s a tragedy what befell him, but I won’t act as though he wasn’t responsible for stoking the flames.
Lundgren managed to settle the group down and unite a divided faction of scientists who all held their own agendas. He made the executive decision to keep the computer in the hands of the international team and continue to study it for optimum replication and continued data analysis. All those who didn’t abide were dismissed or removed physically as the need arose.
Lundgren had toiled for years on the development of the machine’s virtual capabilities, and decided it best to invest more heavily in them. It took months of development, but soon a fully-functional Sims-esque program was up and running. It was incredible. The simulation was modeled to be an exact carbon copy of our own world and held all the corresponding pieces within it. All the people, animals and nations. Augmented control apparatuses were then developed to allow us to view the computer’s creation firsthand.
The simulation it created was so visceral, that none could even perceive that they were in a simulation at all. Test subjects were exposed to their own loved ones within the program and could not distinguish them from their real-life counterparts. I even took it for a spin a few times. I was hooked up to the monitor via a neural cortex interface, and had my mind rendered into the simulation.
I awoke to the sights of sunlight peeking through my blinds, and the sounds of cars outside. Around me on the walls were posters of Harry Potter, JoJo and the X-Files, among countless others. I recognized immediately where I was. It was my childhood home, an apartment complex in Sacramento. My parents were both there and acted in accordance with how they would behave in real life. My dad even made new corny jokes in a fashion that suited his personality. It wasn’t a memory, though; it was an entirely new scenario, concocted by my mind and the quantum simulation.
My parents are both deceased in the real world and getting to spend time with them again was… indescribable. Even if they were just simulations, the experience was profoundly cathartic for me. I ended up leaving the simulation in tears, overwhelmed by the experience and the ability to speak with my parents once again. The event was so enlightening for me, it even made dealing with their absence a little easier. After all, I could now speak to them any time I wanted. I found myself never wanting to leave the matrix.
Dr. Lundgren subsequently questioned me about my experience, and I was all too happy to relay the things I had seen. He listened intently, with simple occasional nods and one-word responses. His grey face wore a smile, and cheeks dimpled in delight, but his eyes were far from the present, and worried.
We held a meeting with all staff members sometime after. Lundgren stood and paced in front of the group, silent and lost in thought. When he did finally speak, he held our undivided attention. He walked through all that our little group had managed to accomplish, and all the things we had learned on our journey. All the miracles unraveled and translated into digital coding, and all the advancements made. It was not a triumphant voice however, it was somber, as if none of it truly mattered. He then first proposed his theory.
Here we were, with an entire simulated universe at the tips of our fingers. A digital reality created and maintained by a machine we had built. A simulation which was so authentic, that none could tell it apart from reality itself. And if we had the power to create that, how did we know that our own universe was not the result of the same process? How did we know our reality was not in fact just a simulation?
An unnerving silence befell the rest of the group as Lundgren concluded his epiphany. All in attendance seemed to silently contemplate the idea, with a noticeably nervous aura now lingering. There wasn’t much said after that, but there didn’t need to be. We had an entirely new goal.
Upon returning for work the following day, I immediately noticed that several of our colleagues had abandoned the project without so much as a ‘goodbye’. Only 7 of us remained, among them the prestigious Henryk Lundgren. He was changed, though; his upbeat optimism and inquisitive attitude had deteriorated into those of an impatient, gibbering wreck of a man. He became hostile to prolonged questioning, and I could see the idea gnawing at his mind as he walked the tightrope between madness and genius. At times he appeared on the verge of catatonic psychosis. He would ramble and talk to himself, and pretty much stopped leaving the laboratory altogether.
We set our sights on a new task: to dismantle and test Lundgren’s hypothesis. To develop the ability to break through the boundaries of our suspected simulation and peer beyond our own reality, to glimpse whatever may lie on the other side. Nothing else seemed to matter anymore by that point.
Life may be accidental, consciousness too, hell even complex organisms like human beings the result of genetic evolution and a bit of luck. However, simulation is not accidental. It requires an immense amount of dedication, programming and logistics, not to mention, power and maintenance. The ability to synthesize digital worlds is not something learned or accomplished by accident. It takes time, resources and brainpower to even attempt it, and even then, it’s no guarantee. The one concept that was off the table immediately, was that the theorized simulation was the result of natural phenomenon or random cosmic alignment. If Lundgren’s hypothesis was correct, and our universe was indeed an illusion, then someone or something had to be pulling the strings behind the veil.
Powerful as the quantum computer was, even it did not have the ability to glimpse directly into higher dimensions. As stated before, it took commands only from us, and could only perform tasks which we could coherently articulate to it. We realized rather early that directly viewing outside the boundaries of the universe was likely not possible. The only option was to send a message.
Through painstaking experimentation and dozens of ponderous, sleepless nights, we finally had a breakthrough. Our reality is based on laws. Laws of motion, laws of attraction, laws of physics. These laws cannot be broken accidentally, but with quantum technology, they can be manipulated. Many believe that intelligent extra-terrestrials were first alerted to humanity when the atomic bombs fell on Hiroshima and Nagasaki. Ours was essentially the same idea: demonstrating that we had the capability to tamper with the quantum world in hopes of eliciting a response from a higher being. If we could ‘break’ or ‘bend’ one of these laws of reality, then perhaps the supposed orchestrator would be compelled to respond.
One of the earlier discoveries we had made was that of the concept of reverse time. Time is a measurement of something that occurs, and without anything to observe, time is meaningless. The concept only makes sense in the presence of matter. The two concepts of space and time are coterminous, like light and dark or hot and cold; one does not exist without the other. Where there is space there is time, and where there is time there must be space. The opposite of matter is not nothing, but anti-matter. A true nothingness or void of anything substantial does not exist. It cannot exist based upon the nature of existence itself. Anti-matter is the invisible material which operates unseen and fills all the gaps which matter does not. All of it held together by the Higgs boson.
If an opposite of matter exists, then an opposite of time must as well. Every action has an equal and opposite reaction, and all reactions must remain proportional to force exerted. By utilizing the quantum computer, we had the ability to send protons back in time, sort of. We could make them exist where they once had not before they existed there, by using dark energy matrices and particle superpositioning to make them exist in two places at once.
The discovery had actually been made some time earlier, but never officially tested. It was restricted and marked as unbroachable, as many of our patrons were rightfully concerned by the prospect of unintentionally altering the past. Doing so could create a butterfly effect and wreak havoc upon the present. We were told vehemently that the reverse-time experimentation was forbidden, but now we had a legitimate reason to take interest.
It took some convincing on our end, but eventually we were successful when we promised to unveil the greatest discovery yet. The parameters were set within the computer and the lab was prepped for the operation. A single seed of dianthus caryophyllus was placed in a transparent reinforced container in the center of the room. The specimen was placed on damp resin paper, and several little green tendrils had sprouted from its shell.
The idea was to reverse the symbiotic metabolism of the test subject and cause it to rapidly revert to a zygote state. The seed would be directed to perform its life cycle backwards, thereby contradicting the natural forward flow of life and time.
The parameters were finished, and Lundgren stood by the machine. He glanced at each of us individually, a sullen demeanor and nervous twinkle in his eye. He looked to me last, and I nodded. Lundgren took a deep breath, adjusted his glasses and flipped the switch.
Immediately the tendrils within the seed began to retract. They disappeared within the shell soon after, and the seed shrank to the point that it was no longer visible. The computer alerted us that the task had been completed, and silence descended upon the crew.
We stayed that way for several seconds until a commotion from the computer drew our attention. An array of lights began to flicker and sirens began to wail like banshees, indicating an error of some sort. Suddenly, the seed reappeared and began to grow at an impossible rate. A mass of wriggling green tendrils erupted from the shell and pressed firmly against the case within seconds. It swelled within, and the chamber violently ruptured a moment later, sending shards of glass catapulting throughout the room. I managed to duck away just in time, but others in the group were not so lucky.
One man, Reginald Diabek, was struck with a shard in the neck. The piece cut a gash across his throat, causing a thick crimson to spill forth from his gullet. He collapsed to the ground as others began to rush to his aid. Before we could reach him, the engorged serpentine appendages of the seed ensnared him, slithering around his neck and abdomen. Diabek gurgled and terror filled his eyes as the green pythonic roots began to constrict him.
I watched, at a loss for words as Diabek’s wound sealed. His grey hair turned to a dark brown. The wrinkles on his forehead and bags below his eyes dissolved into his skin in a matter of seconds. The blackheads and liver-spots on his cheeks soon followed suit. All of us watched, stupefied as the process continued onward and Diabek appeared to age backwards.
Diabek had to have been nearly sixty years old, but in a matter of moments he appeared as though he was a young man in his early thirties. He then regressed further: young adult, then teenager, then juvenile. Diabek screamed in terror as his voice cracked from a gruff, raspy tone to a high-pitched pre-pubescent shriek. His body shrank in his clothes and his extremities retracted within his coat. By the time we had reached him, he was gone.
We didn’t have time to gawk, as our stupor was interrupted by the computer blaring a warning siren, and a flickering plethora of lights designated an external problem of some sort. The display was a failsafe designed to protect the computer from malicious outside sources. Most of us thought the firewalls of the quantum computer were enough to prevent any attempted breach, but apparently, we were wrong.
One of my colleagues scrambled to the kill switch. He was poised to throw it when he was halted by a sudden shout from Lundgren. Lundgren stood, eyes wide as dinner plates and mouth agape, as he stared at the main monitor of the computer. The warning display had ceased, and only a single screen remained active. Upon it was displayed a single loading bar, approximately twenty percent filled. This indicated only one thing: something was being downloaded.
We immediately surmised that it must be a virus or other malware of some sort. A prospect once thought impossible based on the security measures of the computer, and yet the download persevered. All attempts made to restrict the download and halt its progress proved futile.
We exchanged nervous glances with one another, torn on whether to pull the plug and save our creation from hostile insurgence or allow it to continue to whatever end. The call was eventually made by the investors outside the room, who had since been notified of the development. They demanded power be cut and the machine be saved. The computer represented a colossal investment, and the cost to repair or replace it if any damage were to ensue was not something taken lightly.
Begrudgingly, Lundgren followed orders and commanded shutdown protocol. It was done straight away, but the machine did not power down. It continued, impossibly, and without a direct power source sustaining it.
Panic began to erupt from the lab, and power to the entire facility was ordered to be cut from the mainframe. It was done within seconds, and the room fell into darkness. The only light that remained was that of the main monitor as the download reached the halfway mark. The computer groaned and whirred under enormous duress as hundreds of fans shot to life to attempt to cool the leviathan machine.
We stood back, unable to make heads or tails of the development. There was simply no possible way the machine should've remained active, and yet it was. It continued to fill up the progress bar, powered by the fuel of some unknown outside source. With no other viable solutions at hand barring physical destruction of the computer itself, we could do nothing but await the culmination.
The download finished several minutes later, and the room fell into pitch black. We deliberated for a moment before deciding our only recourse was to power up the computer once again. The mysterious file weighed in at an impressive 100,000 terabytes, enough to fill hundreds of normal hard drives, but just another drop in the ocean for the quantum computer. Once full functionality was restored, a single never-before-seen prompt filled the screen.
"Unknown file type. Do you wish to execute the file?" All attempts made to bypass the prompt failed. We quickly used a separate program on another screen to trace the file’s origin, but to no avail.
Now, there is no hiding from a quantum computer behind a proxy or VPN. It uses an algorithm-based process combined with ping response speed to determine probable origin up to an accuracy of 99.999%. We’re talking response time measured in millionths of a second, but for a quantum computer, it’s as simple as the ABCs. Sure, it gets it wrong once in every million attempts, but the point is: it always has a guess. This time, however, we received a new message.
“Unable to determine file origin.” Lundgren took a step back, pondered the situation and wiped the beads of glistening sweat from his brow. With nothing else at our disposal, he realized there was only one option left. And so, he gave one last command.
“Open it.”
The computer began to render the file, the process taking several minutes to complete. It was entirely in binary code, and eventually translated to a single message. Upon completion, two words in white font sat silently amidst a black background.
I never thought two simple words could have such lasting effects on my psyche. Those two words that have made me question everything I thought I ever knew. The computer fizzled out moments later and shut down. All of us just kind of left after that.
I returned home, overwhelmed by the events and left with a mystic sense of terror instilled deep in my stomach. The following morning, I was called by one of the investors. He informed me that someone had broken into the lab late the previous night and sabotaged the operation. The lab was lit ablaze and soon reduced to a smoldering pile of ash, and the quantum computer was damaged beyond repair. Whoever had done it possessed a security card and seemed to know the exact process required to dismantle the automatic sprinkler system.
Police held a single suspect in custody: a man who appeared a neurotic mess in the midst of a maniacal nervous breakdown. He was tried and convicted some time later and declared clinically insane. He was committed to a mental health facility in northern Sweden, and it is there that he remains to this day. That man’s name? Henryk Lundgren.
I’ve never been able to properly assess just what it was that happened that day. The event has left me shaken and confused in more ways than I could possibly list. I don’t suppose I’ll ever be whole again, I just can’t be.
I know the truth, the reason for our meager existence. We had reached out far beyond, and something had answered our call. Whether or not it was truly what we would call ‘god’, I cannot possibly say. But I will say, after what I saw happen to Diabek, and what became of Lundgren, I can’t think of a better word for it. I think god is something we never could’ve imagined. It holds us all within the palm of its hand, and with a simple flick of the wrist, we would cease to be. There is no love, there is no salvation, there is only that which lies beyond the margins of reality. That which we have no possible hope of understanding, let alone combating.
One thing is also certain; it is watching us, and it does not want us meddling in that which we have no business seeing. We are set amidst an ocean of infinite black seas, and it was not meant for us to travel far. That final message could not have been clearer, and anytime I find myself drifting, I remember those two simple words relayed by the quantum computer in its last moments of life.
“TURN BACK.”
submitted by zachariusfrost to nosleep

The Problem with Anthem: Part 3

In part 1 of this essay, I outlined 7 game design principles against which I believe games should be measured.
In part 2, I explored Anthem's adherence to these principles and highlighted its successes and failures.
In this final section, I put forward a suite of suggestions to address the failures highlighted in part 2, keeping in mind the principles put forward in part 1.

"Resurrecting this turkey"

If you read part 2, you'll discover a long litany of flaws. What on earth can be done to fix this?

To a certain extent, it depends on the amount of effort you want to expend upon the title. Most of the issues are not surface-level defects; they're core design decisions which are exceptionally detrimental to game-play and require significant effort to correct.

Still, on the presumption you want to correct as much as possible, here's a way forward. And bear in mind, this is a list which focuses on dealing with Anthem's deep flaws. Realistically, these could never all be corrected; it'd be overkill. However, highlighting these flaws and suggesting corrective action can be useful in pointing the way forward for future games.


Technical
The first issue is what appears to be a lack of resource streaming. Anthem's loading times are insane. Given an NVMe SSD can effectively stream 3.5 gigabytes per second into RAM, you could - even if you need to pull resources from multiple places - load data into 16GB of RAM in under 10 seconds. While there's no doubt much of that data will need to be processed, swizzled and uploaded to the graphics card, there is absolutely no justification for Anthem's appalling loading times. Something is wrong here, whether it be the I/O routines or the resource management system. Put simply, this pipeline is not functioning well. It would make a lot of sense to optimize every single aspect of it until it's working properly.
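To put rough numbers on that claim, here's a back-of-the-envelope sketch. The figures are illustrative assumptions, not measurements from Anthem:

```cpp
// Back-of-the-envelope loading budget. Ignores decompression, GPU upload
// and seek overhead, so real times will be higher, but it bounds how long
// raw I/O alone should take. All figures here are illustrative assumptions.
double loadSecondsEstimate(double gigabytesToLoad, double driveGBps) {
    return gigabytesToLoad / driveGBps;
}
// e.g. filling 16 GB of RAM from a 3.5 GB/s NVMe drive:
//   loadSecondsEstimate(16.0, 3.5)  -> roughly 4.6 seconds of raw I/O
```

Even if you double or triple that raw figure to account for processing overhead, it still fits comfortably inside a 10-second budget, which is what makes minute-long load screens so hard to excuse.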

Second, create a resource management system which allows preemptive asset loading and prioritization. Texture management might consider optimizing for visible textures using a "light-cone" style approach where the resource management system uses a visibility solution and knowledge of the player's maximum traversal speed to calculate how far away "in seconds" each texture or texture group is and preemptively loads and unloads them based on need. (Provided you have some kind of reasonable hierarchical scene graph in place and can quickly perform coarse visibility determination.)
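A minimal sketch of that "light-cone" prioritization might look like the following. The struct fields, speeds and horizon thresholds are all assumptions for illustration, not anything from Anthem or a real engine:

```cpp
// "Light-cone" streaming priority sketch: rank each asset group by how
// many seconds away it is at the player's maximum traversal speed, then
// load/evict against a time horizon. All names and thresholds are
// hypothetical, chosen purely for illustration.
struct AssetGroup {
    double distanceMeters;  // coarse distance from the player
    bool   visible;         // result of a coarse visibility query
    bool   resident;        // currently loaded in memory?
};

// Seconds until the player could plausibly need this group.
double secondsAway(const AssetGroup& g, double maxSpeedMps) {
    return g.distanceMeters / maxSpeedMps;
}

enum class StreamAction { Load, Keep, Evict, Ignore };

// e.g. load anything reachable within loadHorizonS seconds, evict
// anything resident that is further than evictHorizonS seconds away.
StreamAction decide(const AssetGroup& g, double maxSpeedMps,
                    double loadHorizonS, double evictHorizonS) {
    const double t = secondsAway(g, maxSpeedMps);
    if (!g.resident)
        return (g.visible || t < loadHorizonS) ? StreamAction::Load
                                               : StreamAction::Ignore;
    return (t > evictHorizonS && !g.visible) ? StreamAction::Evict
                                             : StreamAction::Keep;
}
```

Run per frame (or a few times a second) over coarse groups rather than individual textures, this keeps the working set bounded by what the player can actually reach, which is the whole point of streaming.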

This is the primary technical challenge inherent in creating an open world, so it's mystifying why the development team apparently chose to skip this. Open worlds live and die on their real-time resource management systems. If you can't stream assets dynamically, you just don't have an open world.

Going back to our loop cascade, let's address the failures to adhere to the principles in each loop:

The Traversal Loop
The traversal loop fails on the "choice", "challenge" and "reward" principles. This is because the world architecture is simplistic and the jet-packs devoid of any meaningful restrictions. Introducing challenge into the traversal loop requires a more densely complex world with vastly reduced capabilities for the jet-packs (at least at first). Players need more complex ways of interacting with the environment beyond gazing at it.

What's the reward for keeping your jet-pack cool? You get to keep flying.
What's the penalty for failing to do so? You crash to the ground and have to wait.

Instead of a range of outcomes, you have two. A binary outcome, as it were. No mapping of multiple skill levels to differing outcomes and no real reward or penalty. Traversal carries no risk, contains no reward. It's a milquetoast parody of real game traversal.

As a result, the player is a spectator to the world, not an active participant in it.

You really want to get an idea of how bad this is? Look up some Youtube videos on "Sekiro: Shadows Die Twice". Look at the traversal, how it enables exploration, how it sets up stealth attacks, how it gives the player options when deciding how to navigate through a scenario. Yes, it's a different style of game, but that's not the point. Anthem has none of this, which is why traversal is boring.


So, this is the kind of thing you'd need to do to beef up that traversal loop.

A) The return of fall damage.
No risk, no reward. Fall damage brings risk to the proceedings. Of course this is meaningless without rearchitecting the damage system in general. There are a number of ways this could be approached, but this is another topic addressed further down.

Fall damage allows the world to become more dangerous and provides the players with incentive to look for safe pathways through the world.
Of course to make this meaningful you need...

B) Risk/reward based traversal.
The main problem with Anthem's world is that it's dead. Dead in the sense that it's a picturesque painting which reduces the player to the role of spectator. This elimination of the player's agency is surprisingly consistent. The world has no institutional memory and as a consequence, the player has no lasting impact upon it. The player's jet-pack ruffles the water, but for all his efforts, the world is indifferent to his traversal abilities, his firepower and - most of all - his intent.

Traversal should provide opportunities to explore. To pursue reward while risking much. Dark Souls epitomizes the tension between risk and reward and Anthem would do well to add some of its own. Achieving this without rearchitecting the majority of the world would be practically impossible but Anthem is a crystal clear example of the need for your traversal loop to contain challenge and reward.

Anthem has none and as a consequence has managed to make flying Iron-man style suits boring. Chew over that for a bit.

Classic risk/reward schemes involve the player exploring for rewards and having to take risks in order to chase after the really big ones.

The Jet-pack needs to be significantly nerfed and the player needs to be given the opportunity to cling to the environment and plan their next move. An environment which banishes most of the wide open spaces except for vistas which open up when you strive to reach the high points of the map. High points which require risk and reward the player with stunning views and cool loot. Yes - remember earning loot through exploration and skill-based effort? That.

All of this requires the world to become a lot more dense. Those wide open spaces are supposed to be vistas, not empty areas you traverse by holding down a button. They'd have to go.

C) Choice
In terms of traversal, Anthem willfully deprives the player of options. Get a navigation marker and blast toward it at maximum speed. Even worse, since Anthem is a cooperative shooter which is absolutely obsessed with tethering players to each other, traversal occurs at the speed of the fastest and most impatient player in the group. Those who might want to appreciate the beauty of the world or try something out are unable to do so because Anthem drags them along to the next objective regardless of their wishes. This is yet another in the long list of bewildering design decisions which reflect a complete unfamiliarity with the essentials of good game-play.

Options go hand in hand with risk/reward based traversal, but providing multiple routes to a goal allows the player the opportunity to tailor their approach. This feeds into the scenario loop where the player evaluates the challenge before them and decides how they'll approach it.

Unfortunately, Anthem has no scenario loop, so choosing a route to a target (high/low/underwater) is irrelevant. You land. Shoot. Dodge. Hide behind the environment. How you got there is irrelevant. This is because Anthem doesn't want to be anything other than a looter-shooter, so the option for stealth or tactics is completely absent. Shoot the thing. Trigger combos when you can. Rinse. Repeat. It's about as close to pulling a slot-machine handle as it's possible for a 3d game to get - the only difference is that slot machines give you gratification much more quickly.

Anthem needs to stop forcing players together. The benefit is questionable and casual matchmaking really is a crapshoot. Sure, you can lock other players out of your session, but this isn't the default and the player is penalized for doing so (with lower XP).


The Combat Loop
Traversal plays almost no role in combat, so combat is pretty boring. The limitless possibility of the Javelin suit often needs to be artificially restricted (with no-fly zones) as the designers realize their mistake and try to bring the player back down to earth.

Combat is run and gun with a limited suite of options. There's no opportunity to herd enemies and effectively utilize area-of-effect, no way for players to distinguish themselves with smart play, it's mostly just combo-triggering and a war of attrition between your gun's numbers and the shield/health numbers of the enemies.

Titans are cheesy as hell. Not only can they fling homing fireballs at you, they can materialize them on top of you. This makes Titans tedious to kill, rather than challenging and entertaining.

The environment is practically irrelevant to the combat. It acts as an obstacle and shield, but provides no other possible interactions.

A) Damage - combat and otherwise must persist.
Without persistent damage, the Javelin is a monster which only fails when temporarily overwhelmed. This partitions each combat encounter into a separate event with no lasting implications and the Javelin is essentially immortal outside combat. Consider the possibilities when persistent damage requires the player to reach specific zones and may require resources to repair. All of a sudden, the world of Anthem becomes more dangerous and has far greater potential for risk/reward scenarios to play out.
Consider also a scenario in which the player fights to the top of a mountain through a succession of difficult encounters with damage persistence a factor. Consider further the possibility that the player can lose the valuable items he's carrying if he can't get them back to the fort or to a storehouse.

This would help Anthem with its lack of risk and reward.

B) Bring tactics into combat.
Doing this requires the players to have a more varied suite of abilities, allowing them to consider tradeoffs and develop a Javelin to suit their own personal style. Shoot, melee, combo setup and combo trigger are not an inspiring suite of options.

C) Bring the environment into combat
Part of the problem here is that environmental interaction is minimal. Given the opportunity to manipulate the environment, players would gain a far wider suite of tactical options, increasing their ability to use the environment against enemies.

EG: Diverting water, tipping rocks, creating pits or utilising the wind.

D) Create unique, interesting and challenging enemies.
Anthem's enemies are boring and vary between irrelevant fodder and cheesy bosses. The giant spider is the most interesting enemy to fight and this was in the demo. That this represents the high point of the game rather than an indicator of the game's quality is a savage indictment of the combat encounter design.


E) Allow the player to employ high-risk/high-reward strategies.
One of the key aspects of Dark Souls style games is that the reward justifies the risk. Boss fights result in considerable rewards and the fight itself is often an exercise in choosing between small, safe incremental damage and high-risk/high-reward strategies which offer the lure of closing the fight out quickly.

Balancing risk vs reward is another aspect of player choice - and thus personalization. Anthem's narrow range of combat expression limits the possibility for such strategies, but redesigning the enemies and opening up the player's capabilities would enable this kind of tactical choice on a moment by moment basis.

EG: Do I try a risky, high-damaging move and shut an enemy down before he can trigger reinforcements or do I find a good defensive position and chip away at health until everyone - including reinforcements - are dead? (Note that this kind of consideration is not an option in Anthem).

F) Increase the player's range of expression in combat
One thing about Diablo 3 - the player has a plethora of options in terms of how he'll build his character and optimize the use of high-level loot to cope with the challenges of significantly tougher encounters.

Anthem needs to allow the player to do more than shoot and trigger combos. For example - and really just off the top of my head - consider the following possibilities:

- Slow time/stasis
- Area effect
- Mind control
- Cloak
- Stealth/Backstab
- Environmental destruction
- Artillery strike
- Decoys

What's important to realize is that these options must be exercised against challenging enemies. Anthem has too much useless fodder whose only purpose is to die and drop armor and ammunition. (Speaking of which - the ammunition inventory mechanic is the absolute pits.)


The Resource Loop
The fundamental idea behind the resource loop is to allow the player to accumulate a kind of virtual currency which can then be traded for expanded capabilities, thus allowing the player to customise the game in a way which appeals to them most. This is often experience points, levels, praxis points or some other accumulation. This allows the player to exercise choice over the medium to long term and customise the game to suit his predilections and skill-set.

To do so, the player needs a tech tree. Consider also the other possibilities:
- Discovering an ancient blueprint and going on a quest to retrieve the other blueprints and to find the necessary items to build a new Javelin platform.
- Discovering new technologies which can be used to develop a whole new class of abilities
- Gaining rare resources which can only be found through skilled exploration of the landscape
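As a sketch of the shape such a resource loop could take under the hood (node names, costs and prerequisites are invented purely for illustration):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Minimal tech-tree sketch: each node costs currency and may require
// prerequisite nodes. Everything here is a made-up example, not a
// proposal for Anthem's actual data model.
struct TechNode {
    int cost;                            // currency required to unlock
    std::vector<std::string> prereqs;    // prerequisite node ids
};

struct TechTree {
    std::map<std::string, TechNode> nodes;
    std::set<std::string> unlocked;

    bool canUnlock(const std::string& id, int currency) const {
        auto it = nodes.find(id);
        if (it == nodes.end() || unlocked.count(id)) return false;
        if (currency < it->second.cost) return false;
        for (const auto& req : it->second.prereqs)
            if (!unlocked.count(req)) return false;
        return true;
    }

    // Returns remaining currency, or -1 if the unlock was rejected.
    int unlock(const std::string& id, int currency) {
        if (!canUnlock(id, currency)) return -1;
        unlocked.insert(id);
        return currency - nodes.at(id).cost;
    }
};
```

The point of the structure is the prerequisite chain: it's what turns "grind currency" into "choose a path", which is exactly the medium-to-long-term choice the resource loop is supposed to deliver.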



The Loot Loop
This is the most objectionable and least fun aspect of the whole exercise. Destiny and Anthem want to draw the player into an operant conditioning (gambling) loop where pretty colors addict players into repeating a joyless grind as often as humanly possible.

It's a cynical exercise to begin with, but if you want to actually make this work, you need to first expand the capabilities of your mechs, throw in a tech tree and then provide a wide range of possible buffs which extend far beyond the classic "more shields/more armor/bigger guns" paradigm. Anthem's loot sucks because there aren't many ways it can buff the mechs, not because the drop rates are rubbish. (Oh, and is there a screen somewhere which shows the accumulated results of all your buffs? Because if there is I can't find the damn thing.)
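To illustrate the kind of accumulated-buff summary I'm asking for, here's a minimal sketch. The stat shapes and stacking rules (flat bonuses sum, percentage bonuses multiply) are assumptions for illustration, not Anthem's actual formulas:

```cpp
#include <vector>

// Buff-summary sketch: flat bonuses add together, then percentage
// multipliers stack on top. The stacking rule is an assumption chosen
// for illustration; a real game would document its own.
struct Buff {
    double flatBonus;    // e.g. +10 armor
    double multiplier;   // e.g. 1.5 for +50%
};

double effectiveStat(double base, const std::vector<Buff>& buffs) {
    double flat = 0.0, mult = 1.0;
    for (const auto& b : buffs) {
        flat += b.flatBonus;
        mult *= b.multiplier;
    }
    return (base + flat) * mult;
}
```

A stats screen is just this computation run per stat and displayed; the fact that the player apparently can't see this total anywhere is the complaint.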



Conclusion
To wrap this up: I had high hopes for Anthem and was incredibly disappointed by the result. And this is not an isolated example. I am increasingly bewildered by the lack of game design chops in some AAA studios. Some people are doing it well, but a fair amount are doing it pretty badly. I don't know what happened with this title, but it feels like Bioware lacks anyone who really understands game-play. A significant correction is needed, and the importance of challenge, reward and multi-axial player choice really does need to be reiterated, as these founding principles really do seem to have become lost along the way.

So if there's three things which I hope this essay is pounding into some people's heads, it's this: Choice! Challenge! Reward! These are the essentials, people. (And it's why "Gone Home", "Dear Esther" and "What Remains of Edith Finch" are not games.)

If Bioware implements even half of what I've outlined here, they've got half a chance of resurrecting this turkey. If they keep tinkering with drop rates and promising minuscule content drops every 3 months, then stick a fork in it - Anthem is done.


TL;DR - Anthem is boring. Hey maybe make it fun?

submitted by PJ_Heywood to AnthemTheGame

I have been working in the engine since 4.0.0; here are some things I learned.

Hello /unrealengine,
My name is Nicholas and I am a freelance programmer who works inside Unreal Engine. I have been working inside the engine since 4.0.0 (also known as ages ago). Since then I have picked up a lot of information that makes my job, and hobby projects, easier. So I figured I would dump some of that information in this post. I see a lot of newer users asking where to get started, and questions that they feel are too easy or feel bad asking. So hopefully this brain dump helps some newer users, or more advanced developers, pick up some information. If anyone has any questions that they want super specific answers for, you can PM me here or on my website!
Most of this is going to be programming. tldr; this is going to be long...
Programming Tips
I really want to learn C++, but don't know where to start? The best place to start is actually with Blueprints. The reason I say this is because a lot of the methods and functionality that Blueprints have are similar to the C++ standard that the engine uses. So if you know a Blueprint method and what it does, you can easily find it in C++. Once you know some basic methods and events, like BeginPlay, Tick, and EndPlay, and what they do, it will be time to dive into the C++ end of the engine. While I DO NOT recommend using the engine to learn how to program C++ (since Unreal Engine is very macro heavy), it is great for someone who has spent some time in the language. If you are starting out with no experience in C++, I recommend downloading Visual Studio, Xcode, or another IDE (Integrated Development Environment) that works with the engine, making a couple of sample code projects and following some tutorials. There are a ton of resources on that end; when you have that under your belt, it's time to dive into the engine. Once you start in the C++ part of the engine, learn to copy simple Blueprints you made in the past: how to print to the console, spawn actors, communicate with the game mode, change values and data, and call methods. This will give you a lot of practice in the core concepts of game programming. In the long term it will benefit you, since you can take your previous knowledge base of Blueprint scripting and method calling and apply it to your new skill of C++ programming. If you need a book or guide, I recommend this personally.
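If you want to practice that BeginPlay/Tick/EndPlay call order before touching the engine, you can mock it in plain C++. To be clear, this is NOT the real AActor API, just a self-contained stand-in with made-up names for getting comfortable with the lifecycle:

```cpp
#include <string>
#include <vector>

// A plain-C++ mock of the Actor lifecycle (BeginPlay / Tick / EndPlay)
// for practicing the call order outside the engine. This is NOT the
// real Unreal AActor class; it's a dependency-free stand-in.
class MockActor {
public:
    std::vector<std::string> log;  // records lifecycle calls in order

    virtual ~MockActor() = default;
    virtual void BeginPlay()           { log.push_back("BeginPlay"); }
    virtual void Tick(float deltaSecs) { (void)deltaSecs; log.push_back("Tick"); }
    virtual void EndPlay()             { log.push_back("EndPlay"); }
};

// Drives one simulated "session": begin, N ticks, end, mirroring the
// order the engine calls these events on a real actor.
void runSession(MockActor& actor, int tickCount, float deltaSecs) {
    actor.BeginPlay();
    for (int i = 0; i < tickCount; ++i) actor.Tick(deltaSecs);
    actor.EndPlay();
}
```

Override the virtuals in a subclass and you're rehearsing the same pattern you'll later use in real engine classes, minus the macros and build setup.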
How do I keep my code base organized and readable if I want to scale? This is a pretty straightforward answer: documentation and standardization. Document your code so that other programmers, scripters, or artists can understand what is going on in your code. In professional development some programmers use a "dead man's switch" in their code, which is the concept of documenting their code so that it is not only readable, but so that another programmer can pick up the code and work in it if something happens to the original programmer. Second, I recommend that you use a standard naming convention; Allar has a great guide here. It is pretty useful to use this early on so that legacy code in your game can be updated and understood easily as you move forward with your project.
Something went wrong when I tried to compile my code, what do I do? The first part of this is to relax and breathe; errors happen, and if you're stressed, coming up with the solution to the error will take longer. Secondly, do not look at the error list that Visual Studio or other IDEs produce; instead look at the Output tab. That tab is where the actual errors come out. It will make your life a lot easier if you know that you need to look there and not at the error list, since Unreal sometimes throws false errors. Thirdly, take your error and try to struggle through it to fix it if you're a more advanced programmer (struggling makes you a better programmer in the long term). If you cannot find a solution, take your question to Google and see if you can find a solution that way. Since half of development is solving errors and issues (part of the fun, to be honest), it would only benefit you to do some research on the error. Ask the right questions to find the answer to your error; this will help you be self-sufficient. If for some reason you cannot find a solution, then stop by here or the Unreal Engine forums. We are always willing to help, but it only benefits you if you try to work through the problem first.
Is source control worth it? Yes, it is absolutely worth it. I worked on a project for six months and then my HDD at the time crashed and fried. I only got my work back because the project was in source control. If you are serious about releasing your project to the public, look into source control right away.
What types of source control do you recommend? This is not a plug at all; I just want to share my thoughts and experience on this topic. My top choice is Perforce. Most professional studios use it, and I prefer it because it locks files by default, leaving no chance of overwriting another person's work. Next I recommend TortoiseSVN; it is a pretty simple but powerful option. A few studios I have worked with used it, though it did require some coordination between programmers to make sure nothing was overwritten. Lastly, I recommend GitHub or Bitbucket. Both are free and easy to use, but they can expose your project to the public unless you pay for private hosting. Keep in mind that all of these options have paid tiers. If you are a solo dev and do not want to pay, just back your project up to a hard disk.
I want my team to grow, but I do not know what to commit to my source control. You really want to commit the following to your repository: your game's .uproject file, your Source folder, your Content folder, and your Config and Binaries folders. Once all of that is there, you just push anything that has been updated in those directories.
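For the Git-hosted options (GitHub/Bitbucket), the usual way to keep the repository down to just those folders is an ignore file. A sketch of one, assuming a stock Unreal project layout (the generated folders listed here are the standard UE ones; Perforce would use a typemap instead):

```gitignore
# Ignore Unreal's generated/cache folders; what remains tracked is the
# .uproject plus the Source, Content, Config, and Binaries folders.
Intermediate/
Saved/
DerivedDataCache/
Build/

# IDE clutter
.vs/
*.sln
*.suo
```

With this in place, a plain `git add .` picks up only the folders the team actually needs to rebuild the game.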
Does everyone need to compile the binaries every time new code is committed? The simple answer is no. Epic released a tool for Visual Studio called UnrealVS. It lets the programmers compile the binaries so that the artists do not have to compile them on their end. The process is pretty easy: just compile your code once to set up the dynamic-link libraries, then run the UnrealVS tool. It will compile again, but it saves your artists and non-C++ programmers time and energy. Here is the tool documentation.
Why would I ever want to expose code or variables to Blueprints? The short answer is flexibility. It allows you as the programmer to change values without needing to dive into the code and recompile each time. It also allows non-programmers to change and tweak values without touching your code, so designers can adjust values without sending you, the programmer, another task.
The longer answer is to protect the code and set up a pipeline for it. Exposing values lets non-programmers modify them without needing to open your code, and prevents them from making a mistake that breaks potentially critical code. It also lets you set an expectation of what people can and cannot change. If they want to tweak non-exposed values and variables, they will need to contact you and have you expose them. If a value is potentially critical, you can put checks in the code to make sure it stays in bounds and will not break anything.
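The "checks in the code" part can be as simple as clamping whatever the designer enters. In Unreal the field would be exposed with `UPROPERTY(EditAnywhere, BlueprintReadWrite)`; here is a plain-C++ sketch of just the bounds check, with hypothetical names (`HealthComponent`, `MaxHealth`) invented for illustration:

```cpp
#include <algorithm>

// Hypothetical component holding a value a designer might tweak from
// the editor. The setter clamps the input so a bad edit cannot put the
// game into a broken state.
class HealthComponent {
public:
    // Accept any input but keep the stored value inside safe bounds.
    void SetMaxHealth(float Value) {
        MaxHealth = std::clamp(Value, 1.0f, 10000.0f);
    }
    float GetMaxHealth() const { return MaxHealth; }

private:
    float MaxHealth = 100.0f;
};
```

A designer typing -50 into the exposed field ends up with 1, not a character who can never be damaged or healed.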
What type of methods should I expose to Blueprints? You should expose anything that might need to be called in child classes or Blueprint implementations of that C++ class. Again, it makes sense to set up a pipeline for passing information from the Blueprint layer to the C++ layer, and to make sure anything passed in can be handled correctly. For example, if a pointer is passed into a method, the C++ layer needs to check that the pointer is not null and is valid at the low level.
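That null check at the C++ layer can look like the following sketch. In Unreal the parameter would be an `AActor*` arriving from a `UFUNCTION(BlueprintCallable)` call and you would test it with `IsValid()`; the names here (`Target`, `ApplyDamage`) are hypothetical, plain-C++ stand-ins:

```cpp
#include <cstdio>

// Hypothetical target type standing in for an actor passed from the
// Blueprint layer.
struct Target {
    int Health = 100;
};

// Validates the pointer before touching it, so a bad Blueprint call
// fails gracefully instead of crashing the C++ layer.
bool ApplyDamage(Target* Who, int Amount) {
    if (Who == nullptr) {
        std::fprintf(stderr, "ApplyDamage called with null target\n");
        return false;
    }
    Who->Health -= Amount;
    return true;
}
```

Returning a success flag also gives the Blueprint side something it can branch on when the call was rejected.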
I hope this helped someone. If you are a developer or have any tips, please drop them in this thread to help newer developers, programmers, and artists get into the engine. Unreal is not a small engine, and it takes some time to get into.
Hope this helps, Nicholas --d0x
submitted by Parad0x_ to unrealengine

New Atlantia: The ruins of Greenway "concept pitch" 01.01.03.1

name of the game:

-

New Atlantia: The ruins of Greenway

---

project pages:

-

https://docs.google.com/document/d/13PHPZeRcitKKL6JtJd1Aod5JtPcPNMHfHqcG_4jYQQs/edit?usp=sharing

---

some descriptive terms:

-

an open source cross-platform title (also works on mobile), OpenGL-powered and created in conjunction with Blender

uses the Godot Engine and is fully moddable through use of the mod.io API

makes heavy use of procedural generation and uses a random hex seed, which you can also enter manually, to set up the generation environment
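A hex seed like that boils down to parsing the string into an integer and feeding it to a deterministic random generator. A minimal C++ sketch (function names are illustrative, not from the pitch; the actual game would do this in Godot):

```cpp
#include <cstdint>
#include <random>
#include <string>

// Parse a user-entered hex string such as "1A2B3C4D" into a numeric seed.
std::uint64_t SeedFromHex(const std::string& Hex) {
    return std::stoull(Hex, nullptr, 16);
}

// Build the world generator from that seed. Two generators created from
// the same hex code produce identical sequences, which is what makes
// world seeds shareable between players.
std::mt19937_64 MakeWorldRng(const std::string& Hex) {
    return std::mt19937_64(SeedFromHex(Hex));
}
```

Because generation is fully determined by the seed, a player can post their hex code and anyone else can regenerate the exact same world.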

persistent sandbox game with a fully destructible environment, including ablation of the ground and hill/mountain-sides through use of explosives, lasers, and drills/diggers

the game makes use of other open source projects, such as Chromium for the integrated web browser and Tox for chat and voice. The game automatically starts an instance of I2P and plays exclusively over the darknet. Exit nodes should be available, and the darknet can be disabled for censorship-related purposes, but game/server instances need to have an exit node enabled to allow censored players to join the I2P network through it

the only advertising is the official Greenway splash at the beginning of the game when you start it up, which explains a bit about the game and asks you to please join Operation Greenway and the effort to create a green and good Greenway

the game is meant to be played online, but there is a single player offline option as well

---

licensing:

-

open source honestyware
https://defuse.ca/honestyware.htm

---

funding:

-

the game will be funded by a 30-second timed Monero miner that deposits the crypto into the Operation Greenway "trust fund". Donations in other currencies will be accepted and converted directly into Monero deposited into the secured fund for this project. No funds from the project may be diverted into other government projects, but other relevant government projects may deposit funds into this one if their funding structure supports the transfer of funds to relevant projects. GreenSoft will receive