AnVRopomotron 1.1.2 Update Notes

October 7th, 2020

A nice autumn update is here with a new burial to examine and miscellaneous features! Check it out here or read on.

  • Something changed in A-Frame between 0.9.x and 1.0.4: if I rotate a model, its collision box now moves along with it. This allowed me to rotate the credits button behind the Centerpiece so it is just an orb sticking out of the wall instead of the full i-shape.
  • A new burial for the Burial Chamber. Thanks to Global Digital Heritage, I can present a burial found in a Spanish castle. Unlike with Gabriel Archer, the exact biographical details are sparse, but the grave and the site have a lot of cool features. They also scanned the castle, so that is in the room as a small model too.

calatrava castle

  • Burial Chamber fixtures updated to accommodate the new burial. The room now starts out bare, and you can cycle through the burials with a button at the back of the room. The JavaScript flipper from the credits and the touch-activated panels was repurposed here to selectively show and hide different parts of the room. Fun A-Frame fact: if you set an entity to be invisible, its hit box is still there and can be interacted with. This is useful in a lot of cases, but not here, since there would be invisible buttons left over from the unselected burial. My solution is to move the unselected burial and its buttons three meters underground and out of the way. When a burial is selected, it is both made visible and moved up into the room. Why make it invisible too? To reduce rendering time! The burial lifting and lowering function was also rewritten to accommodate different buttons having the same ability. Touching any lift button now moves both the visible and hidden burials. The rewrite also allows for different elevations for different models.
  • The previously blank Burial Chamber wall now has a map that shows the location of the burial and a changing text box with the name and site.
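The show-and-hide trick from the Burial Chamber notes can be sketched like this (a minimal illustration with my own stand-in names and numbers, not the actual AnVRopomotron source). In real A-Frame the same idea would use `el.setAttribute('visible', false)` and `el.object3D.position`; here plain objects stand in for entities:

```javascript
// Each burial is either lifted into the room and made visible, or
// hidden and parked 3 meters underground so its hit boxes are out of
// reach. Making it invisible as well skips rendering it entirely.

const HIDDEN_Y = -3; // meters below the floor (hypothetical value)

// Minimal stand-in for an A-Frame entity: a visible flag plus a position.
// shownY allows a per-model elevation when the burial is on display.
function makeBurial(name, shownY) {
  return { name, shownY, visible: false, position: { x: 0, y: HIDDEN_Y, z: 0 } };
}

function selectBurial(burials, name) {
  for (const b of burials) {
    const selected = b.name === name;
    b.visible = selected;                           // invisible = not rendered
    b.position.y = selected ? b.shownY : HIDDEN_Y;  // move hit boxes away too
  }
}

const burials = [makeBurial('archer', 0), makeBurial('calatrava', 0.2)];
selectBurial(burials, 'calatrava');
console.log(burials[1].visible, burials[1].position.y); // true 0.2
```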


  • Occlusion logic improved. One change is to lighting when entering and exiting the Burial Chamber. This is very subtle now but may be important for future expansions. The other change is to keep grabbed objects visible in VR if the user takes them away from the Grab Lab. It was supposed to do this already, but something broke or it never worked in the first place.
  • Removed normal maps for burial models and changed shader to be flat (unaffected by light sources). This beat endlessly tweaking the lights to get the desired result per burial.
  • Grabbable objects used to phase through the floor when dropped. The navmesh that the user abides by is now also a static object for objects to land on. Also:
  • Anti-Drop Protection for grabbable objects. The site will check every few seconds using a tick function if any object has moved below a certain height. If so, it resets the position back to the center of its respective table. If the object has been moved a few meters away, the object is left on the ground since it’d be more convenient to just manually pick it up instead of walking back to the table to get it again.
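The Anti-Drop check could look something like this (a hypothetical sketch with made-up names and thresholds; in A-Frame it would live in a component's periodic tick):

```javascript
// Reset any object that fell below the floor back to its home spot on
// the table -- unless it was carried far away, in which case the user
// probably wants it where it is.

const FLOOR_Y = -0.5;  // below this height, the object fell through
const CARRY_DIST = 3;  // meters; farther than this means "left on purpose"

function antiDropCheck(obj) {
  if (obj.position.y >= FLOOR_Y) return;        // still above the floor
  const dx = obj.position.x - obj.home.x;
  const dz = obj.position.z - obj.home.z;
  if (Math.hypot(dx, dz) > CARRY_DIST) return;  // carried away: leave it
  obj.position = { ...obj.home };               // snap back to the table
}

// A dropped object that slipped below the floor near its table:
const skull = { home: { x: 0, y: 1, z: 0 }, position: { x: 0.5, y: -2, z: 0 } };
antiDropCheck(skull);
console.log(skull.position.y); // 1 (snapped back onto the table)
```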

  • Your VR hands are now a blue that is complementary to interaction orange.

AnVRopomotron 1.1.1 Update Notes

August 31st, 2020

See it here or read on.

Scale Model Hall and VR Grab Lab each got a thing! Various fixes also abound.

  • Proconsul walks on a big branch into the scene! I took the rhesus monkey and conformed it to Proconsul proportions, which was probably easier than making it from scratch. Challenges: the rhesus polygons were rough, with some ugly shapes throughout. They especially held back re-posing since the polygons were connected to each other awkwardly. Improvements: ear and eye details were enhanced from the earlier appearance. Proconsul-specific challenges: there are a lot of opinions on how to classify proconsulids, including changing names for fossil specimens up to the genus level. I went with a larger species, P. nyanzae, and kept the Proconsul name due to my taxonomic lumping tendencies. Another challenge is that Proconsul is kind of plain? It is a notable prehistoric primate, so that's why it's here. It is always presented walking on four legs on a tree branch because that's what its traits are adapted for. A later change was to add a little interest by giving the model a slightly twisted mid-step pose. I used other quadrupeds, especially baboons and mandrills, for the pose reference.


  • The New Guinea human femur dagger I promised a while ago is now in the Grab Lab. It was a quick conversion from the original model using just normal maps. Thanks to Dartmouth College and Morphosource for making it available.


  • The Grab Lab tables now have a border of “interaction orange” for UI consistency.
  • Jamestown church picture replaced with another that shows the grave site better. The previous picture will be repurposed.
  • Re-render of the rooms to include Proconsul shadows. I also learned how to de-noise the baked texture in Blender so the shadows and highlights are a little more crisp with a little pixelation at the edges.
  • Whoops, the first credits page, the one with my name and stuff, was invisible because the text wrapping was accidentally set to zero. It’s back with new info on the Creative Commons and open source nature of the models and code.

AnVRopomotron 1.1 Update Notes

July 6th, 2020

We’re skipping straight to 1.1 with a lot of improvements.

  • More tarsier! Poses 2 and 3 show stages of a leap and pose 4 shows the result of a successful pounce. Tarsier 1 also updated with a longer tail.


  • A small new wing has opened up in the Scale Model Hall, called the Burial Chamber. Learn about Jamestown and one of its founders, Captain Gabriel Archer. View a life-size model of his grave.

burial chamber

  • Remodeled the building as a single model in Blender using Archimesh instead of as many separate walls and planes in A-Frame. The benefit should be faster loading paired with more impressive looks, since the building is drawn using one instruction (draw call) for the whole thing instead of a separate instruction for each individual wall.

    { Newly remodeled. }


    { Old view. }


  • Also, the Blender model has light fixtures in the ceiling so I can bake the shadows they cast to a texture and present them in VR without a performance loss. In fact, even with the new room, this version has a 1/3rd performance gain.
  • Moved scale models around to open up the room. The same space now has a lot more empty areas for future models.
  • Rewrote how occluding zones are processed for efficiency.
  • Unrelated to the above, I also rewrote the occluding zones themselves. I tried to go extra fancy with complicated zones made in Blender that covered as much ground as possible. Aaand… it doesn’t work. For example, I had a U-shaped zone, but standing in the empty space between the arms didn’t register as leaving it. Why? Well, it turned out my collisions are based on AABB colliders: Axis-Aligned Bounding Boxes. The complex zones were being treated as rectangular boxes instead of the shapes I gave them. There is no getting around that, but I can work with the limitation now that I know it’s there. I made box zones as before, but rearranged the room so more walls block lines of sight, allowing for better zones. This works!
  • It turns out I never fully optimized the centerpiece model so it was being drawn with 9 draw calls. I tweaked the model so now it is drawn with 2 draw calls and the file lost another megabyte.
  • Redid Height Chart as a texture to drop draw calls from around 30 to 1. Rearranged heights and added the tall Australopithecus afarensis Chewie.
  • Moved text around to look nice.
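The AABB surprise from the occlusion-zone experiment above is easy to demonstrate: an axis-aligned bounding box keeps only a shape's min/max extents, so the hollow of a "U" is still "inside" its box. A small illustration (my own sketch, not the A-Frame collider code):

```javascript
// Build the AABB of a zone from its outline points: just the extremes.
function aabbOf(points) {
  const xs = points.map(p => p.x), zs = points.map(p => p.z);
  return {
    min: { x: Math.min(...xs), z: Math.min(...zs) },
    max: { x: Math.max(...xs), z: Math.max(...zs) },
  };
}

function aabbContains(box, p) {
  return p.x >= box.min.x && p.x <= box.max.x &&
         p.z >= box.min.z && p.z <= box.max.z;
}

// Extreme corners of a U-shaped zone's outline (enough to get its AABB).
const uZone = aabbOf([
  { x: 0, z: 0 }, { x: 6, z: 0 },   // base
  { x: 0, z: 6 }, { x: 6, z: 6 },   // arm tips
]);

// Standing between the arms: geometrically outside the U...
const betweenArms = { x: 3, z: 5 };
// ...but still inside its AABB, so "leaving" never registers.
console.log(aabbContains(uZone, betweenArms)); // true
```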

See the previous updates.

AnVRopomotron 1.0.5 Update Notes

April 29th, 2020

A new version is here with some improvements I’m pretty proud of. Visit AnVRopomotron!

1.0.5 (04/29/2020)

  • The Philippine tarsier has clung to the Scale Model Hall! This is one of my favorite primates and a representative of one of the big primate lineages. The tarsier is modeled in ‘classic pose,’ holding onto a vertical branch, but this view hides some of their amazing traits. More poses to follow.


  • Info button model looks more like the letter i. For info.
  • Male mandrill skull in the Grab Lab.
  • Hippo ivory harpoon model fixed: normal map depths were too shallow.
  • Fixed bug introduced in 1.0.4 where stone tools could be picked up with the mouse in web browsers. No… VR only! The reason is that browser grabbing is awkward and I don’t want to push a subpar experience. But:
  • Holoprojectors in the Grab Lab. I figure I should help out browser users, who are 99% of my viewership. Clicking an object now shows a rotating, scaled-up model to really get into its details. VR users can do that too, but can also just move an object closer to their face.
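The holoprojector's spin can be driven by a frame-rate-independent tick, the pattern A-Frame components use (a component's tick receives the time and the elapsed milliseconds since the last frame). A hypothetical sketch with made-up names and speed:

```javascript
// Rotate the projected model by speed * elapsed time each frame, so the
// spin rate stays constant no matter the frame rate.

const DEG_PER_SECOND = 45; // hypothetical spin speed

function makeProjection(model) {
  return { model, scale: 3, rotationY: 0 }; // scaled-up copy of the object
}

function tick(projection, timeDeltaMs) {
  projection.rotationY =
    (projection.rotationY + DEG_PER_SECOND * (timeDeltaMs / 1000)) % 360;
}

const holo = makeProjection('mandrill-skull');
tick(holo, 500);               // one long frame: half a second
console.log(holo.rotationY);   // 22.5
```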


  • Since photos worked so well in the Scale Model Hall, a photo will appear when most objects are grabbed in the Grab Lab as well. Fossils do not have photos. Still trying to find good ones.
  • Little tweaks to text layouts around the place.

See the previous updates.


Making Models

March 2nd, 2020

One half of AnVRopomotron uses photogrammetric scans that others have made, but the Scale Model Hall half is mostly my own creation. When I was just getting started with A-Frame, I experimented with building models from primitive shapes, but concluded that learning low-poly style modeling was an attainable and more rewarding method. I modeled a gorilla after giving up on a chimpanzee. The gorilla worked out because it had larger shapes than the details of a chimpanzee’s anatomy. By chance, the gorilla is also the largest living primate, which made it an interesting subject of comparison in a scale-based VR setting. I made the first final draft of the gorilla and moved on from there.

{ Gorilla v2 (added big toes and re-colored) and mouse lemur to show extremes in modern primate size. }


My second model was the mouse lemur, the world’s smallest living primate. I followed the same tutorial to get started using a mashup of photo references from the front and side. The mouse lemur has more defined eyes and digits than the gorilla so I worked those out by using the knife tool to make new edges and pushing them into the right shapes. As with the gorilla’s head, I rendered the hands separately before moving them into place. I made one hand, mirrored a duplicate, and moved the fingers into different positions before attaching it to the other wrist. To put the small primate at eye level, I quickly made a simple branch structure out of distorted cylinders that the lemur could sit on. 

{ Look at those little fingies! }


Continuing the biggest-smallest theme, my next model was Gigantopithecus, the largest known primate ever. The new challenge this time was that there were no clear photo references. Since this prehistoric ape is known only from individual bones and teeth, there were many reconstructions but no definitive front or side views. The benefit of this situation is that I could make my own version of this animal. One detail of Gigantopithecus reconstructions, especially sculptures, is that they are shown comfortably standing on their legs. Based on living great apes, Gigantopithecus would have been just as awkward standing upright as a gorilla or orangutan. I wanted to model my Gigantopithecus standing up as well, since the size is more impressive, but in a more realistic way. I used photos of orangutans standing on two legs as my reference, since Gigantopithecus was a fellow Asian great ape. What the photos show is that orangutans often support themselves with their arms, such as by reaching up to a support. Since that further accentuates the height of the shape, I used it in my model. I debated whether or not to include orangutan face flaps. I settled on modeling them, using the general rule that fossils are reconstructed based on the closest living relative.


The next sculpt was a combination of past and present. I wanted to make the smallest known primate, Eosimias, but even less is known about its appearance than Gigantopithecus. It was a basal anthropoid, somewhere before the split of the monkey, ape, and human lineages. Based on its location in the primate family tree, the best guess is that it was a tiny, generic monkey. Instead of modeling a generic monkey from scratch, I modeled a rhesus monkey first, then scaled it down and added more arboreal traits, like a long tail, to make the Eosimias. I placed it on the highest finger of the Gigantopithecus to really show off the variation in primate sizes. It’s basically invisible unless you know where to look, but the info window for that display mentions the tiny primate. Its place in the room was temporary, though.


Part of the reference-finding process is referring to scientific descriptions of the anatomy, especially for extinct species. While looking at Eosimias research, I noticed that the oft-repeated fact that it was “thumb-sized” does not actually show up in the peer-reviewed literature. The comparison is also unclear as used in other media since human thumbs are different sizes and different sources disagree on whether the primate is compared to a thumb’s length or width. The research papers also never said it was the smallest known primate, and actually said it was roughly mouse lemur sized. Looking for what is actually considered an undisputed smallest known primate was a challenge. The current level of research points towards Archicebus as the title-holder, just a little smaller than the mouse lemur. I remade my Eosimias model to conform to Archicebus measurements, such as eye orbit and hand size. Unfortunately Eosimias no longer had a place in the site. 

{ Slightly less speculative Archicebus. }


I felt that I needed practice in 3D modeling to lead up to the next model: modern Homo sapiens. Humans have especially complex shapes in biology and clothing (I was certainly not ready to model a nude human). A new detail to consider is who should be the ‘representative’ of our species in AnVRopomotron. Our species is very diverse in physical appearance across spectrums of sex, skin color, and silhouette. Any choice would be unrepresentative of many of us at some level. I settled on Jane Goodall since she is a primatologist who contributed greatly to anthropology and to conservation. There are also good photo references of her from many angles. I modeled her in a walking and talking pose with hands entwined in front for comparison with the other models. She’s also rendered clothed in her field outfit, which fortunately had simple shapes with only a few complexities such as pant cuffs and a collared top. The original model color is the beige from her clothes so it is skin color agnostic. I later changed the color, as detailed below, but still kept it far from representing a human skin color.

{ Centerpiece v1 with flat colors. The Lucy model was a rough draft at this point. }



The Gigantopithecus, Archicebus, and modern Homo sapiens were part of the centerpiece of the app. The last model for the first thing the user sees is Lucy, the famous Australopithecus. Rendering the human model made Lucy much easier, taking only days instead of weeks to get a usable sculpt. I thought of shaping a copy of the human model into Lucy, but decided that I could start from scratch even faster. By this time, I was starting with a lot of the limbs separate, to be joined later. This allows for a lot of posing without deforming the attached polygons. The separate parts were the head, neck, torso, upper and lower arms, upper and lower legs, and each hand and foot. There were ample photo references available for Lucy specifically and australopithecines in general. I debated the pose for a while. Most reconstructions are of her walking or standing still (with special mention for the twisted contrapposto figure in Australopithecus and Kin (Ward and Hammond, 2016)). One thing I see in a lot of descriptions that I disagree with is the abundance of human features in Lucy, especially her face. Based on just the skeleton, Lucy’s head was ape-like instead of human-like, so she would have been closer to chimps in expression and mannerisms. My idea was to give her chimpanzee expressions, but ones that humans could relate to. I referred to the Great Ape Dictionary website, which has a video database of chimpanzee gestures. I chose the reach-palm gesture for Lucy to be welcoming to the user and to show off her long arms. With one arm extended out, the model looked extremely off balance. Her other arm had to pull back as a counterweight, along with turning the body. The result shows a plausible way for a primate with those proportions to pose.

{ First final centerpiece. }


The centerpiece went through several changes before the final final version. At first, the models were all flat shaded with representative colors. When I started to color the other models, such as the gorilla and mouse lemur, with more detail, I hesitated to do the same for the centerpiece. The main reason was that I really did not want the human figure to have a realistic skin color (unfortunately a case of no one gets represented so everyone is equal). The solution I settled on was to take the centerpiece in a stylistic direction. I recolored all of the models to the same bronzy shade and used Blender’s materials to make them shine like polished metal. This made the centerpiece look like its own finished piece instead of a rough version of the other models.

{ Ooh shiny! Lighting, shadows, and reflections are all baked into the texture map. }


I then returned to the rhesus monkey. I wanted something to prop the model up to human height so it was more visible. I was going to make a quick rock, but a chance comment from my brother (“are you going to model microscopic things?”) inspired me to find something more creative. Looking at protein databases online led to a usable model of the antibody that fights the Rhesus factor in human blood. I expanded that model to be a meter tall and perched the monkey on top of it, adjusting its hands and feet to contact the protein model.

{ Rhesus macaque and anti-Rh antibody v1. Shading of the antibody is based on a normal map baked in Blender. }


These models became the first set that populated my Scale Model Hall. A future post will describe the making of the later models and further changes I made to the originals. 



Ward, C. V. & Hammond, A. S. (2016) Australopithecus and Kin. Nature Education Knowledge 7(3):1

AnVRopomotron: The Origin

February 2nd, 2020

Over the last semester, I’ve been working on a virtual reality anthropology experience called AnVRopomotron. Getting the site to release involved a lot of learning in different areas. This is the start of a series of posts about the challenges I worked through to make something I thought would be useful.


Ideas and Influences

Three influences led to the start of this project. One was my frustration during my lectures in conveying the size of modern and prehistoric primates. Particular visual aids helped, such as my versions of the life-size Lucy and Nariokotome Boy cardboard cutouts from eFossils. There were so many other beings whose sizes could only be imagined through my verbal descriptions, though. (I tried drawing them on the whiteboard, but it was time consuming and my depictions were poor.) Another influence was buying an Oculus Quest for myself on my birthday. The release date of this advanced and more-affordable VR headset coincided with my search for a present, so I decided it was the right time to get into virtual reality. My only previous experience was a few minutes with the Tuscany demo using the first Oculus Rift at a comic convention. When I put on my own birthday goggles, I was completely absorbed by the technology and the worlds it presented. I conversed with Darth Vader (more accurately, he towered over me while I fanboyed out). I used the Wander app to Google Street View every place I’ve ever been, then every museum I could find. I thought about making my own virtual museum. The third influence came years earlier, when I pursued 3D modeling and printing as a hobby over winter break. I had no grand plans at the time, but working on my own 3D project gave me the starting skills to do something in virtual reality. If I could make my own VR experience, I could show students the size of the living things I talk about in class in a more immersive way.


How It’s Made

A-Frame turns WebVR programming into an HTML-like, organized development platform. It simplifies the Three.js JavaScript library that has tools for VR rendering and adds its own components for manipulating objects, physics, and other conveniences. I went through the extensive documentation and tutorials, then quickly put together a scene with a few scale polygons and some grabbable objects from the Internet.

{ Testing out polygon borders and grabbing. Note placeholder cylinders for what later became the bronze centerpiece. }

The terrifying beast was a ‘chimpanzee’ made of 3D primitive shapes like spheres and cylinders. That was my original idea for the scale models since they’re easy to make, but it turned out that they look extremely creepy.


{ Proto-chimp model made up of primitive shapes. Besides being creepy, the round shapes also used too many polygons. }

I also thought of drawing cardboard cutouts and standing them up in 3D, but I concluded that I might as well make the most of the three dimensions by hand sculpting models myself. I had no desire to make photorealistic models, but I settled on making ‘low-poly’ figures that captured the general shapes of the subjects.

I settled on Blender as my 3D modeling program. When I made a 3D character for printing before, I actively avoided using Blender, the popular, powerful, free, but also complicated 3D modeling program. Years later, a lot of the software I learned instead has never been updated or has been officially abandoned. I took on learning Blender to stay current. It turns out that Blender has its quirks, but it is very understandable. Between then and now, from version 2.79 to 2.80, the whole user interface was rearranged for the better. It also helped that Youtube is full of Blender tutorials at all levels. There was even an extremely clear low-poly animal tutorial that directly applied to my goal. As I encountered challenges in Blender, a search for tutorials usually led to some solution I could use.

One challenge with modeling is that primates have very complex shapes. I envied the tutorial I followed because it rendered a giraffe. Primates have more going on, such as fingers and toes instead of hooves, and nuanced curves in the head. I started out trying to render a chimpanzee from a photo reference, but it was a mess. I found a good side view of a gorilla, and that became the first model I kept. I left off a lot of features, such as eyes, ears, nostrils, and individual digits except for the big toes. I ended up rendering the head separately from the body and joining the parts together once the features were defined. Those compromises aside, I was satisfied with the recognizable gorilla-ness of my model.

{ First version of the gorilla model with the crude chimpanzee. The yellow block is literally a meter stick to check scale. }


In the next post I will detail the making of the other models, including a return to the gorilla to apply new things I learned. Until then, enjoy the experience!

AnVRopomotron 1.02

Introducing AnVRopomotron

January 13th, 2020

I am extremely excited to announce the availability of AnVRopomotron, a project that I have been working on for the past semester. It is a WebVR site that allows you to experience the scale and shape of anthropological forms. Right now, there is a small collection of models I made that reflect some of the largest and smallest primates past and present. There is also an array of 3D models made by others of fossils, bones, and artifacts. The artists have graciously released these models under Creative Commons licenses so they could be adapted for use in my site. The AnVRopomotron experience works on smartphones and desktop web browsers, but it really shines in virtual reality. Using VR goggles, you can see the models relative to your own height and handle the small objects yourself. Check it out now by clicking the image below, then come back for an overview of what it does.

{ Click to Enter }


AnVRopomotron is a virtual museum of biological anthropology, with some archaeology included too. I aim to make the layout intuitive since many people are unfamiliar with virtual reality. From the start, you can see the centerpiece of the room, which shows four models that represent the breadth of biological anthropology. Along with the smallest and largest primates that have ever lived are the most famous fossil individual, Lucy, and a familiar-looking modern Homo sapiens. From there, you can get a closer look at the centerpiece, or explore one of the two modes that are visible to either side.

To the left is the Scale Model Hall of primates, from the very large to the small and from the past to the present. I made most of these models myself, based on many images online as well as scientific papers describing fossils. (A future post will delve into my modeling process and tips I have learned.) My goal is to cover the main primate lineages and key fossils first, and then model primates with unusual anatomy. Touching the orange information orbs on stands shows panels of information for nearby models.

On the right is the VR Grab Lab, where users with VR goggles and controllers can manipulate the objects to see them from all angles. On desktop browsers and smartphones, the tables are raised to a more convenient height. (Another future blog post and Youtube tutorial will describe how I convert 3D models to be VR friendly.) Touching an object turns an information panel on and off on a nearby wall. The items reflect the availability of a good quality scan and its usefulness in showing something important about biological anthropology or archaeology. If you own a 3D model that you think would be a good fit, let me know!



This site would not have been possible without the help of a lot of programmers. I used A-Frame to make AnVRopomotron. A-Frame turns WebVR programming into an HTML-like, organized development platform. It simplifies the Three.js JavaScript library that has tools for VR rendering and adds its own components for manipulating objects, physics, and other conveniences.


Known Issues

While I worked to make the site as smooth as possible, there are a few issues in AnVRopomotron that I have not solved at release. If the site is refreshed, sometimes the grabbable objects are launched every which way or dropped on the floor. (There will be a future post on the strange bugs that I’ve run into.) A more irksome limitation is that I could not make the site work via Google Cardboard, an accessible VR platform that works through smartphones. The issue is a combination of my JavaScript inexperience and the goals of the A-Frame team. Instead, the site will attempt to access the device motion data for a ‘magic window’ view instead of full 3D. Lastly, touching the objects and orbs can be finicky, as the controls are very sensitive. In VR, try grabbing with just one button instead of a realistic grip motion. If you have any leads on how a beginner JavaScripter can solve these issues, let me know that too.

I will keep working on both halves of the site and add new features as I figure them out. I hope you enjoy what I made and find it educational for yourself and others.

A Real Guide to Virtual Museums

January 1st, 2020

Virtual reality presents an immersive way to learn about the natural world. There are apps for astronomy, historical sites, and lived experiences. One extremely powerful educational app is Wander, which turns Google Street View into a VR experience. I toured famous locations, found every place I’ve ever lived, and played a game with myself where I hit the random button and try to figure out where I am. I used the app the most for viewing museums around the world. Here is a list of my favorite tourable (not terrible) museums on Google Street View. 


Houston Museum of Natural Science (official site)

This modern-looking museum is a treat in virtual reality. The prehistory section is expansive, with a lot of specimens set on unobtrusive white blocks. The paths through the section show a great use of winding corridors to make a space seem much more extensive than it really is. In real life, crowds could be a problem, but that’s no concern in VR. The poses of the fossils and models are extremely creative, along with colorful lighting. Watch a skeletal Homo sapiens get yeeted by a mammoth as another person takes aim with an atlatl. There are also mapped wings on local ecology, Egypt, and the Precolumbian Americas. The VR experience makes this museum somewhere I have to go in real life.




Naturhistorisches Museum Wien (Vienna)

This museum is in an opulent and historical building, but it is also a Tardis as it is far larger on the inside. The density of locations is so high that it can be confusing to navigate using the arrows and the map gives up and shows a blank building. However, if you click around you’ll be randomly teleported to any number of rooms of painstakingly organized specimens. There are rows and rows of present day biology, numerous dinosaur skeletons, and rooms representing human evolution. Paintings and interesting architecture frame every view. Tantalizingly, the room for the Venus of Willendorf is not mapped.



The Field Museum (Chicago)

This museum was a must-stop for me whenever I went to Chicago, and it is still impressive in virtual reality. From the old location of Sue, you can tour the also-famous habitat dioramas that wind through different regions and lineages of animals. On the other side, halls of Native American history and culture can be seen. 


American Museum of Natural History (New York City)

In real life, this is the best museum I’ve been to. The classic dioramas of taxidermied and modeled animals are viewable, though the two-story rooms can be hard to navigate. Random clicking may take you to either floor. It’s worth it to see each scene that encapsulates the ecology of a little slice of our planet, though. The halls of extinct dinosaurs and mammals, human evolution, and cultural anthropology are also present.


Museo Nacional de Antropología (Mexico City)

The grandeur of this museum, which must be the largest anthropology museum in the Americas if not the world, shows through even in Street View. See the amazing displays of prehistoric Native American life, especially Aztec artifacts. There are many life-size structures to take in, from small living spaces to massive temple facades. The northern wing also has a human evolution exhibit, with a lot for paleoanthropologists and bioarchaeologists to view.


Those are big explorable museums on Google Street View or VR. In my scouring of the world for viewable museums, I also ran into a lot of smaller natural history spaces from Australia to Korea. Once you’re done with these, maybe we can go through some of the deeper cuts in virtual museums.

Announcing Chapter 12 of Explorations: An Open Invitation to Biological Anthropology

November 25th, 2019

My chapter in the new Open Education Resource (OER) Biological Anthropology textbook, Explorations: An Open Invitation to Biological Anthropology, is now available! I am super proud to have been a part of this project, and I researched and wrote something that I am very happy to see in print. I wrote Chapter 12: Modern Homo sapiens, which covers what happened to our own species from 315,000 years ago to the distant future. There are skeletal changes, artifacts, interbreeding, geographical expansion, technology, and more. The information and ideas I present are based on my own synthesis of active research in these areas. I contributed the text, but the illustrators deserve a lot of praise. They turned my crude mockups into clear figures with style.

If you found this as a student reader of my chapter, then welcome! I worked hard, with the assistance of editors and reviewers, to present everything clearly and in an organized manner. Where I had to omit fascinating topics and information for the word count (I had already passed the limit), I give you leads to pursue them on your own. The domestication of the chicken comes to mind. I also included Easter eggs that may perk you up as you read. Here are some questions that will point you towards some of them:

  • Which Mortal Kombat character’s name appears in the chapter?
  • In what context was renowned actor Nicolas Cage mentioned?
  • Which extinct megafauna is a favorite of researchers?
  • Where did the author get most of his coffee to fuel his writing?

On a more serious note, some textbook mainstays have been left aside on purpose so you can take a more active role in consolidating the information. Make your own mind maps, tables or illustrations of the following topics to get a big picture view of what went on in human evolution:

  • Skeletal Traits of Modern Homo sapiens
  • Timeline of Modern Homo sapiens Expansion
  • Stone Tool Styles, Dates, and Features
  • Types of Human Social Organizations

If you found this writeup inadvertently, I hope you will give my free chapter a read. You’re in for a treat! I found so many fascinating discoveries and ideas in the research for this chapter. It is a human universal to ponder our origins, and what my chapter collects is what science has found about where we came from. You’ll learn that our past was complex, but full of wonder and even inspiration. The goal of the textbook was to be accessible, so the writing stays out of the way of the knowledge within. Give the other chapters a read to learn what happened in human evolution before my chapter, along with other topics about humans and other primates.

For you professionals and already-informed who have read my chapter, thank you for checking out my take on modern human origins. As you know, our understanding of our own prehistory is still limited in many areas. What I wrote about topics such as behavioral modernity and the peopling of the Americas was my own stance informed by research. While I present multiple hypotheses in these situations, I purposely made it clear which one I believe is the strongest. Maybe future work will turn the tide in another direction (especially ‘behavioral modernity,’ which seems like it is on the edge of a revolution), but the textbook reflects my best synthesis at the time of writing. This chapter is my long-considered answer to the questions “where did our species come from and how are we what we are today?”

Overall I really enjoyed the challenge of writing a textbook chapter and the result as well. It means a lot to be connected with other educators and anthropologists in this textbook project and our collective work is amazing. Here is the link to the textbook website and my own chapter again because I really want you to read it.


Previous posts on the textbook:

Anthropomotron is Now Web-Only

July 4th, 2019

Well, July snuck up on me. Anthropomotron has left the App Store. There are two reasons for this move. One is that stature estimation has been a stable area of research, with no major developments that would require updating the app. The other is that it is pricey to keep the app on the iOS App Store. Combined, these reasons mean that I was paying a yearly fee for my app to sit in the store and be downloadable. If you are one of the 5,095 people who downloaded Anthropomotron for iOS, thank you, and it should always be there unless you delete it. I will leave the current version on the Google Play store for Android since there is no charge to do that, but I won’t develop it further. The web version will be the only official one and the only platform I will update. Thank you to everyone who downloaded, used, or shared Anthropomotron! I’ve only heard positive and constructive feedback, which I appreciate.

Here is web Anthropomotron.