

Review: The iPhone X Goes To Disneyland



Alan Dye, who led the software design teams that had to decide how to handle the sensor package, says that embracing it felt the most honest.

“We’ve got this amazing True Depth camera system packed into this space at the upper center of the display. And we thought a lot about how to design for that. And ultimately we felt really comfortable with this notion of being really honest about it and allowing for the content to push out into those beautiful rounded corners,” says Dye.

Dye says that Apple did consider using digital bezels. “We did look at various different design iterations and considered some things that kind of acted as digital bezels if you will. But ultimately we never really felt comfortable with this notion of cropping into the content. We really love the new display, we love that it’s edge-to-edge. We love the way that it fits. It feels so perfectly designed for the overall form and so we’re encouraging people just to kind of push the content right out to the corners.”

In use, I have to say, the notch is just zero problem for me. I don’t give a rat’s ass about it. I know I’ll probably catch heat but I’m not carrying water for Apple here. I think it is absolutely a compromise but, after using Face ID and the True Depth camera for other stuff, I am willing to deal with it.

And beyond “dealing with it,” I can tell you, as one of the few people outside of Apple to have used it for more than a day: you stop noticing it very, very quickly. It’s a part of the display; the areas to the sides are or aren’t used and that’s it. Major apps like Instagram and Facebook have already been updated for the iPhone X screen and they look fine. Apple had to do some serious engineering to get the display out to those corners too, taking advantage of the fact that the OLED panel is flexible.

Watching video in landscape defaults to a cropped-in view, and I largely forget to zoom it. YouTube’s new app reminds you to pinch to zoom so the video fills the screen, and it looks cool in my opinion. I think it’s neat and a bit futuristic. I’ve been waiting forever for asymmetrical screens that are tailor-made for their use case. They’re in every sci-fi movie ever, and we’ve all been stuck with rectangles since the iPhone hit. I’m okay with a change.

I (about half jokingly) called the True Depth area a “flap” back in September. Given that when you minimize an app you can see that it is a whole “card” sliding out from underneath (and screenshots show the area filled in), I am technically correct about that. In the interface design world of the iPhone X, it is a flap that covers that area, not a notch that cuts the area out. It means nothing practically, but if you like the way completely malleable digital content conforms to a definite physicality, this paragraph was for you.

If, however, you use your iPhone for data entry or browsing or whatever in landscape, the True Depth camera is going to be bang in your way, especially if it’s on the left. No getting around it. If that bothers you, don’t get an iPhone X. But even if you think it’s going to bother you I’m not sure it actually will once you spend a few days with it.

Which is sort of the mantra of the iPhone X: Give it a few days and it all gets a lot clearer.

Using iPhone X

Day one of using an iPhone X is profoundly strange and cumbersome in a lot of ways. If you’ve spent years whacking a home button, you’re not going to break down that muscle memory in a couple of hours. I had to get used to swiping up, across, down and up again instead of tapping the button, double-tapping the button or double-tapping and swiping.

Day two is better. Some actions already felt super natural: tapping the screen to wake it, swiping across the home bar to switch directly from one app to the next rather than bringing up the heavy app switcher. Quick and light. Other actions, like quickly dropping out of an app, still resulted in me mashing a home button that doesn’t exist.

Day five is the turn. That’s the point at which the hardest habit to break, tapping the home button to move from any other screen to your home screen, finally starts to fade.

Day six is when things started getting weird with my old iPhones. I found myself swiping up on their home buttons and staring stupidly at the screen, waiting for it to unlock automatically.

Anecdotally, I got the phone on a Monday, and until Saturday I was still stabbing at the home button to go home. Today, a week later as I write this, I swiped up on the home button of my iPhone 7 to try to unlock it. So give it a week or so to acclimate.

Once you do, it’s sweet. The faster 120Hz touch-sampling rate of the screen means that every action is buttery smooth and reacts immediately to your touch. If it didn’t, the whole thing would break down. You no longer have the affordance of the time it takes your finger to leave the home button and reach up to hit the screen before you take action on something. Everything has to happen immediately because your finger never leaves the screen. And that never leaving the screen is so key.

From opening the phone to flipping back and forth between apps to closing one and opening another, it’s all action start to finish. There is no more “out to the home button and back to the screen” bouncing. It’s super-fast and fluid and makes it feel like you’re getting more done more quickly.

The switching from app to app is not an issue at all for the fingers or hand, by the way. I know there were some super awkward spy shots out there, but you just swipe along the bar left to right or right to left to swap apps. It’s easy and relaxed. If you want to access the switcher with the “swipe up and pause” action, you can, but I don’t see any major need for it.

Grabbing Control Center with your left hand is rough work, and I’m still not sold on the placement of it in the top right corner, or the fact that the controls are at the top.

When you’re walking around with a kid in one arm and trying to snag a FastPass for your next ride and you need to adjust brightness or toggle screen lock or anything like that it is damn near painful to do it the regular way. And it’s only slightly more pleasant using your right hand.

Which is why I am so glad that Reachability still exists; it is incredibly useful here. It’s also tied to a much more intuitive activation: a slight “tug down” on the home bar pulls the whole top of the screen down. Then Control Center is easily reachable with your right hand, and at least not impossible with your left. If you’re reading this and looking for the setting, Reachability is now tucked away under Accessibility.

The strongest recommendation I can make for the new no-home-button paradigm is that, after just a week of using it, regular home button actions like double-tapping feel much too heavy. Ten years of the home button, it turns out, was enough to allow us to move on.

Another interface tidbit: I really like the new force-press to activate the camera on the home screen. It feels much more definitive than the fumbly “swipe from right to left” that could go awry on a notification or not trigger because you didn’t quite hit the edge.

I took no special care to preserve battery beyond what I normally would, which is to try to stay off Twitter at Disneyland (you can see that I failed fairly miserably in this regard). The temperature was in the low 90s for the most part, which isn’t crazy for Southern California, but doesn’t do batteries any favors. The reception is still fairly poor in many areas of the park, and the radio hunts for signal a lot inside rides, leading to greater battery drain. Despite that, and despite the fact that I shot hundreds of photos, the battery lasted all day.

I started the day by unplugging the charger at around 8:24 AM and skated into our hotel room at about 9:11 PM with 6 percent left, in power save mode. Not bad: 13 hours, 2 minutes on standby and 6 hours, 4 minutes of usage in such punishing conditions. This is far less than I’d expect to get on a typical day, but not at the parks, where batteries go to get tortured. My iPhone 7 did not make it the full day. The iPhone 8 Plus made it, but I didn’t use it as heavily when I wasn’t shooting comparison photos, and its battery is larger.

Physically, the iPhone X is great. Gorgeous and shiny, it feels heftier and denser, like a piece of high-quality watchmaking. The chrome-like stainless steel band around the phone is picking up some fine abrasions, but they look normal, and I tend to run without a case and scratch the junk out of my phones, so it’s not an alarm-bell issue. The glass back still looks great, with a multi-layer backing that has a very light pearlescent sheen below the top sheet of glass. I also like that they stopped trying to “bevel” the camera bump; it is what it is, and it looks just fine with as minimal a bezel as possible. From the front, well, you get the screen and you get the notch/flap/True Depth camera array.


These high-speed ‘nano-cranes’ could form molecular assembly lines



Things aren’t going well down at the ol’ nano-factory. They’re having trouble getting all those tiny workers to synchronize and move quickly together. But leave it to the Germans to get things running smoothly! All it took was a careful application of that newfangled technology “electricity.”

Tiny nano-scale machines formed from DNA could be the future of manufacturing at small scale but great volume: drugs, tiny chip components and, of course, more nanomachines. But moving simple, reusable machines, even one as basic as a little arm half a micrometer long, is far more difficult than at human scale. Wires for signals aren’t possible at that size, and if you want to move it with a second arm, how do you move that arm?

For a while now, chemical signals have been used: wash a certain solution over a nanobot and it changes its orientation, closes its grasping tip or what have you. But that’s slow and inexact.

Researchers at the Technical University of Munich were looking at ways to improve this situation of controlling machines at the molecular scale. They were working with “nano-cranes,” which are essentially a custom 400-nanometer strand of DNA sticking up out of a substrate, with a flexible base (literally — it’s made of unpaired bases) that lets it rotate in any direction. It’s more like a tiny robotic finger, but let’s not split hairs (or base pairs).

What Friedrich Simmel and his team found, or rather realized the potential of, was that DNA molecules and therefore these nano-cranes have a negative charge. So theoretically, they should move in response to electric fields. And that’s just what they did.
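The “negative charge, so it moves in a field” reasoning can be sanity-checked with freshman electrostatics: the force on the crane is F = qE. The numbers below are back-of-the-envelope assumptions, not figures from the paper; double-stranded DNA carries roughly two elementary negative charges per base pair spaced about 0.34 nm apart, and the applied field strength here is hypothetical.

```python
# Back-of-the-envelope check of the claim above, using F = qE.
# Assumptions (NOT figures from the paper): ~2 elementary negative
# charges per base pair, ~0.34 nm per pair, hypothetical field.
E_CHARGE = 1.602e-19      # coulombs per elementary charge
BP_SPACING = 0.34e-9      # metres per base pair (B-form DNA)

arm_length = 400e-9                       # the 400 nm crane arm
base_pairs = arm_length / BP_SPACING      # ~1176 pairs for one duplex
charge = 2 * base_pairs * E_CHARGE        # total charge, coulombs

field = 1e5                               # V/m, hypothetical
force = charge * field                    # newtons
print(f"force ~ {force:.1e} N")           # tiny in absolute terms, large at this scale
```

Even as a lower bound (the crane is a bundle of helices, not a single duplex), the force is ample to whip around an object with essentially no mass, which is why the field-driven motion is so much faster than diffusing chemicals through solution.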

They attached tiny fluorescent pigment molecules to the tips of the cranes so they could see what they were doing in real time, then observed the cranes as the electric field surrounding them was carefully changed.

To their great delight, the cranes moved exactly as planned, switching from side to side, spinning in a circle, and so on. These movements are accomplished, the researchers say, at a hundred thousand times the speed they would have achieved using chemical signals.

A microscopic image of the nano-crane’s range of motion, with the blue and red indicating selected stop points.

“We came up with the idea of dropping biochemical nanomachine switching completely in favor of the interactions between DNA structures and electric fields,” said Simmel in a TUM news release. “The experiment demonstrated that molecular machines can be moved, and thus also driven electrically… We can now initiate movements on a millisecond time scale and are thus 100,000 times faster than with previously used biochemical approaches.”

And because the field provides the energy, this movement can be used to push other molecules around — though that hasn’t been demonstrated just yet.

But it’s not hard to imagine millions of these little machines working in vast (to them) fields, pushing component molecules towards or away from each other in complex processes or rolling products along, “not unlike an assembly line,” as Simmel put it.

The team’s work, which like most great research seems obvious in retrospect, earned them the coveted cover story in Science.
Featured Image: TUM



Tile lays off dozens after a disappointing holiday



Tile, maker of one of the best-known item-tracking gadgets out there, has laid off some 30 people and reportedly halted the planned hiring of another 10, TechCrunch has learned. This comes less than a year after the company raised a $25 million Series B last May. The layoffs are reportedly due to disappointing sales over the holidays.

When reached for comment, Tile offered the following statement:

As part of our 2018 planning process, the Tile leadership team determined that a recalibration of our priorities was necessary so that the company can focus on the development of our Tile Platform business and core hardware products. Unfortunately, this means that we had to say goodbye to roughly 30 Tile colleagues. Tile remains the leader in smart location, and we will continue creating a world where everyone can find everything that matters.

The roughly 30 employees being “recalibrated” weren’t from any one area, according to information provided to TechCrunch, so it seems, as the company says, to be a general cost-saving measure. A Tile representative pointed out that no hiring freeze was announced, so the 10 hires that were reportedly called off remain a bit of a question mark.

Tile revamped its product line late last summer, improving range and adding two new “Pro” units: a sporty one for active types and a fancy white-and-gold “Tile Style.” Perhaps it was too little, too late, or perhaps Tile has become too popular for its own good and everyone already has all the Tiles they need.

At CES, it announced a handful of new partners that will integrate Tile tech into their products. This is reportedly the new focus of the company — being a platform-first rather than a hardware-first company. No doubt the devices will still be made and sold, of course, but it won’t be the totality of the Tile offering.

Here’s hoping it works and that these layoffs are the last we hear of them.



The BecDot is a toy that helps teach vision-impaired kids to read braille



Braille is a skill that, like most, is best learned at an early age by those who need it. But toddlers with vision impairment often have few or no options for doing so, leaving them behind their peers academically and socially. The BecDot, a toy created by parents facing that challenge, teaches kids braille in a fun, simple way, and is both robust and affordable.

Beth and Jake Lacourse’s daughter Rebecca (that’s her up top playing with the prototype) was born with Usher syndrome, the most common cause of combined deafness and blindness. After finding existing braille toys and teaching tools either too basic, too complex, or too expensive, they decided to take matters into their own hands.

Jake happens to have a background in product design, having worked for years at a company that makes simple, durable environmental sensors. But this was a unique challenge: how do you make a toy that doubles as a braille teaching aid? Months later, he had created a prototype of a production device, albeit with a one-off 3D-printed case.

You can see it in action at the TechCrunch booth at CES here:

The BecDot has a colorfully lit surface on which toys equipped with NFC tags (programmed through an app) can be placed. Once the tag is detected, for instance on a toy cow, up to four braille letters appear, formed by lifted pegs: C-O-W. The device can also emit a sound uploaded by the parent or teacher.
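The tag-to-pegs flow can be sketched in a few lines. The BecDot’s actual firmware isn’t public, so `pegs_for_word` is a hypothetical name; only the dot patterns themselves come from the standard six-dot braille alphabet (dots 1–3 down the left column of a cell, 4–6 down the right).

```python
# Illustrative sketch of the tag-to-pegs mapping described above.
# The BecDot's firmware isn't public; pegs_for_word() is hypothetical.
# Dot patterns are the standard six-dot braille alphabet.
BRAILLE = {
    "c": {1, 4},
    "o": {1, 3, 5},
    "w": {2, 4, 5, 6},
    # ...the rest of the alphabet follows the same standard table
}

def pegs_for_word(word, max_cells=4):
    """Return the raised-dot set for each of up to four braille cells."""
    return [BRAILLE[letter] for letter in word.lower()[:max_cells]]

# A toy cow's NFC tag maps to the word "cow":
print(pegs_for_word("cow"))  # → [{1, 4}, {1, 3, 5}, {2, 4, 5, 6}]
```

The four-cell cap mirrors the device’s limit of up to four letters per word.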

It’s simple, yes — as toys should be for kids this age. Yet it affords blind and partially sighted kids the opportunity to learn the alphabet and identify short words at the same time and in much the same way as sighted children. And with the sounds, lights, and the possibility of integration with books and lessons, kids will likely find it plenty of fun.

Here it’s worth noting that kids with disabilities often suffer doubly, first from simply not having the same senses or mobility as other kids, but secondly from the social isolation that results from not being able to interact with those kids as naturally as they interact with one another. This in turn causes them to fall further behind, isolating them further, and so on in a self-perpetuating cycle. This effect snowballs as time goes on, shrinking kids’ prospects of higher education and employment. We’re talking 70 percent unemployed here.

The BecDot and devices like it could help short-circuit that cycle, both by allowing kids to connect with others and by letting them learn on their own through play.

One of the things holding back devices like this is the complexity and cost of braille displays. If you think what’s behind an LCD is complicated, imagine if every pixel needed to actually move up and down independently and withstand frequent handling. The braille equivalents of e-readers can cost thousands to display a sentence or two at a time — but of course kids don’t need that.

Unsatisfied with the available options, Jake decided to engineer his own. He created a simple Scotch yoke mechanism that can control up to three dots at a time, meaning two of them together can form a full six-dot braille cell. It’s all controlled by an Arduino Uno. Simple means cheap, and the other parts are far from expensive: he told me that his bill of materials right now is around $50, and he could probably get it below $30.
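For the curious, the Scotch yoke is what turns a motor’s rotation into the clean up-and-down travel of a dot: a pin on a rotating crank rides in a slot, producing linear displacement x = r·sin(θ). A minimal sketch of that relationship, using a hypothetical 1 mm crank radius rather than any actual BecDot spec:

```python
import math

def yoke_displacement(radius_mm, theta_deg):
    """Pin travel of a Scotch yoke: x = r * sin(theta).

    A pin on a rotating crank rides in a slot, so spinning the crank
    yields pure linear motion -- here, a braille dot moving up or down.
    The 1 mm radius used below is hypothetical, not a BecDot spec.
    """
    return radius_mm * math.sin(math.radians(theta_deg))

for theta in (0, 90, 270):  # flush, fully raised, fully lowered
    print(theta, round(yoke_displacement(1.0, theta), 6))
```

Because one rotary input drives the whole stroke, there is no per-dot solenoid or piezo actuator, which is where most of the cost of conventional braille displays goes.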

Such a low cost would make the BecDot highly attractive, I should think, for any school with vision-impaired students. And of course there’s nothing stopping sighted kids from playing with the gadget either, as I’m sure they will.

Right now the BecDot is only in the prototype phase, but the Lacourses sounded optimistic when I met with them at CES. They’d been selected for an award and exhibition by Not Impossible, an organization that creates and advocates for tech in the humanitarian space. Jake tells me that their time at the show exceeded his expectations, and that they got a chance to speak with people who can help both move the device toward market and spread the message he and Beth are trying to get out.

Toys like this (follow-up devices could have more letters or spaces for input) could help close the literacy and socialization gap that leads to many deaf and blind people being unemployed and dependent on others later in life. And having educational toys aimed at underserved, marginalized, and at-risk populations seems obvious in retrospect. It’s a simple idea in some ways, but only made possible by a creative and innovative application of technology and, of course, love.


