95 stories
·
1 follower

Stanley FatMax 33-725 25' Tape Measure Buy One, Get One FREE - $19.97

1 Comment

Thumb Score: +21
Amazon.com has the 2-count Stanley 25' FatMax Tape Measure on sale for $19.97. Add two to your cart to receive the Buy One, Get One Free discount (shown during checkout). Shipping is free with Prime or on orders of $25+. Thanks sauceman2534

Deal Editor's Notes & Price Research:
- Don't have Amazon Prime? Students can get a free 6-month Amazon Prime trial with free 2-day shipping, unlimited music, unlimited video streaming & more.
Read the whole story
Lythimus
3 days ago
reply
how else would I be able to measure my measuring tape?

VME Broken on AMD Ryzen

1 Comment

That’s VME as in Virtual-8086 Mode Enhancements, introduced in the Intel Pentium Processor, and initially documented in the infamous NDA-only Appendix H.

Almost as soon as the Ryzen CPUs became available in March 2017, there were various complaints about problems with Windows XP in a VM and with running 16-bit applications in DOS boxes in Windows VMs.

After analyzing the problem, it's now clear what's happening. As incredible as it is, Ryzen has a buggy VME implementation; specifically, the INT instruction is known to misbehave in V86 mode with VME enabled when the given vector is redirected (i.e., when it should use the standard real-mode IVT and execute in V86 mode without faulting). The INT instruction simply doesn't go where it's supposed to go, which leads to more or less immediate crashes or hangs.

How did AMD miss it? Because only 32-bit OSes are affected, and only when running 16-bit real-mode code. Except that with Windows XP and Server 2003 it's much worse: they may not even boot.

To be clear, the problem is not at all specific to virtualization. It has been confirmed on a Ryzen 5 1500X running FreeDOS—which comes with the JemmEx memory manager, which enables VME by default. Until VME was disabled, any attempt to boot with JemmEx failed with invalid opcode exceptions. After disabling VME, FreeDOS worked normally.

That is not surprising because when the problematic INT instruction is executed inside a VM using AMD-V, it is almost always executed without any intervention from the hypervisor, which means the hypervisor has no opportunity to mess anything up.

Now, back to the XP trouble. Windows NT has supported VME since at least NT 4.0 and enables it automatically. That is the case for NT 4.0, XP, Windows 7, etc. For the most part, it would only matter when running a 16-bit DOS or Windows application (such as EDIT.COM, which comes with Windows).

Windows XP and Server 2003 (that is, NT 5.1 and 5.2) are significantly more affected because they were the first Windows releases that shipped with a generic display driver using VBE (VESA BIOS Extensions), and the only Windows family that executed the video BIOS code inside NTVDM (with VME on, if available). Starting with Vista, presumably due to increased focus on 64-bit OSes where V86 mode is entirely unavailable, the video BIOS is executed indirectly, likely using pure software emulation.

The upshot is that the problem is visible in Windows versions at least from NT 4.0 and up, but XP and Server 2003 may entirely fail to boot, either hanging or crashing just before bringing up the desktop.

The workaround is simple: if possible, mask out the VME CPUID bit (bit 1 in register EDX of leaf 1), which is something hypervisors typically allow. Windows does not require VME, and without it, XP can be booted normally on Ryzen CPUs, at least in a VM.
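The masking itself is a one-line bit operation. The sketch below shows the idea with a hypothetical EDX value; an actual hypervisor would apply this filter to the CPUID leaf-1 results it exposes to the guest:

```python
# VME is reported in CPUID leaf 1, register EDX, bit 1.
CPUID_1_EDX_VME = 1 << 1

def mask_vme(edx: int) -> int:
    """Clear the VME feature bit, as a hypervisor would when
    filtering the CPUID feature flags it exposes to a guest."""
    return edx & ~CPUID_1_EDX_VME

edx = 0x178BFBFF  # hypothetical leaf-1 EDX value with VME set
print(f"before: VME={(edx >> 1) & 1}")
edx = mask_vme(edx)
print(f"after:  VME={(edx >> 1) & 1}")
```

With the bit cleared, the guest OS never attempts to enable VME, so the broken INT redirection path is never exercised.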


Read the whole story
Lythimus
9 days ago
reply
I just built a Ryzen machine. The first thing I was going to do was boot FreeDOS and run memtest86+ on it, ha.

Vector barrels ahead with its small-satellite launcher

1 Comment

Vector Space Systems

Vector Space Systems successfully launched a full-scale model of its Vector-R rocket on Wednesday in Mojave, California. The test flight, which remained under 50,000 feet for regulatory purposes, allows the company to remain on track to begin providing launch services for small satellites in 2018, said Jim Cantrell, the company’s chief executive and cofounder.

The Arizona-based rocket company is one of a handful of competitors racing to the launch pad to provide lower-cost access to space for small satellites. These satellites are generally under 500kg in mass and often much smaller (the industry trend is toward smaller, lighter, more capable satellites). The Vector-R rocket will eventually be capable of launching a payload of up to 45kg to an orbit of 800km above the Earth. Other companies trying to reach this market include US-based Virgin Orbit and New Zealand-based Rocket Lab. Neither company has begun commercial launches.


Read the whole story
Lythimus
19 days ago
reply
Is the rocket or the truck the small-satellite launcher?

Teaching Machines to Draw

1 Comment

Abstract visual communication is a key part of how people convey ideas to one another. From a young age, children develop the ability to depict objects, and arguably even emotions, with only a few pen strokes. These simple drawings may not resemble reality as captured by a photograph, but they do tell us something about how people represent and reconstruct images of the world around them.

Vector drawings produced by sketch-rnn.
In our recent paper, “A Neural Representation of Sketch Drawings”, we present a generative recurrent neural network capable of producing sketches of common objects, with the goal of training a machine to draw and generalize abstract concepts in a manner similar to humans. We train our model on a dataset of hand-drawn sketches, each represented as a sequence of motor actions controlling a pen: which direction to move, when to lift the pen up, and when to stop drawing. In doing so, we created a model that potentially has many applications, from assisting the creative process of an artist, to helping teach students how to draw.
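The pen-action sequence described above can be pictured as a simple data format. This is an illustrative sketch, not necessarily the paper's exact encoding: each step records a relative pen offset and whether the pen is lifted before the next step.

```python
# A sketch as a sequence of pen actions: (dx, dy, pen_lifted).
# Illustrative format; the paper's actual encoding may differ.
strokes = [
    (5, 0, 0),   # move right 5 units, pen down
    (0, 5, 0),   # move up 5 units, pen down
    (-5, 0, 1),  # move left 5 units, then lift the pen (stroke ends)
    (2, 2, 0),   # start drawing again elsewhere
]

def to_absolute(strokes, x0=0, y0=0):
    """Convert relative pen offsets to absolute coordinates."""
    points, x, y = [], x0, y0
    for dx, dy, lifted in strokes:
        x, y = x + dx, y + dy
        points.append((x, y, lifted))
    return points

print(to_absolute(strokes))
```

Representing a drawing as this kind of low-dimensional action sequence, rather than a pixel grid, is what makes the sequence-to-sequence treatment below possible.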

While there is already a large body of existing work on generative modelling of images using neural networks, most of it focuses on modelling raster images represented as a 2D grid of pixels. While these models can currently generate realistic images, the high dimensionality of a 2D grid of pixels makes it challenging for them to generate images with coherent structure. For example, these models sometimes produce amusing images of cats with three or more eyes, or dogs with multiple heads.

Examples of animals generated with the wrong number of body parts, produced using previous GAN models trained on the 128x128 ImageNet dataset. The image above is Figure 29 of
Generative Adversarial Networks, Ian Goodfellow, NIPS 2016 Tutorial.
In this work, we investigate a lower-dimensional vector-based representation inspired by how people draw. Our model, sketch-rnn, is based on the sequence-to-sequence (seq2seq) autoencoder framework. It incorporates variational inference and utilizes hypernetworks as recurrent neural network cells. The goal of a seq2seq autoencoder is to train a network to encode an input sequence into a vector of floating point numbers, called a latent vector, and from this latent vector reconstruct an output sequence using a decoder that replicates the input sequence as closely as possible.
Schematic of sketch-rnn.
In our model, we deliberately add noise to the latent vector. In our paper, we show that by inducing noise into the communication channel between the encoder and the decoder, the model is no longer able to reproduce the input sketch exactly, but must instead learn to capture the essence of the sketch as a noisy latent vector. Our decoder takes this latent vector and produces a sequence of motor actions used to construct a new sketch. In the figure below, we feed several actual sketches of cats into the encoder and produce reconstructed sketches using the decoder.
Reconstructions from a model trained on cat sketches.
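The noisy latent channel can be sketched with the standard reparameterized sampling used in variational autoencoders. This is a minimal illustration with made-up encoder outputs, not the paper's full model:

```python
import random

def sample_latent(mu, sigma):
    """Reparameterized sampling: z = mu + sigma * eps, eps ~ N(0, 1).
    The decoder only ever sees the noisy sample z, never mu directly,
    so it cannot copy the input exactly and must model its essence."""
    return [m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

mu = [0.5, -1.2, 0.0]     # hypothetical encoder mean outputs
sigma = [0.1, 0.1, 0.1]   # hypothetical encoder spread outputs
z = sample_latent(mu, sigma)
print(z)
```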

It is important to emphasize that the reconstructed cat sketches are not copies of the input sketches, but are instead new sketches of cats with characteristics similar to the inputs. To demonstrate that the model is not simply copying the input sequence, and that it has actually learned something about the way people draw cats, we can feed non-standard sketches into the encoder:
When we feed in a sketch of a three-eyed cat, the model generates a similar looking cat that has two eyes instead, suggesting that our model has learned that cats usually only have two eyes. To show that our model is not simply choosing the closest normal-looking cat from a large collection of memorized cat-sketches, we can try to input something totally different, like a sketch of a toothbrush. We see that the network generates a cat-like figure with long whiskers that mimics the features and orientation of the toothbrush. This suggests that the network has learned to encode an input sketch into a set of abstract cat-concepts embedded into the latent vector, and is also able to reconstruct an entirely new sketch based on this latent vector.

Not convinced? We repeat the experiment again on a model trained on pig sketches and arrive at similar conclusions. When presented with an eight-legged pig, the model generates a similar pig with only four legs. If we feed a truck into the pig-drawing model, we get a pig that looks a bit like the truck.

Reconstructions from a model trained on pig sketches.
To investigate how these latent vectors encode conceptual animal features, we first obtain two latent vectors encoded from two very different pigs: a pig head (in the green box) and a full pig (in the orange box). We want to get a sense of how our model learned to represent pigs, and one way to do this is to interpolate between the two latent vectors and visualize the sketch generated from each interpolated latent vector. In the figure below, we see how the sketch of the pig head slowly morphs into the sketch of the full pig, and in the process how the model organizes the concepts of pig sketches. We see that the latent vector controls the position and size of the nose relative to the head, as well as the existence of the body and legs in the sketch.
Latent space interpolations generated from a model trained on pig sketches.
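The interpolation step itself can be sketched as a simple linear blend between two latent vectors, each blend decoded into its own sketch. This is a toy illustration with made-up vectors; the actual model may use a different interpolation scheme, such as spherical interpolation:

```python
def lerp(z_a, z_b, t):
    """Linearly blend two latent vectors; t=0 gives z_a, t=1 gives z_b.
    Each intermediate vector would be fed to the decoder to produce
    one frame of the morphing sequence."""
    return [(1.0 - t) * a + t * b for a, b in zip(z_a, z_b)]

z_pig_head = [0.2, -0.7, 1.1]   # hypothetical encoded latent vectors
z_full_pig = [-0.4, 0.9, 0.3]

frames = [lerp(z_pig_head, z_full_pig, i / 4) for i in range(5)]
for f in frames:
    print([round(v, 3) for v in f])
```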
We would also like to know if our model can learn representations of multiple animals, and if so, what would they look like? In the figure below, we generate sketches from interpolating latent vectors between a cat head and a full pig. We see how the representation slowly transitions from a cat head, to a cat with a tail, to a cat with a fat body, and finally into a full pig. Like a child learning to draw animals, our model learns to construct animals by attaching a head, feet, and a tail to its body. We see that the model is also able to draw cat heads that are distinct from pig heads.
Latent Space Interpolations from a model trained on sketches of both cats and pigs.
These interpolation examples suggest that the latent vectors indeed encode conceptual features of a sketch. But can we use these features to augment other sketches that lack them, for example by adding a body to a cat's head?
Learned relationships between abstract concepts, explored using latent vector arithmetic.
Indeed, we find that sketch drawing analogies are possible for our model trained on both cat and pig sketches. For example, we can subtract the latent vector of an encoded pig head from the latent vector of a full pig, to arrive at a vector that represents the concept of a body. Adding this difference to the latent vector of a cat head results in a full cat (i.e. cat head + body = full cat). These drawing analogies allow us to explore how the model organizes its latent space to represent different concepts in the manifold of generated sketches.
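The analogy above is just vector subtraction and addition in the latent space. A toy illustration with made-up numbers; real latent vectors would come from the trained encoder:

```python
def vsub(a, b):
    """Element-wise difference of two latent vectors."""
    return [x - y for x, y in zip(a, b)]

def vadd(a, b):
    """Element-wise sum of two latent vectors."""
    return [x + y for x, y in zip(a, b)]

# Hypothetical latent vectors for illustration only.
z_full_pig = [0.9, 0.4, -0.2]
z_pig_head = [0.5, 0.1, 0.3]
z_cat_head = [-0.3, 0.8, 0.6]

z_body = vsub(z_full_pig, z_pig_head)   # isolate the "body" concept
z_full_cat = vadd(z_cat_head, z_body)   # cat head + body = full cat
print(z_full_cat)
```

The resulting vector would then be decoded into a sketch, which is how the "full cat" analogy in the figure is produced.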

Creative Applications
In addition to the research component of this work, we are also super excited about potential creative applications of sketch-rnn. For instance, even in the simplest use case, pattern designers can apply sketch-rnn to generate a large number of similar, but unique designs for textile or wallpaper prints.

Similar, but unique cats, generated from a single input sketch (green and yellow boxes).
As we saw earlier, a model trained to draw pigs can be made to draw pig-like trucks if given an input sketch of a truck. We can extend this result to applications that might help creative designers come up with abstract designs that can resonate more with their target audience.

For instance, in the figure below, we feed sketches of four different chairs into our cat-drawing model to produce four chair-like cats. We can go further and incorporate the interpolation methodology described earlier to explore the latent space of chair-like cats, and produce a large grid of generated designs to select from.

Exploring the latent space of generated chair-cats.
Exploring the latent space between different objects can potentially enable creative designers to find interesting intersections and relationships between different drawings.
Exploring the latent space of generated sketches of everyday objects.
Latent space interpolation from left to right, and then top to bottom.
We can also use the decoder module of sketch-rnn as a standalone model and train it to predict different possible endings of incomplete sketches. This technique can lead to applications where the model assists the creative process of an artist by suggesting alternative ways to finish an incomplete sketch. In the figure below, we draw different incomplete sketches (in red), and have the model come up with different possible ways to complete the drawings.
The model can start with incomplete sketches (the red partial sketches to the left of the vertical line) and automatically generate different completions.
We can take this concept even further, and have different models complete the same incomplete sketch. In the figures below, we see how to make the same circle and square figures become a part of various ants, flamingos, helicopters, owls, couches and even paint brushes. By using a diverse set of models trained to draw various objects, designers can explore creative ways to communicate meaningful visual messages to their audience.
Predicting the endings of the same circle and square figures (center) using various sketch-rnn models trained to draw different objects.
We are very excited about the future possibilities of generative vector image modelling. These models will enable many exciting new creative applications in a variety of different directions. They can also serve as a tool to help us improve our understanding of our own creative thought processes. Learn more about sketch-rnn by reading our paper, “A Neural Representation of Sketch Drawings”.

Acknowledgements
We thank Ian Johnson, Jonas Jongejan, Martin Wattenberg, Mike Schuster, Ben Poole, Kyle Kastner, Junyoung Chung, Kyle McDonald for their help with this project. This work was done as part of the Google Brain Residency program.


Read the whole story
Lythimus
38 days ago
reply
I want to see cat + box + cyanide - cyanide.

Blizzard hints Nintendo Switch may not be powerful enough for Overwatch

1 Comment

These are the kinds of graphics that would require Blizzard to "revisit performance" before a Switch port, apparently.

Since well before the March launch of the Nintendo Switch, observers have wondered whether the system would get robust software support from major third-party publishers or would instead have to lean more heavily on Nintendo exclusives and games from smaller independent developers. Now, new comments from Overwatch director Jeff Kaplan seem to confirm that porting modern games to such low-powered hardware could be a difficult bottleneck to get through.

In a Reddit AMA thread last month, Kaplan vaguely said that "getting [Overwatch] on the Switch is very challenging for us, but we're always open-minded about exploring possible platforms." In a follow-up interview with the UK's Express newspaper published today, Kaplan elaborated. "I think the problem is we've really targeted our min spec in a way that we would have to revisit performance and how to get on that platform," he said.

That's a striking admission, especially considering that the nearly year-old Overwatch doesn't exactly require a top-of-the-line PC to play; Blizzard asks for an Intel Core i3 processor and GTX 460 or better at a minimum. The game's strong visual design makes it technically playable at extremely low resolutions too, which could make a port workable even if a full 1080p image couldn't be achieved on the Switch.


Read the whole story
Lythimus
42 days ago
reply
What they mean is they don't feel like optimizing it for the Switch.

Scientists have stopped retiring, US sees its researchers aging

1 Comment

(credit: BRICK 101)

The population of people doing science is increasingly older, and fewer young people are establishing careers in the field. A recent paper published in PNAS finds two main factors contributing to the aging of the STEM (science, tech, engineering, and math) workforce. The first is that the baby boomer generation, now ages 50 to 70, from which a large majority of current scientists come, was sizable. The second is that in 1994, universities eliminated mandatory retirement, so many older scientists continue to work long past traditional retirement ages.

This trend could have severe consequences, as science may end up lacking the diverse perspectives needed for creative solutions, and there will be fewer qualified individuals to step up when the boomers finally do retire.

An aging cohort

The authors of this PNAS paper use data from the National Science Foundation and the US Survey of Doctorate Recipients to examine age-related demographic trends in STEM PhD recipients. Between 1993 and 2010, the authors saw a decline in scientists ages 35 to 53, and a rise in scientists older than 53. This shift reflects the aging of people currently working in STEM fields rather than a change in the population of researchers.


Read the whole story
Lythimus
53 days ago
reply
Secretly, all scientists have been replaced by gigantic grant machines. One doesn't retire a machine if it's still operating.