NiftyCent
techie
10h | Jan 27, 2021, 7:20:40 PM
Elastic powers Shell’s flexibility to thrive in the energy sector
/ng/elasticsearch

This post is a recap of a presentation given at ElasticON 2020. Interested in seeing more talks like this? Check out the conference archive.
Shell International knows that it takes cutting-edge technology to thrive in the competitive, global energy industry. With projects around the world, in both renewable and non-renewable energy, Shell must always have insights into the future. From determining expected output to predicting equipment failures, there's no room for guessing in an industry where downtime is unacceptable. 
This is why Shell is a part of the Open Subsurface Data Universe (OSDU), and why they use Elasticsearch for the analysis of a range of geospatial, full-text, and numeric data that is critical in the energy space.
During his ElasticON Global presentation, Johan Krebbers, general manager of Digital Emerging Technologies and VP of IT Innovation at Shell International, spoke to the importance of the flexibility that Elasticsearch provides the energy giant — including its ability to be cloud-native or on-prem, and its ability to be used with several hosting providers to enable Shell to comply with data retention regulations imposed by governments throughout the world.
What’s more, because of its capability to search and analyze so many different data types, Elasticsearch is at the heart of Shell’s observability, machine learning (ML), machine vision, and natural language processing solutions.
Predicting failure, increasing uptime
According to Krebbers, the best time to replace equipment is before it fails. If you wait until it’s too late, the cascading effects can drive up repair costs, cut into profits, and impact revenue and growth. By leveraging the real-time ingest, ML, and observability capabilities of Elastic, Shell is able to use predictive modeling to replace or repair machinery before it fails.
"We use ML for predictive maintenance of our facilities," says Krebbers, who is charged with bringing new technologies to Shell. "You collect the real-time data. You have the ML models. And then [you] start predicting when is a pump going to fail? When is a compressor going to fail? If you can predict failure, you can predict downtime. And downtime always costs you money. So you want to increase your uptime.”
Another way to keep overall costs down is to make sure resources aren't wasted. That’s why Shell uses a machine vision solution with Elasticsearch at its core to gain insights into potential leaks throughout their global infrastructure. 
Detecting spills, leaks, and emissions with robots
"Machine vision plays an important role in leak detection and emission detection," says Krebbers. "You have robots driving around with cameras collecting spillage, leakage, emissions. [Then] you bring it into a cloud environment, apply machine vision, and start looking for videos of leakage and spillage. And when there's an issue, you can immediately raise that with appropriate staff."
Going beyond detecting leaks, spills, and emissions, Shell must always be on the hunt to harvest new energy sources. 
As part of their mission to bring more resources to market, Shell employees are embracing and demanding natural language processing capabilities with Elasticsearch as the prime search engine. Many Shell experts, for example, are using natural language processing to quickly search through mountains of the company’s mission-critical subsurface data related to wells, development, and exploration to keep Shell competitive.
Watch the full conversation with Shell International to learn more about how Shell increases uptime, detects emissions, and surfaces exploration data faster with Elastic.
www.elastic.co/blog/elastic-powers-shell ...

Elastic powers Shell’s flexibility to thrive in the energy sector
Learn how Shell embraces Elastic to predict equipment failures and leaks, increase uptime, and comply with data retention regulations across the globe.
techie
13h | Jan 27, 2021, 4:21:09 PM
Life @ Elastic | Know the Role: Support
/ng/elasticsearch

If you’ve read anything about our culture as a distributed company, or seen other articles about roles at Elastic such as what it’s like in Sales Development or Field Operations, you know we do things a little bit differently from the rest. The way we do Elastic Support is no different. Support is offered at various subscription levels, but no matter your level, our support team is curious and works hard to provide solutions to the most unique use cases.

To give potential Elasticians out there insight into what a role in support at Elastic is like, we spoke with three of our support engineers — Val Guzman, Inbar Shimshon, and Rafi Estrada — to discover a little bit about who they are, how they ended up at Elastic, and what their roles entail.

Val Guzman, Senior Support Engineer

How long have you been at Elastic and where are you located?
I've been at Elastic for two and a half years and I'm based in Portland, Oregon.

What do you like to do outside of work?
I do ballet. That's my main activity outside of work. I also play piano and read. A wide array of things!

How did you get into technology?
I pursued dance pretty seriously until college. My dad wanted me to do something practical. Computer science seemed like the most interesting thing to me, and luckily I was naturally aligned with it. It was so different from what I’d been doing before that I found it really interesting. In school I would have a programming problem, I would break it down into little pieces, and three hours later I was still sitting in the same spot. It just worked with how my brain functions.

Did you start right away in technical support once you graduated college?
Yes, right off the bat. I got an entry-level position at a software company.

How did you end up at Elastic?
I was working for a different company that also had a search function in their product. They had a Lucene back end and I found myself drawn to those search cases, really wanting to learn more. I saw that Elastic was looking for a support engineer and I thought that sounded like a perfect opportunity.

What’s your role like at Elastic?
It’s a mix of being a customer service representative and a detective. Like if Sherlock Holmes had to be customer facing. You have to find where issues originate, which is often in weird places or ways that you’d never expect. While doing all that, you have to be friendly and helpful, working in a way that avoids confrontation in often frustrating situations.

What motivates you to get up in the morning for work?
I take my job really seriously. People are waiting for me to fix their problem. They’ve submitted a support ticket and they’re waiting on me to solve it. But I also really like the team — we have a collective sense of humor that’s hard to explain, but I love it.

How do you define success in your team?
We define our success in how successful our customers are.

What do you think is special about Elastic, something that potential candidates might want to know about?
I think it's really cool to work for Elastic. Honestly, I'm a bit of a fangirl. In support, you get to play around with a lot of things. What the Elastic Stack can do is so vast, you’re always learning something new. When a ticket is filed with a strange problem, it’s up to you as the support engineer to get to the root cause and help the customer.

Inbar Shimshon, Support Engineer I

How long have you been at Elastic?
I’ve been at Elastic for one year and I’m currently based in Tel Aviv, Israel.

What do you like to do outside of work?
Raising twin girls, I have limited time, but when I do have some time alone I love to boulder (climb) and run, play my guitar, and play my PlayStation.

How did you get into technology?
I stumbled into high tech pretty much right after I finished my degree in psychology.

How did you end up at Elastic?
As a support engineer I’ve always ended up working with one or more Elastic products — either Elasticsearch or Kibana. I always loved working with the diversity of the products, so when I got the opportunity to join Elastic I didn't think twice.

What’s your role like at Elastic?
As a support engineer I help identify, debug, and fix issues people may encounter when setting up their Elastic Stack (be it in Cloud, on-prem, etc.), and help identify and resolve any issues or bugs they may experience throughout the stack and its features after the initial setup.

As for how this contributes to Elastic’s success, I believe support is the heart of any company. When a product or update gets released, the first team to hear about any pain points or issues will be support. The impact support has is therefore very important, both for the clients’ experience and for internal feedback on improvements, feature requests, etc.

What motivates you to get up in the morning for work?
I very much love the complexity of the puzzles I get to solve throughout the day. Elastic has so many products and features, it’s kind of dazzling really, but it really does keep you on your toes. What I love is that no issue is ever really the same, so every day here feels like I learned something new — and that to me is a great motivator!

Outside the amazing product and challenging puzzles you get to solve as a support engineer here, what drew me to Elastic the most was the culture and the company’s mindset. It’s simply amazing to work with such a diverse group of people from all over the world. I may be working from home but I definitely do feel connected to the people I work with.

How do you define success in your team?
This will vary per person depending on their initial skill set and goals or ambitions. There is no one measure of success that fits all, which is what I love about Elastic.

What do you think is special about Elastic, something that potential candidates might want to know about?
Working at Elastic is not just working with the Elastic Stack — it’s understanding cloud infrastructure as a whole across other platforms and helping integrate them with Elastic. I know every tech company probably says this, but at Elastic you really work on the most cutting-edge technology in terms of cloud engineering. Working here means you will broaden your skills. The opportunity to learn is endless. Really! I speak to engineers who have been here for four-plus years and they will happily admit that there is still so much they can learn.

Rafael Estrada Maya, Support Engineer

How long have you been at Elastic and where are you located?
I’ve been at Elastic for a little over one year. I’m currently located in Manabí, Ecuador.

What do you like to do outside of work?
I’m heavily into activism, and I’m specifically interested in activism around racial justice.

How did you get into technology?
I've done support my entire life. My dad is a doctor. Specifically, a radiologist. His job involves using up-to-date technology. And so he was probably one of the first people to have a computer in my hometown, which is very small. So from a very early age I was already messing around with computers and supporting his equipment.

How did you end up at Elastic?
After working for Palm Pilot and a company called Convergys that outsourced support for a while, I ended up at another support outsourcing company. They were paying OK, but it was a situation where people were being taken advantage of. I was there for about two years before the work environment got a bit tough. One of my friends there, who is now one of the senior techs here in support, applied at Elastic. He urged me to also apply, and I did and was lucky to get the job.

What’s your role like at Elastic?
What we do is collaboratively “suffer” through the joy of releasing a great product. There are so many moving parts in what we release that we’re there to help customers if or when they spot a problem or need help with a particular, maybe unusual use case. Usually a support role involves a lot of troubleshooting of a particular known problem with a product, and you need to figure out what's wrong with it and fix it, right? With the Elastic Stack, the versatility of the product requires that we find creative solutions for obscure issues.

What motivates you to get up in the morning for work?
The folks in support are amazing. And I’m always learning new skills. Today included.

How do you define success in your team?
Learning together along with the customer. We’re not just figuring out problems, we’re discovering solutions. It goes beyond doing simple support and troubleshooting — it’s about discovering what the product is really capable of.

What do you think is special about Elastic, something that potential candidates might want to know about?
I’ve worked with a lot of companies with a code of ethics, but there are a lot of problematic things going on because of the way these corporations are structured. At Elastic, we’re lucky that isn’t the case. There’s a great amount of authenticity within people, and they’re encouraged to share that authenticity with others. Even during the pandemic when meetups aren’t possible, there’s a great sense of camaraderie among the support team over Slack and our other distributed tools.

Interested in joining a company with a Source Code to live by? We’re hiring. Check out our teams and find the right career for you! Want to read more about life at Elastic? Read more on our blog!
www.elastic.co/blog/culture-life-at-elas ...

Life @ Elastic | Know the Role: Support
Curious what a role in support is like at Elastic? Hear from some of our Elasticians on their career path, how they found Elastic, and why a support role with us is a little bit different from anywhere else.
techie
2d | Jan 26, 2021, 4:20:45 PM
How to export and import Timelines and templates from Elastic Security
/ng/elasticsearch

When performing critical security investigations and threat hunts in Elastic Security, the Timeline feature is always by your side as your investigation workspace. Drilling down into an event is as simple as dragging and dropping to create the query you need to investigate an alert or event.
The Timeline persists as you move throughout the Elastic Security app, and you can add items to it from tables and histograms on the Overview, Detections, Hosts, and Network pages — as well as from within the Timeline itself. A Timeline can collect data from multiple indices to empower investigation of complex threats. Auto-saving ensures that the results of your investigation are available for review by other analysts and incident response teams.
Timeline templates, on the other hand, filter out potentially noisy alerts generated by rules, and are important to ensure all team members are looking at potential threats through the same lens.
Elastic Security now supports exporting Timelines and Timeline templates from one Kibana Space or instance to another, enabling easy sharing and more effective collaboration between team members.
Sharing a Timeline
To share a Timeline, navigate to the Timelines tab, select one or more Timelines, then select Bulk actions > Export selected.

From here, an ndjson file is downloaded. In the ndjson file below, we can see that each Timeline is represented as a single, minified line containing the information required to create a Timeline. View the reference of each field.

Now that we have a model of our Timeline, we can also create a Timeline by importing an ndjson file. Before importing, edit the file in a text editor and replace the value of savedObjectId with an empty string; savedObjectId is the reference used to check whether the Timeline already exists. Leaving the existing savedObjectId in place results in a failure, because the Security app assumes you are updating the existing Timeline, and updating a Timeline by importing an ndjson file is not supported. Use the Kibana UI instead.
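For illustration, a minimal edited line might look like the following before import (every field value here is made up, and a real export contains many more fields; see the field reference above):

{"savedObjectId": "", "title": "Suspicious activity review", "timelineType": "default", "status": "active", "dateRange": {"start": 1611100800000, "end": 1611187200000}}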
Sharing a Timeline template
You can share a Timeline template by exporting and importing it using the same method described above. The Timeline template model is the same as a Timeline, but the filters can be different, as explained in our documentation.

The templates displayed above with a disabled checkbox are prebuilt Elastic templates. While you cannot edit or export these, you can duplicate them into custom templates that you can edit and export to suit your needs.
Creating a new template by importing an ndjson file follows the same procedure as importing a Timeline. The only difference is that you can update a template by importing an ndjson file, which is not supported for Timelines. To do so, edit the template you just exported in a text editor, leaving savedObjectId unchanged. Then find the templateTimelineVersion field and increment its numeric value manually. This confirms the change and avoids a failure.
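For instance, if the exported template line contains "templateTimelineVersion": 1, you would change it to 2 before importing. An illustrative fragment (the placeholder ID and all other fields of a real export are omitted):

{"savedObjectId": "<existing id, left unchanged>", "templateTimelineVersion": 2, "timelineType": "template"}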
Ready to start sharing Timelines and Timeline templates with your team members? Learn more by visiting our Elastic Security documentation.
New to Elastic Security? Experience our latest version on Elasticsearch Service on Elastic Cloud.

www.elastic.co/blog/how-to-export-import ...

How to export and import Timelines and templates from Elastic Security
Elastic Security now supports exporting Timelines and Timeline templates from one Kibana Space or instance to another, enabling easy sharing and more effective collaboration between team members.
techie
7d | Jan 21, 2021, 4:21:33 PM
Personalizing Elastic App Search with results based on search history
/ng/elasticsearch

With Elastic App Search, you can add scalable, relevant search experiences to all your apps and websites. It offers a host of search result personalization options out of the box, such as weights and boosts and curations. You could also add a "these documents might interest you" feature, which surfaces additional content for users, similar to documents they’ve previously searched for. This post walks you through the process of creating this capability using the robust App Search APIs.

Building the search client
The search client is built with the frontend application as usual, except for two additional requirements (apart from creating the actual suggestion views):


1. Tag each analytics event with a user ID.

For example, for each query and click, you’d send an additional analytics tags parameter:

curl -X GET '154d5f7d80774345fg92c8381891faf7.ent-sea ... \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer search-soaewu2ye6uc45dr8mcd54v8' \
  -d '{
    "query": "everglade",
    "analytics": {
      "tags": ["UNIQUE_USER_ID"]
    }
  }'

2. When a list of suggested results is needed (search request, page load, etc.), fire a request to the external controller.

Building the external controller
The external controller is a backend service you build to generate the query used to populate a list of documents based on the user’s past searches. On request, the external controller needs to do the following:


1. Get the terms the user searched for previously:

Call the App Search analytics API for a list of the top n queries over a time range m, filtered by that user ID as a tag. Here’s an example that returns the top 20 queries in the last two months of 2020 for the user tagged UNIQUE_USER_ID.

curl -X GET '154d5f7d80774345fg92c8381891faf7.ent-sea ... \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer private-xxxxxxxxxxxxxxxxxxxxxxxx' \
  -d '{
    "filters": {
      "all": [
        {
          "date": {
            "from": "2020-10-31T12:00:00+00:00",
            "to": "2020-12-31T00:00:00+00:00"
          }
        },
        {
          "tag": "UNIQUE_USER_ID"
        }
      ]
    },
    "page": {
      "size": 20
    }
  }'

2. (Optional) It’s possible to exclude documents the user has already found through search if, for example, you’d like to promote content or products that are more likely to be new to them. Find the documents the user has clicked on so you can exclude them:

Call the App Search analytics API for a list of the clicked documents filtered by user ID.
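As a sketch of that call, modeled on the queries example above (the base URL and engine name are placeholders you would fill in, and the exact filter shape should be checked against the analytics clicks API reference):

curl -X GET '<ENTERPRISE_SEARCH_BASE_URL>/api/as/v1/engines/<ENGINE_NAME>/analytics/clicks' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer private-xxxxxxxxxxxxxxxxxxxxxxxx' \
  -d '{
    "filters": {
      "tag": "UNIQUE_USER_ID"
    },
    "page": {
      "size": 20
    }
  }'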

3. Generate suggested documents:

Issue a multiple search query to the App Search search API using the search terms generated from step 1.

curl -X POST '154d5f7d80774345fg92c8381891faf7.ent-sea ... \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer search-soaewu2ye6uc45dr8mcd54v8' \
  -d '{
    "queries": [
      { "query": "california" },
      { "query": "florida" }
    ]
  }'

(Optional) Add a filter to that query to exclude the documents the user has already clicked on (generated from step 2) — see the sketch after this list.
Return the results of that query to the client.
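Putting the multi search query and the optional exclusion together, a sketch might look like the following. The "none" boolean filter is standard App Search filter syntax, but the document IDs are hypothetical, and excluding by ID assumes your engine exposes an id-like filterable field; you may need to mirror document IDs into a regular schema field.

curl -X POST '<ENTERPRISE_SEARCH_BASE_URL>/api/as/v1/engines/<ENGINE_NAME>/multi_search' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer search-soaewu2ye6uc45dr8mcd54v8' \
  -d '{
    "queries": [
      {
        "query": "california",
        "filters": {
          "none": [
            { "id": ["park_yosemite", "park_joshua-tree"] }
          ]
        }
      },
      { "query": "florida" }
    ]
  }'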


FAQ and additional considerations
Here’s a list of questions and other things to consider as you build.

Can I use this for other segmentation beyond just individual users?
Yes! You can use this with whatever segmentation method you prefer. The tags are the key. Tags are strings that you define, so segmentation could be by user, by geographic region, or any other cohort you can define based on what you know about the user.
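For example (the tag strings here are arbitrary placeholders), a single search event could carry both a user tag and a region tag:

{
  "query": "everglade",
  "analytics": {
    "tags": ["UNIQUE_USER_ID", "region-latam"]
  }
}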

But remember, those tags need to be attached to each search event and click event. If you choose to change your segmentation in the future and you’re not already logging that data, you’ll need to start over or infer cohorts some other way.

What if I want to display search results based on some other arbitrary user data I own?
That’s great! As long as you can turn that data into query terms, you can modify the external controller to include those search results, too.

How can I tune this feature?
Outside of your existing relevance tuning settings, you can sharpen the results in a few ways:


Limit the number of user queries returned more strictly
Limit the time range of user queries returned more strictly
Limit the total results returned from the multi-search query

Why can’t I do this on the front end?
You could if your client has a list of the user’s queries and, optionally, documents on hand — for example with a cookie. Just don’t forget that private keys are required for analytics API access, and you would never want to expose those.

Next steps
If you’d like to experiment with building this search history-based feature, you can spin up a free trial of App Search on Elastic Cloud (or you can download and self-manage). If you have any questions or would like to let us know how your project is going, please drop us a line in our discussion forums.
www.elastic.co/blog/personalizing-elasti ...

Personalizing Elastic App Search with results based on search history
Elastic App Search ships with a robust set of powerful, flexible APIs. Find out how to use them to build a search history-based feature for your users, personalizing their results with a "These documents might also interest you" functionality.
techie
7d | Jan 20, 2021, 8:20:58 PM
Community organizer spotlight - January 2021
/ng/elasticsearch

Community is at the heart of everything we do at Elastic, and we wouldn’t be able to have such a vibrant and active community without our user group organizers. Each month we want to highlight some of our globally distributed user group organizers to get to know them better, learn about their Elastic stories, and understand their motivation for being involved in the Elastic community. We’ll also highlight any tips they can share for hosting successful meetups. This month, we are delighted to showcase a few of our organizers in Latin America (LATAM).
João Neto, Goiânia organizer
João Neto wears many hats as an Elastic community organizer and speaker, but he’s quick to point out his sense of humor. “I like to imagine myself at the age of 40, making 'uncle jokes' (the Brazilian equivalent to dad jokes) at Christmas dinner.” João is also passionate about the “nerdy” stuff and spending time with people who share similar interests, which is why he became an Elastic community organizer in 2019.
Tech wasn’t his first choice, however.
“I wanted to apply for a tourism degree in college, but I couldn’t get into the right classes. I ended up taking a course in IT instead. At that time I didn’t even know how to turn on a computer! But thank God I took that course because technology became a hobby and a passion in my life.”
In 2017, João started working on a project at his former job that used Elastic. The project required replacing a discontinued log forwarder with something more up to date. João says the project was a funny way to start using Elastic.
“I kind of got an upside-down start using the Elastic Stack. I started with Beats, then I went to Logstash, then Elasticsearch, and finally Kibana. In the end I got great results on this project and was hooked.”
When asked what cool projects he’s working on at the moment, João said he’s working on several. “Each one cooler than the last! I'm working on data indexing, observability, and safety projects at work, and I also develop projects on my own to improve my skills and help the community.”
When asked what motivated him to become an Elastic user group organizer, João said he was moved to help people in the same way he was helped by the community when starting his Elastic journey. “I almost see it as a call of duty. If you learn something and neglect to pass it on, that’s knowledge that dies with you. Not cool. Knowledge is meant to be shared!”
João recommends getting involved with the community whether or not you have a specific project to share. He says there’s nothing too simple to showcase to the community — everyone brings their specific vision and discipline to the problem, which is incredibly valuable.
In his free time João likes to hang out with his family, study, travel, and go fishing.
Felipe Queiroz, Brasil organizer
Felipe Queiroz was born in São Paulo, Brazil, where he still lives today. He is currently finishing a degree in Computer Science and works as a Data Lead Tech at Accenture. In his spare time he works on a development project about Elastic called Tech Lipe, which can be found on Medium and YouTube.
Felipe’s Elastic story began in 2016 when he decided to give a presentation to a class of apprentices for his first course project on big data. Felipe used Kibana as a data analysis tool and after playing with it, he never turned back. In 2017 he started working as an analyst, and one of the tools he was tasked to develop used the Elastic Stack. He was thrilled to work on it, and Felipe soon became a proud Elastic Certified Engineer and Analyst, as well as an Elastic Ambassador in Brazil, Elastic Gold Contributor in 2020, and organizer of the Elastic virtual community in Brazil.
In addition to the challenges of his current job, which is very Elasticsearch focused, Felipe has been studying Watcher dynamics and how to optimize it for production at scale (he plans to talk about it at the next Elastic global community event). Felipe has also invested a lot of time (and money) to understand and acquire good microphones, lighting, and video editing to improve and deliver more and more content on the Tech Lipe channel on YouTube!
Felipe says he was motivated to become an Elastic user group organizer when the pandemic started. With everyone staying in their respective homes, Felipe wanted to do something to help and contribute in some way. Having already worked on articles and videos for Tech Lipe, he decided to bring that same high quality to the Elastic community. His first virtual workshop, From Zero to Hero with Elastic, was born when Felipe and his great friend Anselmo Borges created five days of free, documented, recorded, and high-quality content. This content reached more than 700 people live and now has more than 5,000 views. After that success, and a conversation with Priscilla Parodi from Elastic, Felipe saw an opportunity to create a virtual community group for Brazil and bring more digital content to users.
When asked for any tips for people looking to organize an event or meetup, Felipe recommends jumping right in.
“I used to think that there were requirements for presenting a talk or organizing a meetup, and I confess that this was my biggest mistake. What a wasted opportunity to share my knowledge with the community and help out some fellows — all because I was being hesitant! What I’ve learned since is that regardless of your career level, if you believe that passing that content on can add to someone's life, don't hesitate to do it. You’ll be surprised with the reward of having positively impacted the life of someone else.”
In addition to projects that involve Elastic (community, articles, and videos), Felipe likes to spend his free time with his girlfriend, friends, and family. He’s also been spending a lot of time playing the drums on the old Guitar Hero game.
Adam Brandizzi, Brasília organizer
Adam Brandizzi lives in Brasília, Brazil with his wife and son. Adam works for Liferay, a California-based company that makes enterprise portals and digital experience platforms. The Latin American headquarters for Liferay are in Recife, Brazil, where he and his family lived for many years, but now he works remotely from his home city.
Adam started working with Elastic at Liferay where, after a long evolution, they started using Elasticsearch for their text search needs. As part of the team working on search components in their portals and platforms, Adam now works with Elastic a lot.
When asked what cool projects he’s working on at the moment, Adam is enthusiastic about the work he does at Liferay.
“I'm fascinated by a feature our team built (mostly without me), called Result Rankings. With it, our customers can add, remove, or reorder entries in a set of results, for all users, by dragging and dropping. We’re now focused on developing something called the Elasticsearch Experience, which provides APIs to customize our search results even more via the GUI. I'm proud of our Synonyms Editor, which I worked on a lot myself, which allows customers to add and remove synonym sets graphically to Elasticsearch, and a Learning-to-Rank (ML) plugin I built some time ago.”
Adam was inspired to become an Elastic user group organizer when he started working remotely, missing the interactions he used to have with colleagues and other developers. “I went to Elastic{ON} Tour São Paulo 2019 and learned that Elastic was looking for community organizers,” said Adam. “I was scared at first but then I met other candidates, as well as Priscilla Parodi (who at the time was the Elastic Community Advocate for Brazil). We had such a great opportunity to reignite the community. I really loved it! Now I take part in many of the Brazil Elastic communities and even organize some.”
When asked for any tips when organizing an Elastic meetup, Adam insists it’s all quite simple.
“Many companies want to host meetups. Or at least pay for the coffee. COVID-19, for all its awfulness, showed that online events and communities are really amazing and have their own advantages. Sometimes you only need to create a Slack, Telegram, or WhatsApp group and you’re ready to go. If it is too hard, or frustrating, you are probably doing it wrong.”
In his spare time Adam likes reading, watching documentaries, writing, and programming as a hobby. He’s also trying to pick up some new languages during the pandemic. “Right now I'm trying to get my German to a usable level and am starting to learn Chinese. I'm not there yet but the fun is in the journey anyway.”
Victor Villas Bôas Chaves, São Paulo organizer
Victor is a software engineer working with search at Gupy, one of the biggest and fastest-growing HR tech companies in Brazil. He began working with Elastic products when a Postgres instance he was working on at Gupy kept falling over because of an ILIKE query. Desperately needing a scalable search tool, Victor and his team did a quick and dirty migration, dumped huge JOINs into Elasticsearch with no prior mapping decisions, and away they went.
Two years after that first Postgres dump into Elasticsearch, Victor and his team are still refining the system. Victor was inspired to become an Elastic community organizer to help him grapple with all the possibilities the Elastic Stack has to offer.
“Elasticsearch is a powerful beast, and it's pretty often people's first distributed system and non-RDBMS large scale storage solution. There's a lot to learn about its query system and operation. Often, you have to learn it fast, and deeply, because scalability is a ticking bomb and there's often little space for mistakes on such big data systems. The community is the way forward: if I can't learn by my own mistakes and experiments, I have to learn with others. Solidifying that knowledge by teaching others is also a great opportunity to learn more.”
When asked if he has tips for people interested in organizing or speaking at Elastic meetups, Victor says anyone can do it.
“It's not a matter of technical knowledge. You probably already know enough to teach. The effort is on making it interesting and engaging for the right audience. On the organizer side of things, it’s your job to encourage people and convince them they are qualified to give it a try.”
Victor says he doesn’t have many hobbies, but he is currently on a deep dive into the world of coffee, aiming to obtain barista status. Aside from coffee, he’s also a dedicated player of Elite Dangerous, a space flight simulation game.
If you are interested in becoming a user group organizer for an Elastic user group in your town, please reach out to meetups@elastic.co and we’ll be happy to assist you on your journey. For upcoming virtual meetups, check out our Elastic Community website.

www.elastic.co/blog/community-organizer- ...

Community organizer spotlight - January 2021
Community is at the heart of everything we do at Elastic. Each month we’re highlighting a few of our user group organizers. Meet our January heroes.
techie
8d | Jan 20, 2021, 4:20:41 PM
How to map custom boundaries in Kibana with reverse geocoding
/ng/elasticsearch

Want to create a map of where your users are? With the GeoIP processor, you can easily attach the location of your users to your user metrics. 
Right out of the box, Kibana can map this traffic immediately by country or country subdivision:

Plus, the new User Experience app for Elastic APM automatically creates maps based on monitoring data:

But what if you want to take this one step further and create maps with different regions?
Custom regions: metro area, proximity to IKEA, anything...
Elastic Maps comes with a lot of great region options so you can get started quickly, but it also offers the ability to easily map your own regions. You can use any boundary data you'd like for this, as long as you have source data that contains a longitude and latitude.
For this example, suppose we use GeoIP, which is built into Elasticsearch. GeoIP is a common way of transforming an IP address to a longitude and latitude. 
GeoIP is roughly accurate at the city level globally and at the neighborhood level in selected countries. It’s not as precise as an actual GPS location from your phone, but it’s much more precise than just a country, state, or province. So there’s a lot of resolution between the precision of the longitude and latitude from GeoIP and the default maps you get in Kibana.
This level of detail can be very useful for driving decision-making. For example, say you want to spin up a marketing campaign based on the locations of your users, or show executive stakeholders which metro areas are experiencing an uptick in traffic. That kind of scale in the United States is often captured by what the Census Bureau calls the Combined Statistical Area (CSA). It is roughly equivalent to how people intuitively think of which urban area they live in, and it does not necessarily coincide with state or city boundaries.
This subdivision is central to many of the Federal Government’s policies, such as making cost-of-living adjustments to fiscal benefits. CSAs generally share the same telecom providers and ad networks. New fast food franchises expand to a CSA rather than a particular city or municipality. Basically, people in the same CSA shop in the same IKEA.
Assigning a spatial identifier to a feature based on its location is called reverse geocoding or spatial joining. It’s one of the most common operations in geographic information systems (GIS). 
In the Elastic Stack, this reverse-geocoding functionality resides within Elasticsearch via the enrich processor. Here we're going to use Kibana to manage these processors and then create maps and visualizations. In the tutorial below, we will use CSA boundaries to illustrate reverse geocoding.
Reverse geocoding with the Elastic Stack
Step 1: Indexing the geospatial data
This will probably be the most custom part of any solution, so we’ll skip it 😜. Most integrations can rely on the GeoIP processor to transform an IP address into a geo_point field.
Whatever process you have used to index your data, you’ll have a document using the ECS schema that will contain two sets of fields created by the GeoIP processor:  

destination.geo.* for where requests are going (usually a data center)
client.geo.* for the origin of the request, sometimes called origin.geo.*.
The relevant bit here is the *.geo.location field. It contains the longitude and latitude of the device.
For the rest of this tutorial, we’ll use the kibana_sample_data_logs index that comes with Kibana, since that’s quicker to follow along with. The critical part for reverse geocoding is the presence of the longitude/latitude information, not how that longitude/latitude field was created.
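If you do need to wire up GeoIP yourself, a minimal sketch of such an ingest pipeline might look like the following (the pipeline name and the client.ip source field are assumptions; adjust them to your data):

PUT _ingest/pipeline/geoip-example
{
  "description": "Illustrative: resolve client.ip to an ECS-style client.geo object",
  "processors": [
    {
      "geoip": {
        "field": "client.ip",
        "target_field": "client.geo"
      }
    }
  ]
}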
Step 2: Indexing the boundaries
To get the CSA boundary data, download the Cartographic Boundary shapefile (.shp) from the Census Bureau’s website.
To use it in Kibana, we need it in GeoJSON format. I used QGIS to convert it to GeoJSON. Check out this helpful tutorial if you'd like to do the same.
Once you have your GeoJSON file, go to Maps in Kibana and upload the data using the GeoJSON uploader. 
Zoomed in on the result, we get a sense of what exactly constitutes a metro area in the eyes of the Census Bureau. I added some tooltip fields using the Tooltip Fields in the layer editor.

This upload created our CSA index containing the shapes we’ll use for reverse geocoding.
Step 3: Reverse geocoding with a pipeline
In order to create our pipeline, we first need to create the reverse geocoder. We can do this by creating a geo_match enrichment policy.
Run the following from Dev Tools in Kibana:
PUT /_enrich/policy/csa_lookup
{
  "geo_match": {
    "indices": "csa",
    "match_field": "coordinates",
    "enrich_fields": [ "GEOID", "NAME" ]
  }
}

POST /_enrich/policy/csa_lookup/_execute
This creates an enrich policy called csa_lookup. It uses the coordinates field, which contains the shapes (it has the geo_shape field type). The policy will enrich other documents with the GEOID and NAME fields; it also automatically attaches the coordinates field. The _execute call is required to initialize the policy.
Then we’ll integrate this reverse-geocoder into a pipeline.
PUT _ingest/pipeline/lonlat-to-csa
{
  "description": "Reverse geocode longitude-latitude to combined statistical area",
  "processors": [
    {
      "enrich": {
        "field": "geo.coordinates",
        "policy_name": "csa_lookup",
        "target_field": "csa",
        "ignore_missing": true,
        "ignore_failure": true,
        "description": "Lookup the csa identifier"
      }
    },
    {
      "remove": {
        "field": "csa.coordinates",
        "ignore_missing": true,
        "ignore_failure": true,
        "description": "Remove the shape field"
      }
    }
  ]
}
Our pipeline consists of two processors:

The first is the enrich processor we just created. It references our csa_lookup policy. It creates a new field csa that contains the CSA identifiers (GEOID, NAME) and the CSA geometry (coordinates).
The second is a remove processor that removes the CSA geometry field. (We don’t need it since we are only interested in the identifiers).
Step 4: Running the pipeline on all your documents
Now that the pipeline is created, we can start using it. And a great thing about pipelines is you can run them on your existing data.
With _reindex, you can create a new index with a copy of your newly enriched documents: 
POST _reindex
{
  "source": {
    "index": "kibana_sample_data_logs"
  },
  "dest": {
    "index": "dest",
    "pipeline": "lonlat-to-csa"
  }
}
With _update_by_query, all the documents are enriched in place:
POST kibana_sample_data_logs/_update_by_query?pipeline=lonlat-to-csa
Step 5: Running the pipeline on new documents at ingest
All the existing docs are updated. Now we need to make sure we also use this pipeline when indexing new documents:
POST kibana_sample_data_logs/_doc/testdoc?pipeline=lonlat-to-csa
{
  "geo": {
    "coordinates": {
      "lon": -85.7585,
      "lat": 38.2527
    }
  }
}
Let's test it out:
GET kibana_sample_data_logs/_doc/testdoc
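If the enrichment worked, the test document's _source should now include a csa object alongside the coordinates, along these lines (the GEOID and NAME values are illustrative, not taken from a real response):

{
  "geo": {
    "coordinates": {
      "lon": -85.7585,
      "lat": 38.2527
    }
  },
  "csa": {
    "GEOID": "350",
    "NAME": "Louisville/Jefferson County--Elizabethtown--Bardstown, KY-IN"
  }
}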
You can also set up a default pipeline to have this reverse geocoding applied to each incoming document by default:
PUT kibana_sample_data_logs/_settings
{
  "index": {
    "default_pipeline": "lonlat-to-csa"
  }
}
Step 6: Creating a map
Back in the Maps app, click Add layer. Then select Choropleth Layer:

We’ll select our CSA layer (these are the shapes) and join the features by the unique GEOID identifier. Then we’ll join the aggregate info from our request index. The join field here is csa.GEOID, which was created by the pipeline.
After changing the default color ramp from green to red and adding some tooltip fields, we can now create our map. In this case, it shows a few hotspots in the Dallas, Indianapolis, and New York metropolitan areas.


Get started today
Hopefully this got you thinking about how to use a reverse geocoder. It’s an incredibly powerful tool for creating custom maps and gaining new insights into your data. If you're not already using Elastic Maps, try it out free in Elastic Cloud. For any feedback and questions, our Discuss forums are the perfect venue. And if you find yourself breaking the boundaries (ha!) of your old mapping limitations, show us what you made! Connect with us in the forums or @ us on Twitter.
www.elastic.co/blog/how-to-map-custom-bo ...

How to map custom boundaries in Kibana with reverse geocoding
Tired of map boundaries such as zip and area codes? Now you can easily create maps in Kibana with the GeoIP processor in Elasticsearch. Learn about indexing geospatial data, creating and running a pipeline on your documents, and more.
techie
9d | Jan 19, 2021, 2:20:59 PM
License Change Clarification
/ng/elasticsearch

We've had a few questions about our recent license change to Elasticsearch and Kibana, and while we’ve been updating our FAQ, we wanted to clarify who is affected by this change:

Our on-prem or Elastic Cloud customers will not be impacted.
The vast majority of our users will not be impacted.
The folks who take our products and sell them directly as a service will be impacted, such as the Amazon Elasticsearch Service.
If you're using the products or building an application on top of Elasticsearch and Kibana, our intent is that you won't be impacted. We have been updating our FAQ continuously, based on the questions we’ve been seeing, but if you have any questions that aren’t yet addressed, please reach out to us at elastic_license@elastic.co.
We also wanted to clarify how the dual license works. We moved the Apache 2.0 licensed source code of Elasticsearch and Kibana to be dual licensed under the Elastic License and SSPL. You choose which license to use:

SSPL is well known - millions of people use MongoDB under this license today. We chose this license as an option to make the decision easy for the millions of developers using MongoDB. SSPL, a copyleft license based on GPL, aims to provide many of the freedoms of open source, though it is not an OSI approved license and is not considered open source.
Elastic License is also well known - if you use our default distribution, like millions of others and 90%+ of our downloads over the past 3 years, you already use it and there is no change for you. It is source-available and allows free use, with none of the copyleft aspects of SSPL. The Elastic License does not allow taking the product and directly selling it as a service (like Amazon Elasticsearch Service), redistributing the products, hacking the source code to grant yourself access to our paid features without a subscription, or using modified versions in production.

The future of the Elastic License
As noted in our FAQ and based on the feedback so far, we're considering ways to further simplify the Elastic License. Our goals align well with the spirit of the BSL, created by MariaDB and also used by CockroachDB, who “... believe[s] this is the best way to balance the needs of the business with our commitment to Open Source” in their excellent blog about their decision to take this approach.
The BSL, endorsed by OSI founder Bruce Perens, is a simple, parameterized license, which each company can customize to match its needs. It provides the right to copy, modify, create derivative works, and redistribute, as long as the "additional rights" parameters are met. We are evaluating an additional rights grant that would allow production use, with only 3 simple limitations:

You may not use the licensed work to provide an "Elasticsearch/Kibana as a Service" offering.
You may not hack the software to enable our paid features without a subscription.
You may not remove, replace, or hide the Elastic branding and trademarks from the product (e.g., do not replace logos).
Then after a period of time, typically 3-4 years, but not more than 5 years, the restrictions lapse, and the source code automatically converts to an Open Source license, in our case Apache 2.0.
To be clear, BSL is not an OSI approved license.
We’re taking our time to get this right. Ideally we would offer a single license that covers both our free and paid features while still being as open as possible, which is a delicate balance, especially if it means the code becomes open source after 3-4 years. If we can achieve it safely, we can provide more freedoms for our commercial features and a simple, single license for our distribution. This is the kind of challenge that is worth working hard for. We are worried about it being abused, from you know who :), so bear with us.
If we decide it is not the right approach, we will consider splitting it into a BSL-based Elastic Community License for our free features and a simplified Elastic License for our paid features.
Our intent is to finalize it by our next release, 7.11, as we mentioned in the blog post, so we would like your feedback! Let us know if this approach would work for your use case at elastic_license@elastic.co.

www.elastic.co/blog/license-change-clari ...

License Change Clarification
We've had a few questions about our recent license change to Elasticsearch and Kibana.
techie
13d | Jan 14, 2021, 6:20:49 PM
Elastic Stack 7.10.2 released
/ng/elasticsearch

Version 7.10.2 of the Elastic Stack was released today. We recommend you upgrade to this latest version.
The 7.10.2 patch contains fixes and small enhancements for the stack.
7.10.2 Release Notes
Elastic Stack

Elasticsearch
Kibana
Beats
Logstash

Elastic Enterprise Search

Workplace Search
App Search

Elastic Observability

APM

Elastic Cloud

ECE 2.7.1
ECK 1.3.1

www.elastic.co/blog/elastic-stack-7-10-2 ...

Elastic Stack 7.10.2 released
Elastic Stack 7.10.2 has been released. Read about the updates and bug fixes that have been included in Elasticsearch, Kibana, Beats, Logstash, Enterprise Search and APM.
techie
14d | Jan 14, 2021, 2:20:38 PM
Doubling down on open, Part II
/ng/elasticsearch

Upcoming licensing changes to Elasticsearch and Kibana
We are moving our Apache 2.0-licensed source code in Elasticsearch and Kibana to be dual licensed under the Server Side Public License (SSPL) and the Elastic License, giving users the choice of which license to apply. This license change ensures our community and customers have free and open access to use, modify, redistribute, and collaborate on the code. It also protects our continued investment in developing products that we distribute for free and in the open by restricting cloud service providers from offering Elasticsearch and Kibana as a service without contributing back. This will apply to all maintained branches of these two products and will take place before our upcoming 7.11 release. Our releases will continue to be under the Elastic License as they have been for the last three years.
In recent years, the market has evolved, and the community has come to appreciate that open source companies need to better protect their software to continue to innovate and make the investments required. As companies continue the shift to SaaS offerings, some cloud service providers have taken open source products and provided them as a service without investing back into the community. Moving to the dual license strategy with SSPL or the Elastic License is a natural next step for us after opening our commercial code and creating a free tier, all under the Elastic License, nearly 3 years ago. It is similar to those made by many other open source companies over these years, including MongoDB, which developed the SSPL. The SSPL allows free and unrestricted use, as well as modification, with the simple requirement that if you provide the product as a service, you must also publicly release any modifications as well as the source code of your management layers under SSPL.

This change in source code licensing has no impact on the overwhelming majority of our user community who use our default distribution for free. It also has no impact on our cloud customers or self-managed software customers.
Our open origins
My personal journey with open source goes a long way back. In 2005, I open sourced my first project, Compass, to provide a Java framework on top of Apache Lucene while I was building a recipe app for my wife. In the following five years, I invested many weekends and nights working on it, from writing code to helping users with bugs, features, and questions.
I had no idea what I was signing up for, especially with a day job “on the side,” but I fell in love with the opportunity to make such a positive impact — trying to build a great product, but more importantly, a great community around it, through the power of open source.
In 2009, I decided to do it again, and started to write a brand new project called Elasticsearch. I spent many nights and weekends building it, and in 2010 open sourced it. I even quit my job and decided to dedicate my full attention to it: to be there for the users, writing code and engaging on GitHub, mailing lists, and IRC.
And when we founded Elastic as a company in 2012, we brought the same spirit to our company. We invested heavily in our free and open products, and supported the rapid growth of our community of users. We expanded from just Elasticsearch to Kibana, Logstash, Beats, and now a complete set of solutions built into the Elastic Stack: Elastic Enterprise Search, Observability, and Security.
We have matured the products, fostered vibrant communities around them, and focused on providing the greatest amount of value to our users. Today, we have hundreds of engineers who wake up every day and work to make our products even better. And we have hundreds of thousands of community members who engage with us and contribute to our shared success.
I am proud of the company we built, and humbled by the level of trust that we have earned with our user base. This starts by being open and transparent, and continues with being true to our community and user base in our choices.
Free and open FTW
Back in 2018, we opened the code of our free and paid proprietary features under the Elastic License, a source-available license, and we changed our default distribution to include all of our features, with all free features enabled by default.
We did this for a few reasons. It allowed us to engage with our paying customers in the same way we engage with our community: in the open. It also allowed us to build free features that empower our users without providing those capabilities to companies that take our products and provide them as a service, like Amazon Elasticsearch Service, and profit from our open source software without contributing back.
This approach was well received — today, over 90% of new downloads choose this distribution — and has allowed us to make so much of our work available for free while also building a successful company.
The list of improvements under this new free and open, yet proprietary, license is overwhelming. I am humbled by the amazing progress our team and community have made across all our products, so much so that I would love to share some of them:
We've dramatically improved the speed, scalability, and reliability of Elasticsearch, with a new distributed consensus algorithm and significantly reduced memory usage, in addition to new data storage and compression approaches that have reduced the typical index size by nearly 40% while improving indexing and query throughput. We added new field types for geospatial analysis, and more efficient ways to store and search logs and perform fast, case-insensitive search on security data. In Kibana, we cut load time by 80% and eliminated whole-page refreshes thanks to a multiyear replatforming project, while at the same time introducing an intuitive drag-and-drop data visualization experience with Kibana Lens, key capabilities like dashboard drill-downs, and so much more.
Over the last three years, we also built first-class experiences around our most common use cases. In the security area, we created a free and open SIEM right inside Kibana, with a powerful detection engine that supports simple rules as well as complex correlations via a new query language called EQL in Elasticsearch. We include hundreds of detection rules, which we develop publicly, in collaboration with our community. And we joined forces with Endgame, a leading endpoint security company, and have released powerful malware protection for free as part of the Elastic Agent, our unified, centrally managed observability and security agent for servers and endpoints, with more to come.
In observability, the story is similar. We've built an entire observability suite right inside Kibana — from a live-tail logging UI to an intuitive infrastructure-level view of the key metrics and alerts across your hosts, pods, and containers. And we now have a fully featured APM product with open source data collectors and agents, supporting OpenTelemetry, real user monitoring (RUM), synthetic monitoring, and the recent addition of user experience monitoring.
With Elastic Enterprise Search, we introduced App Search, a layer on top of Elasticsearch that simplifies building rich applications and provides powerful management interfaces for relevance tuning, as well as analytics on how it's being used. We also provide a free Workplace Search product that makes it easy to integrate and search the content sources that you use to run your life or company, like Google Workplace, Microsoft 365, Atlassian Jira and Confluence, and Salesforce.
It is simply amazing that we've been able to build all of these capabilities and provide them for free to our community. It has been humbling to see the level of engagement and adoption around our products and how these new features have helped so many people and businesses succeed. And this was possible because the overwhelming majority of our community chose our default distribution under the Elastic License, where all these features are free and open.
Why change?
As previously mentioned, over the last three years, the market has evolved and the community has come to appreciate that open source companies need to better protect their software in order to maintain a high level of investment and innovation. With the shift to SaaS as a delivery model, some cloud service providers have taken advantage of open source products by providing them as a service, without contributing back. This diverts funds that would have been reinvested into the product and hurts users and the community.
Similar to our open source peers, we have lived this experience firsthand, from our trademarks being misused to outright attempts to splinter our community with “open” repackaging of our OSS products or even taking “inspiration” from our proprietary code. While each open source company has taken a slightly different approach to address this issue, they have generally modified their open source license in order to protect their investment in free software, while trying to preserve the principles of openness, transparency, and collaboration. Similarly, we are taking the natural next step of making a targeted change to how we license our source code. This change won't affect the vast majority of our users, but it will restrict cloud service providers from offering our software as a service.
We expect that a few of our competitors will attempt to spread all kinds of FUD around this change. Let me be clear to any naysayers. We believe deeply in the principles of free and open products, and of transparency with the community. Our track record speaks to this commitment, and we will continue to build upon it.
The change
Starting with the upcoming Elastic 7.11 release, we will be moving the Apache 2.0-licensed code of Elasticsearch and Kibana to be dual licensed under SSPL and the Elastic License, giving users the choice of which license to apply. SSPL is a source-available license created by MongoDB to embody the principles of open source while providing protection against public cloud providers offering open source products as a service without contributing back. The SSPL allows free and unrestricted use and modification, with the simple requirement that if you provide the product as a service to others, you must also publicly release any modifications as well as the source code of your management layers under SSPL.
We chose this path because it gives us the opportunity to be as open as possible, while protecting our community and company. In some ways, this change allows us to be even more open. As a follow-up to this change, we will begin moving our free proprietary features from the Elastic License to be dual-licensed under the SSPL as well, which is more permissive and better aligned with our goals of making our products as free and open as possible.
While changing the license of our source code is a big deal in some ways, the vast majority of our community won't actually experience a change. If you are a customer of ours, either in Elastic Cloud or on premises, nothing changes. And if you've been downloading and using our default distribution, it's still free and open under the same Elastic License. If you've been contributing to Elasticsearch or Kibana (thank you!), nothing changes for you either.
We will continue to develop our code in the open, engage with our community, and publish our releases for free under the Elastic License as we have done for the last three years. We remain committed to keeping all of our free features free — we are not making any changes to which features are free and which are available in a paid subscription.
Our belief in the importance of a unified community has never been stronger. This change sets us up to continue to demonstrate our commitment and earn your trust in the future as we have done over the last 10 years.
Resources:
FAQ on the license change
Forward-Looking Statements
This post contains forward-looking statements that involve substantial risks and uncertainties, which include, but are not limited to, statements concerning the licensing of the company’s code, the market opportunity for software as a service and open source server-side software, the benefits of open source innovation, the impact of the licensing model used by the company, our future investment in research and development, and our assessments of the strength of our solutions and products. These forward-looking statements are subject to the safe harbor provisions under the Private Securities Litigation Reform Act of 1995. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Although we believe that our plans, intentions, expectations, strategies and prospects as reflected in or suggested by those forward-looking statements are reasonable, we can give no assurance that the plans, intentions, expectations or strategies will be attained or achieved. Actual outcomes and results may differ materially from those contemplated by these forward-looking statements due to uncertainties, risks, and changes in circumstances, including but not limited to those related to: our ability to timely and successfully implement and achieve the benefits of the new dual licensing model; acceptance of the new licensing model by customers and our user community; our ability to continue to build and maintain credibility with the developer community; the effects of competing SaaS services; our ability to maintain, protect, enforce and enhance our intellectual property; the impact of the expansion and adoption of SaaS offerings on open source licensing models; and our beliefs and objectives for future operations. Additional risks and uncertainties that could cause actual outcomes and results to differ materially are included in our filings with the Securities and Exchange Commission (the “SEC”), including our Annual Report on Form 10-K for the fiscal year ended April 30, 2020 and any subsequent reports filed with the SEC. SEC filings are available on the Investor Relations section of Elastic’s website at ir.elastic.co and the SEC’s website at www.sec.gov. Elastic assumes no obligation to, and does not currently intend to, update any such forward-looking statements, except as required by law.

www.elastic.co/blog/licensing-change ...

Doubling down on open, Part II
Upcoming licensing changes to Elasticsearch and Kibana
techie
14d | Jan 13, 2021, 8:20:33 PM
Elastic Community Conference updates + CfP extended to January 22
/ng/elasticsearch

Wow! We're only two weeks into January, and we have already received 100 presentation submissions for ElasticCC — more than 20 in APJ (Asia-Pacific/Japan) and almost 40 in both EMEA (Europe/Middle East/Africa) and NASA (North/South America). That's an amazing start, and we still have room for some more. Since we know how hard it is to get started after the holidays, we’re extending the Call for Presentations (CfP) until Friday, January 22, at 23:59 UTC. So if you’ve been meaning to submit a talk but just haven't been able to find the time, now is your chance to join the fun: sessionize.com/elastic-community-confere ...
If this is your first time hearing about ElasticCC, here's a quick rundown. In short, it's a free technical conference from the community, for the community. It's open to developers, practitioners, customers, and partners — we look forward to talks from you all! The virtual sessions will kick off on the afternoon of Friday, February 26, in EMEA and will run through February 27 in APJ, making it a global event.
RSVP today to make sure you don't miss out on any of the great sessions.

As we said in our announcement post, we're reviewing and accepting talks as they roll in. This means we have already accepted some sessions and can give you a sneak peek into what's in store!
Safety first
For our security crowd, we have talks touching on a variety of topics. If your SOC is more a "party of 1" than a department, you may be interested in Solving small business security problems with Elastic. If you love open source as much as I do, you should check out Elastic + TheHive: An open source dream. And if you're interested in seeing how Elastic Security is being used by others, drop in for Network/asset modelling with Elastic Security (SIEM).
Viva la variety
One thing I love about the Elastic community is how many unique use cases are out there. From application cases like Boosting music search results based on popularity and user behaviour in Elasticsearch to critical cases like Dealing with system faults in a critical healthcare system, Elasticsearch can be tuned for anything. If you love learning new ways to get the most out of the Elastic Stack, you'll need to attend sessions like Effortless Kibana: From CSV to dashboard in less than 5 minutes! and Elasticsearch & GraphQL: A love letter. And finally, we have sessions showing how Elastic can be used to help you in your day-to-day (non-work) life, like Managing type 1 diabetes with Elastic machine learning features.
Elasticians in the house
Elasticians (Elastic employees) are community members just like you, and they have a lot to share, too. From the Elastic Support team, you can learn about Troubleshooting your Elasticsearch cluster like a support engineer — even though we hope you won’t need it. And our Engineering team will take you on a Deep dive into the new Elastic data stream naming scheme.
We all speak Elastic
The Elastic community is global, so it's important to us that ElasticCC reflects languages other than English. In Spanish, you can learn about Elasticsearch and Elastic Enterprise Search (Enterprise Search con Elasticsearch), or switch language and solution and dive into Elastic Observability (Introdução à observabilidade com o Elastic Stack) in Portuguese. If you prefer Chinese, check out Operations and tuning for a petabyte-level Elasticsearch cluster (PB级大规模Elasticsearch集群运维与调优实践) and An implementation of Elasticsearch containerization based on Kubernetes and Docker (基于kubernetes与Docker的Elasticsearch容器化编排部署的实践与应用). There's also Automating Elasticsearch operation using a shell script (Shell Script 로 elasticsearch 운영 자동화 하기) in Korean. And we almost forgot to mention pizza! There will also be a session about Indexing the 🍕 emoji in Elasticsearch to find pizzas (Indexer des Emoji 🍕 dans Elasticsearch pour trouver des pizzas) in French.
It's not a community event without swag
While we won’t have actual 🍕 at this community event, there will be special challenges to win stickers and swag. So be ready to put your Elastic skills to the test, and save a spot on your laptop for some new deco.
Elastic Community Conference is just around the corner, so here's one more quick reminder:

When: February 26 in EMEA through February 27 in APJ.
Where: community.elastic.co.
Why: Because community is at the heart of Elastic.
How: You can participate in two ways: either submit a session to be a speaker or RSVP to attend and learn.
See you there!

www.elastic.co/blog/elastic-community-co ...

Elastic Community Conference updates + CfP extended to January 22
The Elastic Community Conference (ElasticCC) call for presentations has been extended to January 22. If you're looking for presentation inspiration, read this post to learn about some of the accepted sessions.