<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Sourceprojects.org]]></title><description><![CDATA[Opensource for Java and More]]></description><link>http://blog.sourceprojects.org/</link><image><url>http://blog.sourceprojects.org/favicon.png</url><title>Sourceprojects.org</title><link>http://blog.sourceprojects.org/</link></image><generator>Ghost 5.70</generator><lastBuildDate>Mon, 20 Apr 2026 11:09:41 GMT</lastBuildDate><atom:link href="http://blog.sourceprojects.org/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA["A Series of Unfortunate Events" and moving on]]></title><description><![CDATA[Some wisdom we all have to accept from time to time: 
"Sometimes things just don't work out the way we want them to."]]></description><link>http://blog.sourceprojects.org/2018/05/14/a-series-of-unfortunate-events/</link><guid isPermaLink="false">5af92b3ff4ac603594d63b9f</guid><category><![CDATA[conference]]></category><category><![CDATA[tech talk]]></category><category><![CDATA[opensource]]></category><category><![CDATA[projects]]></category><category><![CDATA[advocate]]></category><category><![CDATA[jobsearch]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Mon, 14 May 2018 08:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Let me start this post with some wisdom we all have to accept from time to time: <em>Sometimes things just don&apos;t work out the way we want them to.</em></p>
<p>What follows is a little write-up of the last couple of months: what happened and what didn&apos;t. It might just be interesting to see what was going on, but I hope it helps a person or two as well.</p>
<p><em>TL;DR (since this blog post got way longer than expected): I&apos;m looking for some cool opportunities to help with ;-)</em></p>
<p>Almost 6 months ago I talked about a startup I was working on. A lot of time went into it, especially since I had already been working on it for more than a year at that point, albeit only in my spare time.</p>
<p>I wanted to fix a problem I knew from my own experience. When people asked me what it was, I mostly started with &quot;4 years of frustration built into hard- and software&quot;. Catchy, eh?</p>
<p>Going deeper into what it was and what it would be when finished, people could see the potential and how it was supposed to help them and their companies. It often concluded with &quot;when can we have it?&quot;. It was very encouraging and promising, and it kept me going. Unfortunately it also kept me in a bubble.</p>
<p>That said, I left Hazelcast at the end of 2017 and went ahead finalizing the last bits and pieces of the business plan. Even before leaving I was looking to raise some money. Some Angels were interested, and the rest of the money was supposed to be a short-term bank loan. The European Union has some great subsidized loans for company founders, or so it says.</p>
<p>Anyway, going into February 2018, we started to figure out it wasn&apos;t all that easy. Applying for the bank loan to get a fully working, almost final-looking prototype wasn&apos;t as easy as all the commercials about the startup culture in Germany promoted by cities (D&#xFC;sseldorf calls itself the Startup-City, www.startup-city.de) made us (me and my wife) think. Banks are supposed to help companies and especially tech startups, but guess what: hardware? Too risky.</p>
<p>Whatever, one option down, move on to the next. Unfortunately the general problem seems to be that finalizing and producing hardware is pretty expensive.</p>
<p>Thanks to SOMs (System on Modules) and SBCs (Single Board Computers) you can fairly easily put together a prototype that works nicely in just a few weeks; however, to build a final product, you need money. You need to create the PCB, and you need certifications like FCC (US), IC (Canada) and CE (Europe), to name just the most important ones. A bigger market means more certifications ahead.</p>
<p>Speaking of certifications, let&apos;s jump into what all of you might be interested in the most: what the heck did I want to build?</p>
<p>As most of you might know, I worked in developer relations for Hazelcast. That is the obvious stuff like blog posts, social media, the developer communities, conference talks, etc. At least for Hazelcast, however, it also meant conference booth work. For a marketing booth, success is basically leads: the number of people scanned and, hopefully, converted into paying customers in the longer term.</p>
<p>Conferences mostly offer badge scanners as an add-on to your booth, just as they offer chairs, tables and TVs. There is a trade-off though: most often these scanners have to be handed back to the conference, and you&apos;ll be emailed the scanned contacts with names and email addresses. If you&apos;re lucky you might also get information like company and phone number.</p>
<p>Apart from the obvious issue that you don&apos;t know upfront what kind of information you&apos;ll get, there&apos;s another issue: you have to wait for that information to arrive in your inbox. Statistics, on the other hand, tell us you should follow up with a booth visitor in less than 24 hours for the full potential. Ideally, if the person visited your booth on the first day, you follow up by the evening: offer some additional, meaningful material, propose to have a further look, and invite them to come by again the next day if there are any questions.</p>
<p>But wait, didn&apos;t I just say you have to hand the scanner back to the conference to get the leads? Yes I did; problem found.</p>
<p>In general, from my experience, leads coming from a conference are often lower quality than leads coming from your own website. That is one reason why conference leads are often measured by quantity over quality.</p>
<p>But shouldn&apos;t conference leads be higher quality? I mean you talk to the people. You invest time, money and power into those leads. Anyhow you often can&apos;t follow up quickly enough to keep those people excited. You have to realize that attendees see plenty of companies at conferences and, again from my own experience, it&apos;s hard to remember all their names. Great collateral therefore is really important! Keep yourself in their heads!</p>
<p>Anyway, we found the issue, but how do we fix it?<br>
Some companies started to use smartphone apps with the ability to scan badges or put together simple forms. Don&apos;t get me wrong, those apps are a great help, apart from when they&apos;re not. Those apps either store data offline temporarily and sync it to the cloud when internet (e.g. WiFi) is available, or, even worse, work offline only, which means you have to export the information into a CSV (or similar) file from every single smartphone you used. So the information is spread across all the devices used, and in general we should be faster than waiting for the end of the conference.</p>
<p>At that point we can at least follow up with people quickly enough, but looking at the new data privacy regulations coming to Europe, are we allowed to? None of the available apps takes care of people actively signing up to be mailed; it&apos;s just not a thing outside of the EU. That means most of those apps are illegal to use inside the EU, since data is not sufficiently encrypted when stored.</p>
<p>PS: Have you considered that after May 25th, EU-citizen leads coming from conferences might not be legal to email anymore, since they haven&apos;t given their active agreement to be mailed by you? Just my 2 cents though.</p>
<p>Anyway, back to the proposed product, which was supposed to address some more issues we haven&apos;t talked about yet. I&apos;m just naming a few solutions here, so guess the corresponding issues for yourself ;-)</p>
<p>In a quick round up, imagine a small hardware box:</p>
<ul>
<li>easy to use (for booth personnel)</li>
<li>easy to understand (for attendees)</li>
<li>reusing existing tablets or phones</li>
<li>offering full control over the information you request from attendees</li>
<li>storing private information in a GDPR compliant way</li>
<li>syncing to a cloud (if wanted) or</li>
<li>exporting information locally to a USB stick</li>
<li>providing features like fully electronic and lawfully stored raffles (everything&apos;s stored locally anyway)</li>
<li>etc.</li>
</ul>
<p>And the best thing: it&apos;s company property. <em>&quot;Buy once, use anywhere&quot;</em>, as Sun Microsystems might have put it. Obviously additional features were planned for the years to come, as well as third-party developer support.</p>
<p>But let&apos;s head back to the hardware development. As for the issue with certification: when companies buy a hardware device whose sole purpose is to be used at conferences around the world, you need plenty of certifications. Otherwise running the system would be illegal in some countries, especially when offering a WiFi signal to connect your tablets or phones.</p>
<p>That said, the major investment, apart from building the PCB, designing and prototyping the housing, the preproduction for a test run, and the final first production run itself, is certification. It&apos;s not in the millions, but it is in the hundreds of thousands of Euros.</p>
<p>In the US that&apos;s probably peanuts compared to common investments in big tech companies; in Europe, however, it seems like a pretty big deal. And eh, what happens if you can&apos;t sell the boxes? That, at least, is the common theme we&apos;ve seen. And forgive me for being blunt here: Germany is the main player in terms of conferences and trade fairs, we have had data privacy rules close to the GDPR for years, and overall we should be pretty interested in such a solution. But no.</p>
<p>Darn, this post is already way too long. Let me get to the conclusion after almost 6 months of trying.</p>
<ul>
<li>do not leave your employer before actually having a signed agreement with the bank or an Angel</li>
<li>if you do, have enough spare money to pay all your bills for the next few months (thankfully I did)</li>
<li>expect everything to fail miserably last minute (and I just scratched the surface of everything that went wrong)</li>
<li>figure out when to give up or delay</li>
</ul>
<p>Especially the last bit is important. It is hard, at least for someone from Germany, to accept that you&apos;ve failed and that it is time to step back. I&apos;m not saying giving up; in our case it is more like delaying the project. Life goes on, even though the &quot;failure&quot; will hang over your head for the year to come, at least in Germany. Not necessarily in your own eyes, but in those of others. There is no forgiving culture in Germany in that regard - not yet.</p>
<p>Apart from that I&apos;m still all set to fix this issue. If hardware design is too complicated or too expensive, maybe it&apos;ll be just the software. Maybe it&apos;ll end up being an open-source project, downloadable to a RaspberryPi or BeagleBone, who knows ;-)</p>
<p>I still think those issues are worth fixing!</p>
<p>All right, enough of the talk. As the TLDR already suggested, I&apos;m looking for new opportunities now. Sitting at home got boring and I really want to get out again. Don&apos;t get me wrong, sure it&apos;s unfortunate that it didn&apos;t work out, but that&apos;s what life is. Sometimes things just don&apos;t work out. You don&apos;t need to feel sorry for me. It was definitely worth a try :-)</p>
<p>I&apos;m happy to help with all things developer relations and tech marketing. I&apos;m also happy to help build communities, or to support an existing developer relations team as an additional external contractor.</p>
<p>If you&apos;re interested, feel free to contact me via email, Twitter (my DMs are always open) or any way you like.</p>
<p>To finish off, I hope reading all of this didn&apos;t bore you too much. If there are further questions about the experiences made, or about what I would do differently next time, or if you&apos;d just like some help preparing your own business plan or anything like that, let me know too. Happy to help whenever possible.</p>
<p>Sincerely,<br>
Chris</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[new Year, new Life, new Horizon]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>It&apos;s been a long time since I posted anything to my blog. The main thing is, this will change now, as I have a lot to talk about over the course of the next months. Anyway, let&apos;s get to business.</p>
<p>Sometimes you wake up in the morning and you</p>]]></description><link>http://blog.sourceprojects.org/2017/12/29/new-year-new-life-new-target-new-horizon/</link><guid isPermaLink="false">5a0300e9d4b6fe1256dc1ab0</guid><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Fri, 29 Dec 2017 05:00:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>It&apos;s been a long time since I posted anything to my blog. The main thing is, this will change now, as I have a lot to talk about over the course of the next months. Anyway, let&apos;s get to business.</p>
<p>Sometimes you wake up in the morning and you have the feeling that something special will happen. It doesn&apos;t happen a lot, but those days exist.</p>
<h2 id="thosedays">Those days</h2>
<p>For me, one of those days happened about 4.5 years ago. It was late in the evening already and I had a little chit-chat with a guy working on one of the projects we were using at that time. I talked about what I was working on, how we used the software, and that I had some ideas about what I wanted to do: mainly, implement a small wrapper based on the Apache project I was working on at the time. What happened next was a blast, the scariest and coolest thing that ever happened to me in all my life.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/11/Screen-Shot-2017-11-08-at-14.19.13.png" alt="Screen-Shot-2017-11-08-at-14.19.13" loading="lazy"></p>
<p>I must have had a pretty stupid look on my face, since my girlfriend (today my wife; thanks honey for still being with me &#x1F618;) looked at me and asked if everything was alright. I wasn&apos;t really able to talk; I just pointed at my monitor and the message. I was flabbergasted.</p>
<p>Looking back at the story, I think that was my face that evening.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/11/Affe.jpg" alt="Affe" loading="lazy"></p>
<p>Anyhow, after a bit more discussion and my first ever flight to Istanbul, we agreed on the terms and it was settled.</p>
<p>Since I&apos;m German, it was a pretty big deal going forward. People from the US (and other countries) might not totally understand, but in Germany continuity and a safe job are two of the biggest things to achieve in life. Having been a little rebel all my life, I still accepted. I remember so many people asking me &quot;are you sure?&quot; or &quot;what happens if it doesn&apos;t work out?&quot;. It was a startup, it was (from a German perspective) a little risky, but it had a bit of a thrill to it. I had to do it. Being paid for working on open source? A dream come true.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/11/HazelcastLogo-Blue_Dark_800px-1.png" alt="HazelcastLogo-Blue_Dark_800px-1" loading="lazy"></p>
<h2 id="itwasablast">It was a blast</h2>
<p>Well, time flew by so fast, and here we are, more than 4 years later. The time was amazing, it was a blast. I met so many awesome people and I learned so much stuff I had no idea about. Sure, sometimes we had to go the extra mile. Nobody would deny that startups can be stressful, but they&apos;re a really cool environment. For the first time in my life I had the feeling I could help shape a company.</p>
<p>So, what happened in those 4 years? I joined as one of the core engineers. I learned a lot about distributed systems, I learned a lot about the startup culture and about how open source can be transformed into a business.</p>
<p>Later on, I moved to a position between marketing and engineering. Something where a lot of people were like &quot;marketing, seriously?&quot;, but yeah, it is fun. I was traveling a lot, I gave conference talks, gave trainings, talked to customers. A whole new world, again full of bits to learn from.</p>
<p>For a bit more than a year I was the company&apos;s main developer advocate (let&apos;s not call it evangelist anymore) and I worked closely with marketing, not only on blogs, webinars or collateral but on all fronts.</p>
<h2 id="thestoryends">The story ends</h2>
<p>So why am I writing all of this? Well, first of all I want to tell a story, showing people that sometimes unexpected things just happen and sometimes you have to take the risky path. Before that, there was nothing worse than being stuck in a boring job and fighting yourself every single morning to get up.</p>
<p>Secondly, today is my last day at Hazelcast. The last years, as said, were amazing and I won&apos;t miss a single day. My colleagues are one of the best teams I&apos;ve ever had, and I hope the friendships are here to stay.</p>
<p>I wish all my colleagues ... no my friends at Hazelcast just the best. I love the product and I hope it&apos;ll be around for a long time!</p>
<p>I&apos;m also happy it&apos;s not a full goodbye for now, as I might show up at Hazelcast events or locations from time to time to help or just for fun &#x1F601;</p>
<h2 id="happyeverafter">Happy ever after</h2>
<p>The upcoming path is already chosen, and I hope it&apos;ll be a blast again. Some people say that if you ever worked in a startup, you can&apos;t go back to a normal company anymore. It&apos;s supposed to be some kind of a drug.</p>
<p>Actually, they might be right.</p>
<p>That said, in early 2018 I&apos;ll be around with my own, all new company. The company won&apos;t try to change the world with the next big microservices framework, serverless backend or In-Memory Data-Grid; it&apos;ll help companies solve issues in conference and relationship marketing.</p>
<p>I know, I know, seriously? See my face? I&apos;m all serious. It&apos;s basically 4 years of experience, hate and wt... built into a product (line).</p>
<p>Given that I talked to people over the course of the last year, some already know what&apos;s coming. For all others, there will be a bigger explanation in the very near future. 2018 will be the year to make conference marketing cool again.</p>
<p>Stay tuned.</p>
<h2 id="finalwords">Final words</h2>
<p>I can&apos;t thank Hazelcast enough for the opportunity I got in 2013, and I&apos;m happy for the support as I move on to my own company.</p>
<p>Thanks,</p>
<p>Chris</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The Hazelcast Incubator]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When <a href="http://www.hazelcast.com/?ref=blog.sourceprojects.org">Hazelcast</a> started as a pure open source project in 2008, there was one guy, Talip Ozturk, with an amazing vision: a simple, powerful, scalable Distributed Map.</p>
<p>Ever since, Hazelcast has grown more powerful and added more features and data structures. Today almost all Java Collections and a lot of the</p>]]></description><link>http://blog.sourceprojects.org/2015/02/27/the-hazelcast-incubator/</link><guid isPermaLink="false">599418ca6b066a0afbf48086</guid><category><![CDATA[hazelcast]]></category><category><![CDATA[java]]></category><category><![CDATA[opensource]]></category><category><![CDATA[projects]]></category><category><![CDATA[advocate]]></category><category><![CDATA[community]]></category><category><![CDATA[enhancement]]></category><category><![CDATA[github]]></category><category><![CDATA[process]]></category><category><![CDATA[proposal]]></category><category><![CDATA[public]]></category><category><![CDATA[road-map]]></category><category><![CDATA[source]]></category><category><![CDATA[voice]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Fri, 27 Feb 2015 11:40:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>When <a href="http://www.hazelcast.com/?ref=blog.sourceprojects.org">Hazelcast</a> started as a pure open source project in 2008, there was one guy, Talip Ozturk, with an amazing vision: a simple, powerful, scalable Distributed Map.</p>
<p>Ever since, Hazelcast has grown more powerful and added more features and data structures. Today almost all Java Collections and a lot of the Java Concurrency APIs are implemented in a transparently distributed manner. The community helped a lot along the way with feature requests, bug reports, pull requests and discussions to make Hazelcast what it is today.</p>
<p>We have always engaged people from the community to help form Hazelcast according to their visions and their needs. We gave our important community the voice it deserved.</p>
<p>Over the last year, a lot of growth happened at Hazelcast and we were busy with internal restructuring; unfortunately, the community support from Hazelcast&apos;s side suffered from that.</p>
<p>We&apos;ve seen our mistake and want to go back to the strong community binding, and even make it stronger than ever before.</p>
<p>Today I am thrilled to announce that I am taking over the <strong>Open Source Advocate</strong> role and starting our new project, the <strong>Hazelcast Incubator</strong>.</p>
<h3 id="opensourceadvocate">Open Source Advocate</h3>
<p>As the newly appointed Hazelcast Open Source Advocate, I am the voice of the community. I will represent the community, their needs, their visions and their ideas inside the company. I will give our community the necessary voice during road-map planning.</p>
<p>I will lead the Hazelcast Incubator process and help people express their ideas, encourage people to work together on community-driven features, and make sure that the communication between internal (employed) developers and our community is always active, constructive, productive and, last but not least, friendly.</p>
<p>For a long time now I have been a 24/7 open source guy. My GitHub profile shows my deep commitment to open source and especially to any kind of Apache-licensed software. All that said, to make it short: I am happy to take on that role and to make Hazelcast the community project it used to be again!</p>
<h3 id="hazelcastincubator">Hazelcast Incubator</h3>
<p>The <a href="https://github.com/hazelcast-incubator?ref=blog.sourceprojects.org">Hazelcast Incubator</a> is an incubation process for external Hazelcast extensions. Incubation projects are kicked off using so-called <a href="https://hazelcast.atlassian.net/wiki/display/COM/Hazelcast+Enhancement+Proposals?ref=blog.sourceprojects.org">Hazelcast Enhancement Proposals</a>.</p>
<p>The basic idea of the Hazelcast Enhancement Proposal (&quot;HEP&quot;) follows what OpenJDK has proven to work over the last years with their JEP (JDK Enhancement Proposal) process. From our perspective it doesn&apos;t make sense to try to come up with a new process if others have already found out how to solve those issues.</p>
<p>We were and are always super excited about external contributions. We want to ensure community changes and features get merged with minimum fuss for both sides, but we especially want to avoid declining a pull request that a lot of effort went into. It might not meet quality standards, might not be fully implemented (edge cases), or might not fit into the general Hazelcast vision - simple but powerful. Those situations are frustrating for both sides, Hazelcast and the original implementor. We don&apos;t want to decline merging it - as I said, we love contributions - and the implementor has wasted a lot of their free time.</p>
<p>To prevent those situations from happening in the future, a HEP is created, just like a JEP, and describes the main enhancement or feature. After acceptance of the proposal, in cooperation with the internal team that owns this part of the code-base, I will create a repository and a channel on Gitter, and add interested people on GitHub. All Gitter channels will be public by default, and I encourage everyone interested in a certain HEP to join the channels and start discussing. Teamwork is the most important feature that open source offers.</p>
<h3 id="hazelcastincubatorprocess">Hazelcast Incubator Process</h3>
<p>Over the next days I will add all necessary information to our public wiki page (Hazelcast Enhancement Proposals) and create a small micro-site listing a set of (maybe) interesting ideas the community might pick up, as well as an online form to submit your own proposals. Until then, if you want to let me know about your idea, just write to my mail address (<a href="mailto:chris@hazelcast.com">chris@hazelcast.com</a>) or hook me up on Twitter (@noctarius2k).</p>
<h3 id="hazelcastdiscoveryspihep">Hazelcast Discovery SPI HEP</h3>
<p>Last but not least, I also want to announce our first public HEP. <a href="https://hazelcast.atlassian.net/wiki/display/COM/HEP+2+-+Hazelcast+Discovery+SPI?ref=blog.sourceprojects.org">HEP 2 - Hazelcast Discovery SPI</a> will define a publicly available SPI to discover other members and clients in public or private clouds. There has been a huge need for such an SPI for a long time, and we want to attack this problem together with the community.</p>
<p>I also want to thank <a href="http://about.me/pires?ref=blog.sourceprojects.org">Paulo Pires</a> (<a href="https://twitter.com/el_ppires?ref=blog.sourceprojects.org">@el_ppires</a>) and <a href="http://about.me/saturnism?ref=blog.sourceprojects.org">Ray Tsang</a> (<a href="https://twitter.com/saturnism?ref=blog.sourceprojects.org">@saturnism</a>) for their amazing responses to our new community approach, for jumping in, and for helping us work out what is necessary and define the SPI.</p>
<p>I look forward to the future of Hazelcast Incubator and what all of us together can achieve.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[snowcast - Migration and Failover - Feature Complete]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When I started snowcast back at the end of 2014, I didn&apos;t think people would really be interested, but most of the time things work out differently from what you imagine. A still fairly small group of interested people showed up and I got a lot of nice</p>]]></description><link>http://blog.sourceprojects.org/2015/01/28/snowcast-migration-and-failover-feature-complete/</link><guid isPermaLink="false">599418116b066a0afbf48071</guid><category><![CDATA[apache2]]></category><category><![CDATA[distributed]]></category><category><![CDATA[hadoop]]></category><category><![CDATA[idgenerator]]></category><category><![CDATA[id generator]]></category><category><![CDATA[idgeneration]]></category><category><![CDATA[id generation]]></category><category><![CDATA[instagram]]></category><category><![CDATA[java]]></category><category><![CDATA[network]]></category><category><![CDATA[opensource]]></category><category><![CDATA[performance]]></category><category><![CDATA[projects]]></category><category><![CDATA[scaling]]></category><category><![CDATA[scalability]]></category><category><![CDATA[sequencer]]></category><category><![CDATA[snowflake]]></category><category><![CDATA[speed]]></category><category><![CDATA[twitter]]></category><category><![CDATA[uniqueness]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Wed, 28 Jan 2015 16:59:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>When I started snowcast back at the end of 2014, I didn&apos;t think people would really be interested, but most of the time things work out differently from what you imagine. A still fairly small group of interested people showed up and I got a lot of nice words and congratulations for the idea. It seems like it will take off over time.</p>
<h3 id="migrationandfailoversupport">Migration and Failover Support</h3>
<p>The last big piece of the puzzle, and a must-have feature from my side, was support for graceful failover on node failures and migration on topology changes. snowcast now supports backups and creates complete backup partitions on other nodes in the cluster. In case of node failures, those backups are used to recreate the data and stay in a consistent state. The number of backups is configurable and defaults to one. To find more information about the backup system and how to configure it, please read about backups in the <a href="https://github.com/noctarius/snowcast?ref=blog.sourceprojects.org#backups">README</a>.</p>
<p>The second thing, migration, handles partition movements when the cluster topology changes and the partition table layout is updated. After new nodes join or old ones leave, there is normally a migration going on to rebalance the system across all cluster nodes, and snowcast now takes part in this process! Please read more about it <a href="https://github.com/noctarius/snowcast?ref=blog.sourceprojects.org#migration-and-split-brain">here</a>.</p>
<p>A short word on split-brain situations:<br>
Those are still not handled gracefully. I have a few ideas on how to solve those problems when re-merging the clusters after the split-brain cause has been resolved, but it wouldn&apos;t give any guarantees for the duration of the split-brain. I&apos;m open to any kind of idea and suggestion on how to solve that. Split-brain is such a big problem that this alone would be enough of a change for a 2.0 version ;-).</p>
<h3 id="currentstatusfeaturecomplete">Current Status: Feature Complete</h3>
<p>On the other hand, I steadily worked on the missing features and I&apos;m glad to announce that snowcast looks <strong>FEATURE COMPLETE</strong> to me for a first GA release. I will add a lot more code comments and Javadocs to the public API classes and interfaces, as well as improve the documentation. To make the documentation as clear as possible, I ask everyone to copy-edit it and either send pull requests on GitHub or create issues.</p>
<p>I plan two release candidates before the final GA release; therefore I added a continuous build server, and snapshots are deployed to Sonatype&apos;s snapshot repository.</p>
<h3 id="mavencoordinates">Maven Coordinates</h3>
<p>As most of you know, Maven repositories became a de-facto standard over the last years. Next to Maven, most build systems for Java build on top of the Maven repository design, like SBT, Gradle, Ivy and many more; therefore I decided to deploy the snowcast artifacts to a Maven repository as well, and eventually the RC and GA releases will go into Maven Central for convenience.<br>
The Maven coordinates for the artifacts are available <a href="https://github.com/noctarius/snowcast?ref=blog.sourceprojects.org#maven-coordinates">here</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[snowcast - Hazelcast Client and the snowcast logo]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h3 id="snowcast">snowcast</h3>
<p>In December I started a new project called snowcast. Arising from a need in one of my own private projects, I decided to open-source this part of the work.</p>
<p>snowcast is an auto-configuration, distributed, scalable ID generator on top of Hazelcast. Since snowcast is not an official Hazelcast</p>]]></description><link>http://blog.sourceprojects.org/2015/01/07/snowcast-hazelcast-client-and-the-snowcast-logo/</link><guid isPermaLink="false">599417636b066a0afbf4805c</guid><category><![CDATA[apache2]]></category><category><![CDATA[distributed]]></category><category><![CDATA[hazelcast]]></category><category><![CDATA[idgenerator]]></category><category><![CDATA[id generator]]></category><category><![CDATA[idgeneration]]></category><category><![CDATA[id generation]]></category><category><![CDATA[instagram]]></category><category><![CDATA[java]]></category><category><![CDATA[network]]></category><category><![CDATA[opensource]]></category><category><![CDATA[performance]]></category><category><![CDATA[projects]]></category><category><![CDATA[scaling]]></category><category><![CDATA[scalability]]></category><category><![CDATA[sequencer]]></category><category><![CDATA[snowflake]]></category><category><![CDATA[speed]]></category><category><![CDATA[twitter]]></category><category><![CDATA[uniqueness]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Wed, 07 Jan 2015 16:54:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="snowcast">snowcast</h3>
<p>In December I started a new project called snowcast. Arising from a need in one of my own private projects, I decided to open-source this part of the work.</p>
<p>snowcast is an auto-configuration, distributed, scalable ID generator on top of Hazelcast. Since snowcast is not an official Hazelcast project, Hazelcast will not offer any kind of commercial support for it; it is one of my private spare-time projects!</p>
<h3 id="thelogo">The Logo</h3>
<p>To begin with, snowcast now has its official logo. I&apos;m not a graphic artist, so don&apos;t expect too much from it ;-)</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/snowcast_name.png" alt="snowcast_name" loading="lazy"></p>
<h3 id="hazelcastclientsupport">Hazelcast Client Support</h3>
<p>In addition, around Christmas I added Hazelcast client support to the snowcast core. It is therefore now possible to use snowcast on the Hazelcast client side and still run with almost no network interaction. The behavior, as implemented, exactly matches that of a Hazelcast node.</p>
<p>Please find the quick documentation on how to use it on the client side in the README section on <a href="https://github.com/noctarius/snowcast?ref=blog.sourceprojects.org#hazelcast-clients">github</a>.</p>
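<p>For illustration, here is a minimal sketch of what client-side usage could look like, assuming the client-side factory mirrors the member-side API from the original introduction post (epoch setup and error handling omitted; <code>HazelcastClient</code> is the standard Hazelcast client factory):</p>
<pre><code>// Hedged sketch: acquire a sequencer from a Hazelcast client instance
HazelcastInstance client = HazelcastClient.newHazelcastClient();
Snowcast snowcast = SnowcastSystem.snowcast( client );
SnowcastSequencer sequencer = snowcast
    .createSequencer( &quot;sequencerName&quot;, epoch );

// ID generation stays an (almost) network-free, local operation
long nextId = sequencer.next();
</code></pre>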
<h3 id="whatsnext">What&apos;s next?</h3>
<p>snowcast is still missing handling of migration scenarios and graceful recovery from split-brain situations. I&apos;m not at the point of requiring that in my own project, but I will eventually get there. If anybody is eager to get this started, feel free to contact me or send a pull request on GitHub. I&apos;m also looking for interesting, maybe missing features, so feel free to open issues and feature requests too.</p>
<p>That said, I still consider snowcast experimental, but I want to finish it in the next months and I would love to see people trying to use it, filing bug reports, or just giving happy or unhappy feedback. This is the only way it can grow :-)</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[snowcast - like christmas in the distributed Hazelcast world]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><strong>snowcast</strong> is an auto-configuration, distributed, scalable ID generator on top of <a href="http://www.hazelcast.org/?ref=blog.sourceprojects.org">Hazelcast</a>. Since snowcast is not an official Hazelcast project, Hazelcast will not offer any kind of commercial support for it, it is one of my private spare time projects!</p>
<h3 id="whythisproject">Why this project?</h3>
<p>While working on a side project I</p>]]></description><link>http://blog.sourceprojects.org/2014/12/15/snowcast-like-christmas-in-the-distributed-hazelcast-world/</link><guid isPermaLink="false">59940b086b066a0afbf47fbb</guid><category><![CDATA[apache2]]></category><category><![CDATA[distributed]]></category><category><![CDATA[hazelcast]]></category><category><![CDATA[id generation]]></category><category><![CDATA[id generator]]></category><category><![CDATA[idgeneration]]></category><category><![CDATA[idgenerator]]></category><category><![CDATA[instagram]]></category><category><![CDATA[java]]></category><category><![CDATA[network]]></category><category><![CDATA[opensource]]></category><category><![CDATA[performance]]></category><category><![CDATA[projects]]></category><category><![CDATA[scalability]]></category><category><![CDATA[scaling]]></category><category><![CDATA[sequencer]]></category><category><![CDATA[snowflake]]></category><category><![CDATA[speed]]></category><category><![CDATA[twitter]]></category><category><![CDATA[uniqueness]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Mon, 15 Dec 2014 09:41:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><strong>snowcast</strong> is an auto-configuration, distributed, scalable ID generator on top of <a href="http://www.hazelcast.org/?ref=blog.sourceprojects.org">Hazelcast</a>. Since snowcast is not an official Hazelcast project, Hazelcast will not offer any kind of commercial support for it, it is one of my private spare time projects!</p>
<h3 id="whythisproject">Why this project?</h3>
<p>While working on a side project I came across the need for a scalable ID generator. A couple of possible solutions were available, but none of them was as fast as I wanted. Still, I found a few interesting ideas, one of which will be shown in this blog post. As mentioned above, this is just my private project. I give support on a best-effort basis, and I love to help and share the code. All source code is available under the <a href="http://www.apache.org/licenses/LICENSE-2.0?ref=blog.sourceprojects.org">Apache License 2</a>, same as almost all of my open source projects.</p>
<h3 id="theproblem">The Problem</h3>
<p>In distributed systems, generating unique IDs is a problem. Either the calculation is expensive, network traffic is involved, or there is a chance of creating unexpected conflicting IDs. Especially the last problem is commonly only recognized when storing data to a relational database. The application would have to recognize this error and handle it gracefully.</p>
<h3 id="commonsolutions">Common Solutions</h3>
<p>A common practice for distributed, unique ID generation is to set up an ID generator service. All cluster members connect to that service when they need a new ID. The problem with this approach is scalability, or performance under high load. This may not be a problem for web applications, but it is for low-latency, high-creation-rate systems such as game services or high-frequency trading.</p>
<p>Another common problem is that distributed ID generators often pre-acquire big batches of IDs from a central registry to minimize the network traffic involved. This prevents generated IDs from being sorted by &quot;creation time&quot;, which means items using those IDs can&apos;t be ordered by their creation time, since the IDs are not consistently increasing.</p>
<p>A third practice is using UUIDs, which is also not optimal. UUIDs are not guaranteed to be unique, but are in 99.99% of all cases. In addition, there are multiple ways to generate UUIDs, with collisions more or less likely to happen. The latest specification requires native OS calls to gather the MAC address and make it part of the UUID, which is kind of costly.</p>
<h3 id="sowhatsnow">So What Now?</h3>
<p>The goal is now to find an approach that solves the above-mentioned problems:</p>
<ul>
<li>guaranteed uniqueness</li>
<li>low network interaction</li>
<li>order guarantee (natural ordering)</li>
<li>low latency, high rate generation</li>
</ul>
<p>An approach was offered by Twitter (<a href="https://github.com/twitter/snowflake?ref=blog.sourceprojects.org">Snowflake</a>), which sadly seems to be discontinued. Still, there are other implementations available, also in other languages and for other operating systems (such as <a href="http://instagram-engineering.tumblr.com/post/10853187575/sharding-ids-at-instagram?ref=blog.sourceprojects.org">Instagram&apos;s</a>, which doesn&apos;t seem to be open-sourced).</p>
<h3 id="thesolution">The Solution</h3>
<p>The following extremely scalable approach is to generate IDs based on 64 bits (it would be possible to use 128 bits as well) and split those bits into multiple chunks. The most common approach found in the wild seems to use 3 parts.</p>
<pre><code>| 41 bits: timestamp offset          | 13 bits: node ID | 10 bits: counter |
</code></pre>
<p>The first 41 bits store an offset in milliseconds from a custom-defined epoch. Using 41 bits offers us about 69 years of milliseconds before we run out of new IDs. This should probably be enough for most systems.</p>
<p>The next 13 bits store a unique logical cluster node ID, which must be unique at any given point in time. Nodes are not required to retrieve the same cluster node ID over and over again, but the ID must be unique at runtime. 13 bits offer us 8,192 (2^13) unique cluster node IDs.</p>
<p>The last 10 bits store the auto-incrementing counter part. This counter increases only within a single millisecond, guaranteeing that the order of generated IDs <em>almost</em> complies with the natural ordering requirement. Using 10 bits enables us to generate up to 1,024 (2^10) IDs per millisecond per logical cluster node.</p>
<p>The number of bits in the last two parts can be changed (e.g. fewer logical cluster nodes but more IDs per node). Either way, this enables us to generate 8,388,608 (2^23) guaranteed unique IDs per millisecond.</p>
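<p>As a quick sanity check of the numbers above, here is the plain arithmetic in Java (just the bit budget, not snowcast code):</p>
<pre><code>long millisRange = 1L &lt;&lt; 41;          // 2,199,023,255,552 ms of timestamp space
double years = millisRange / 1000.0
    / 3600 / 24 / 365.25;             // ~69.7 years from the custom epoch
long nodes = 1L &lt;&lt; 13;                // 8,192 logical cluster node IDs
long idsPerNodeMs = 1L &lt;&lt; 10;         // 1,024 IDs per node per millisecond
long idsPerMs = nodes * idsPerNodeMs; // 8,388,608 (2^23) IDs per millisecond
</code></pre>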
<h3 id="pseudoimplementation">Pseudo Implementation</h3>
<p>To set this up, we need to define a custom epoch that the milliseconds start at. As an example, imagine we start our epoch on January 1st, 2014 (GMT) and want to generate an ID on March 3rd, 2014 at 5:12:12.</p>
<p>To generate our IDs we need to configure the custom epoch as:</p>
<pre><code>EPOCH_OFFSET = 1388534400000 (2014-01-01--00:00:00)
</code></pre>
<p>In addition every cluster node is required to know its own logical cluster node ID:</p>
<pre><code>LOGICAL_NODE_ID = 1234 (Unique Logical Cluster Node Id)
</code></pre>
<p>Knowing the ID bit offsets, generating a new ID is now pretty straightforward:</p>
<pre><code>currentTimeStamp = 1393823532000 (2014-03-03--05:12:12)
 
epochDelta = currentTimeStamp - EPOCH_OFFSET =&gt; 5289132000
 
id = epochDelta &lt;&lt; (64 - 41)
id |= LOGICAL_NODE_ID &lt;&lt; (64 - 41 - 13)
id |= casInc(counter [0, 1024])
</code></pre>
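<p>Translated into Java, the pseudo implementation could look like the following sketch. Note that <code>composeId</code> is a hypothetical helper for illustration only, not snowcast&apos;s actual internal code:</p>
<pre><code>private static final long EPOCH_OFFSET = 1388534400000L; // 2014-01-01 (GMT)

// Hypothetical helper: composes a 64 bit ID from the three parts
static long composeId( long timestampMillis, long logicalNodeId, long counter ) {
    long epochDelta = timestampMillis - EPOCH_OFFSET;
    long id = epochDelta &lt;&lt; (64 - 41);      // 41 bits of millisecond offset
    id |= logicalNodeId &lt;&lt; (64 - 41 - 13);  // 13 bits of logical node ID
    id |= counter;                          // 10 bits of per-ms counter [0, 1024)
    return id;
}
</code></pre>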
<h3 id="whysnowcast">Why snowcast?</h3>
<p>As you might already have guessed, snowcast is a wordplay based on Snowflake and Hazelcast.</p>
<p>Hazelcast is a distributed cluster environment offering partitioned in-memory speed. It is the perfect backing system to build snowcast on top of. Using Hazelcast, snowcast offers auto-configuration of logical cluster node IDs and fast startup times.</p>
<p>Snowflake was the base of this implementation idea, so I love reflecting that in the name and giving credit to the amazing guys at Twitter!</p>
<h3 id="usageofsnowcast">Usage of snowcast</h3>
<p>To use snowcast you obviously need a running Hazelcast cluster. snowcast can then easily be integrated into the cluster nodes.</p>
<p>In snowcast, the ID generators are called <code>SnowcastSequencer</code>. Those <code>com.noctarius.snowcast.SnowcastSequencer</code>s are created based on a few simple configuration properties that can be passed into the factory function. <code>SnowcastSequencer</code>s, like all Hazelcast structures, are referenced by a name. This name is bound to the configuration when the sequencer is first acquired. The configuration cannot be changed without destroying and recreating the <code>SnowcastSequencer</code>.</p>
<p>To retrieve a <code>SnowcastSequencer</code> we first have to create a snowcast instance which acts as a factory to create or destroy sequencers.</p>
<pre><code>HazelcastInstance hz = getHazelcastInstance();
Snowcast snowcast = SnowcastSystem.snowcast( hz );
</code></pre>
<p>In addition to our <code>com.noctarius.snowcast.Snowcast</code> factory, a custom epoch must be created to define the offset from the standard Unix timestamp. The <code>com.noctarius.snowcast.SnowcastEpoch</code> class offers a couple of factory methods to create an epoch from different time sources.</p>
<pre><code>Calendar calendar = GregorianCalendar.getInstance();
// Calendar months are zero-based: Calendar.JANUARY == 0
calendar.set( 2014, Calendar.JANUARY, 1, 0, 0, 0 );
SnowcastEpoch epoch = SnowcastEpoch.byCalendar( calendar );
</code></pre>
<p>Preparations are now done. Creating a <code>SnowcastSequencer</code> using the <code>Snowcast</code> factory instance and the epoch, together with a reference name, is as easy as the following snippet:</p>
<pre><code>SnowcastSequencer sequencer = snowcast
    .createSequencer( &quot;sequencerName&quot;, epoch );
</code></pre>
<p>That&apos;s it, that is the sequencer. It is immediately available to be used to create IDs.</p>
<p>Every call to the <code>Snowcast::createSequencer</code> method must pass in the same configuration on every node! A call with a different configuration will result in a <code>SnowcastSequencerAlreadyRegisteredException</code> being thrown.</p>
<pre><code>long nextId = sequencer.next();
</code></pre>
<p>The <code>SnowcastSequencer::next</code> operation will return as soon as an ID is available. Depending on how many IDs can be generated per millisecond (how to configure the number of generatable IDs will be shown later in this blog post), the operation will usually return immediately with the new ID; if the number of IDs for this millisecond (and node) is exceeded, the method blocks until it can retrieve the next ID. All ID generation is a local-only operation; no network interaction is required!</p>
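<p>As a small usage sketch of that behavior (illustration only, exception handling omitted): generating a batch of IDs in a tight loop simply blocks for the remainder of the current millisecond whenever the per-millisecond budget is exhausted:</p>
<pre><code>long[] ids = new long[2048];
for (int i = 0; i &lt; ids.length; i++) {
    // Blocks briefly if more than the configured number of IDs
    // is requested within the same millisecond on this node
    ids[i] = sequencer.next();
}
</code></pre>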
<p>This is basically it; the last step is to eventually destroy sequencers (or shut down the cluster ;-)). To destroy a <code>SnowcastSequencer</code>, the following snippet is enough.</p>
<pre><code>snowcast.destroySequencer( sequencer );
</code></pre>
<p>Destroying a sequencer is a cluster operation and will destroy all sequencers referred to by the same name on all nodes. After that point, the existing <code>SnowcastSequencer</code> instances are in a destroyed state and cannot be used anymore. The different sequencer states will be discussed in a bit.</p>
<h3 id="multithreading">Multithreading</h3>
<p><code>SnowcastSequencer</code>s and <code>Snowcast</code> factories are threadsafe by design. They are meant to be used by multiple threads concurrently. Sequencers are guaranteed to never generate the same ID twice. Creating and destroying a sequencer is also threadsafe, and a destroyed sequencer cannot be used anymore.</p>
<h3 id="sequencerstates">Sequencer States</h3>
<p>Retrieved sequencers can be in three different states. Those states describe whether it is possible to generate IDs at a given point in time.</p>
<p>Possible states are:</p>
<ol>
<li><code>Attached</code>: A <code>SnowcastSequencer</code> in the state <code>Attached</code> has a logical node id assigned and can be used to generate IDs.</li>
<li><code>Detached</code>: A <code>SnowcastSequencer</code> in the state <code>Detached</code> is not destroyed but cannot be used to generate IDs since there is no logical node id assigned.</li>
<li><code>Destroyed</code>: A <code>SnowcastSequencer</code> in the state <code>Destroyed</code> does not have a legal configuration anymore. This instance can never ever be used again to generate IDs. A sequencer with the same referral name might be created again at that point.</li>
</ol>
<p>By default, right after creation of a <code>SnowcastSequencer</code>, the state is <code>Attached</code>. In this state IDs can be generated by calling <code>SnowcastSequencer::next</code>.</p>
<p>During the lifetime of the sequencer, the state can change back and forth between <code>Attached</code> and <code>Detached</code> an unlimited number of times. This might be interesting if fewer logical node ids are configured than nodes exist. Nodes can detach themselves whenever there is no need to generate IDs at a given time. Attaching and detaching are single round-trip remote operations to the owning node of the sequencer.</p>
<p>To detach a sequencer the <code>SnowcastSequencer::detachLogicalNode</code> method is called. This call blocks until the owning node of the sequencer has unregistered the logical node id from the calling node. At this point no new IDs can be generated. A call to <code>SnowcastSequencer::next</code> will throw a <code>SnowcastStateException</code> to indicate that the sequencer is in the wrong state.</p>
<p>To re-attach a sequencer a call to the <code>SnowcastSequencer::attachLogicalNode</code> method will perform the necessary assignment operation for a logical node id. Most likely this will not be the same logical node id as previously assigned to the node! After the call returns, IDs can be generated again.</p>
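<p>A minimal sketch of the detach/re-attach cycle described above (illustration only, exception handling omitted):</p>
<pre><code>// Release the logical node ID while no IDs are needed
sequencer.detachLogicalNode();

// sequencer.next() would now throw a SnowcastStateException

// Re-attach: assigns a (most likely different) logical node ID
sequencer.attachLogicalNode();
long nextId = sequencer.next(); // works again
</code></pre>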
<p>Any call to <code>Snowcast::destroySequencer</code> will immediately destroy the given sequencer locally and remotely. The sequencer cannot be used to generate IDs afterwards. It can also not be re-attached anymore!</p>
<h3 id="numberofnodes">Number of Nodes</h3>
<p>The number of possible nodes defaults to 2^13 (8,192). This means, as described earlier, that 2^10 (1,024) IDs can be generated per millisecond per node. The overall number of IDs per millisecond is 2^23 (8,388,608) and cannot be changed, but it is possible to change the number of IDs per node by decreasing the bits used for the logical node ids.</p>
<p>The number of nodes can be set per <code>SnowcastSequencer</code> and will, after creation, be part of the provisioned sequencer configuration. It cannot be changed without destroying and recreating the sequencer. The node count can be set to any power of two between 128 and 8,192. Any non-power-of-two count will be rounded up to the next power of two. The smaller the number of nodes, the bigger the number of IDs per node.</p>
<p>To configure the number of nodes just pass in an additional parameter while creating the sequencer.</p>
<pre><code>SnowcastSequencer sequencer = snowcast
    .createSequencer( &quot;sequencerName&quot;, epoch, 128 );
</code></pre>
<p>This way, only 7 bits are used for the logical node ID and the rest can be used to generate IDs, giving a range of 65,536 possible IDs per millisecond per node.</p>
<h3 id="conclusion">Conclusion</h3>
<p>As seen above it is possible to generate unique IDs with almost no network actions at all.</p>
<p>The framework is not yet fully production-ready, as it does not handle migration and does not offer Hazelcast client support yet, but you can find the project on my GitHub account (<a href="https://github.com/noctarius/snowcast?ref=blog.sourceprojects.org">snowcast github repository</a>), and it will land in the official Maven repositories once both missing features are implemented. If there is anything that would be amazing to have implemented, just let me know, create a feature request or send a pull request! :-)</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[nginx - Stateless Loadbalancer - Balance your load on nginx without proxying the request]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Today I want to show how you can use <a href="http://nginx.org/?ref=blog.sourceprojects.org">nginx</a> to build your own stateless loadbalancer which just redirects your requests to random servers. It does neither support sticky sessions nor does it proxy your request. It will redirect (HTTP 302) your original request to the random location.</p>
<p>This is</p>]]></description><link>http://blog.sourceprojects.org/2014/04/10/nginx-stateless-loadbalancer/</link><guid isPermaLink="false">599408986b066a0afbf47f9c</guid><category><![CDATA[hazelcast]]></category><category><![CDATA[java]]></category><category><![CDATA[balancer]]></category><category><![CDATA[cdn]]></category><category><![CDATA[client]]></category><category><![CDATA[splits]]></category><category><![CDATA[configuration]]></category><category><![CDATA[content]]></category><category><![CDATA[curl]]></category><category><![CDATA[delivery]]></category><category><![CDATA[load]]></category><category><![CDATA[loadbalancer]]></category><category><![CDATA[network]]></category><category><![CDATA[nginx]]></category><category><![CDATA[perl]]></category><category><![CDATA[stateless]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Thu, 10 Apr 2014 19:20:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Today I want to show how you can use <a href="http://nginx.org/?ref=blog.sourceprojects.org">nginx</a> to build your own stateless loadbalancer which just redirects your requests to random servers. It neither supports sticky sessions nor proxies your request. It will redirect (HTTP 302) your original request to a random location.</p>
<p>This is practical for a lot of different use cases where you either just want to distribute static content, as you would otherwise do with Content Delivery Networks (CDNs), or where you have a cluster of servers that is capable of clustering your HTTP sessions (for example using <a href="http://www.hazelcast.com/?ref=blog.sourceprojects.org">Hazelcast</a>). It also works amazingly well for PHP (or node.js) based mini games that always read data from the database.</p>
<p>nginx is a small non-blocking, event-driven webserver which handles 10k+ connections with no problem. The built-in loadbalancing option proxies the request through nginx to the backend endpoint. If you have dynamic servers or you need sticky sessions, that might be what you want. If you have just static content, or your backend servers can handle stateless sessions, you might not want to proxy the requests, since all connections would need to be held open until the backend request is processed.</p>
<p>We will create a small Perl script that supports a simple healthcheck (it connects to the remote socket) and selects a random server on request. If the randomly selected server is not available, we retry up to 10 times. Eventually we either have found an active server or we return HTTP status 503 to indicate that the service is currently not available.<br>
Also, this example is based on Ubuntu, but installation on other Linux derivatives should be quite similar and differ only in the installation of nginx.</p>
<p>So how do we set it up?</p>
<p><strong>1. Install nginx stable on Ubuntu:</strong><br>
The nginx versions in Ubuntu are sometimes a bit old or do not support all required functionality, so we&apos;re going to set up the PPA for nginx first, as described <a href="http://wiki.nginx.org/Install?ref=blog.sourceprojects.org#Ubuntu_PPA">here</a>.</p>
<pre><code>$ sudo add-apt-repository ppa:nginx/stable
$ sudo apt-get update
$ sudo apt-get install nginx-extras libio-socket-ssl-perl curl
</code></pre>
<p><strong>2. Create the Perl load balancer script:</strong></p>
<pre><code>$ sudo nano /etc/nginx/loadbalancer.pm
</code></pre>
<p>Let&apos;s start with the header of the Perl file.</p>
<pre><code>package loadbalancer;
 
use nginx;
use IO::Socket;
 
## Available servers
%servers = (
  &quot;cdn1.example.com&quot; =&gt; 0,
  &quot;cdn2.example.com&quot; =&gt; 0
);
</code></pre>
<p>We set a package name and import some external libs; additionally, we set up the servers we want to load balance across. In the given example we set up 2 servers, but you can add as many as you want. The initial 0 means &apos;server is unavailable&apos;, but this will change automatically if the server is reachable by the later healthcheck. The full script can be downloaded here: <a href="https://u17040722.dl.dropboxusercontent.com/u/17040722/blog/loadbalancer.pm?ref=blog.sourceprojects.org">loadbalancer.pm</a></p>
<pre><code>## Request load balancer
sub load_balance {
  # Initialize retry counter
  my $retry = 0;
 
  while($retry &lt; 10) {
    # Get a random number
    my $rand = int(rand(1000000));
     
    # Get keys from the map
    my @keys = keys(%servers);
 
    # Calculate index based on
    # random number and selected server
    my $index = $rand % scalar(@keys);
    my $selected_server = $keys[$index];
 
    # If server is activated by healthcheck
    # we can return it to nginx
    my $active = $servers{$selected_server};
    if ($active) {
      return &quot;http://&quot;.$selected_server;
    }
 
    # Retry with another one
    $retry++;
  }
  # No server seems available
  return &quot;No Server Available&quot;;
}
</code></pre>
<p>This subroutine is used to select a server from the previously created map. We create a random value and use the modulo operator to compute an index into the array of keys. If the selected server is not available, we just retry a few times. If you want to, you could add some sleep time and maybe update the %servers variable using the healthcheck; it should be no problem to add this.</p>
<pre><code>## Connects to the given servers one by one
## and checks availability
sub healthcheck {
  # Update variable
  my %update;
   
  # Loop through servers
  foreach $server (keys %servers) {
    $key = $server;
   
    # Select port, defaults to 80
    my $port = 80;
    if (index($server, &apos;:&apos;) != -1) {
      my @tokens = split(/:/, $server);
      $server = $tokens[0];
      $port = int($tokens[1]);
    }
 
    # Connect to server
    my $socket = IO::Socket::INET-&gt;new(
        PeerAddr =&gt; $server,
        PeerPort =&gt; $port,
        Timeout =&gt; 5
    );
     
    # Is server connectable
    if (defined $socket) {
      $update{$key} = 1;
      $socket-&gt;close();
    } else {
      $update{$key} = 0;
    }
  }
   
  # Update servers variable with availabilities
  %servers = %update;
  return OK;
}
</code></pre>
<p>This subroutine implements the healthcheck itself. It iterates through all entries of the servers map and tries to connect to the given address. The server&apos;s address can either be the host itself (then port 80 is assumed) or a full hostname:port pair for a specific port number. Since the socket is only connected and no data is read, this works for HTTP and HTTPS.<br>
Now finalize the file so that nginx is happy at startup; the trailing call immediately executes a first healthcheck whenever the module is (re)loaded.</p>
<pre><code># Initiate immediate health check on server start
healthcheck();
 
1;
__END__
</code></pre>
<p><strong>3. Configure nginx:</strong></p>
<p>We open the default site configuration and configure our location endpoints.</p>
<pre><code>$ sudo nano /etc/nginx/sites-enabled/default
</code></pre>
<p>Delete the complete content and paste in the configuration below:</p>
<pre><code>perl_require /etc/nginx/loadbalancer.pm;
perl_set $redirectSite loadbalancer::load_balance;

server {
  listen   80 default;
  server_name  cdn.example.com;

  access_log  /var/log/nginx/localhost.access.log;

  location / {
    if ($redirectSite = &quot;No Server Available&quot;) {
      return 503;
    }

    rewrite ^(.*)$ $redirectSite$1? redirect;
  }

  location /healthcheck {
    allow 127.0.0.1;
    deny all;
    default_type text/plain;
    perl loadbalancer::healthcheck;
    echo Thanks;
  }
}
</code></pre>
<p>The first two lines embed our <a href="https://u17040722.dl.dropboxusercontent.com/u/17040722/blog/loadbalancer.pm?ref=blog.sourceprojects.org">loadbalancer.pm</a> Perl module into the configuration and execute the load_balance subroutine for every request. The result is written to the $redirectSite variable for later use.<br>
We configure our default site to listen on port 80 for whatever domain is configured for this host and set a hostname for the HTTP headers. Additionally we set up an access log (in the expected location) and create the default location &quot;/&quot;.<br>
The $redirectSite variable is then tested for the string that tells us no server is available; if that is the case we return an HTTP 503 status to the user. Otherwise we use the URL inside the variable for our HTTP 302 redirect.</p>
<p>The location &quot;/healthcheck&quot; is reachable only from localhost and executes the healthcheck subroutine. We will set up a cronjob in a second to call it periodically.</p>
<pre><code>$ sudo service nginx restart
</code></pre>
<p><strong>4. Test server and configure cronjob:</strong></p>
<p>First we&apos;ll test whether our load balancer works as expected; therefore we request a healthcheck and a URL redirect.</p>
<pre><code>$ curl http://127.0.0.1/healthcheck
Thanks
$ curl http://127.0.0.1
&lt;html&gt;
&lt;head&gt;&lt;title&gt;302 Found&lt;/title&gt;&lt;/head&gt;
&lt;body bgcolor=&quot;white&quot;&gt;
&lt;center&gt;&lt;h1&gt;302 Found&lt;/h1&gt;&lt;/center&gt;
&lt;hr&gt;&lt;center&gt;nginx/1.4.1&lt;/center&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>If something similar to the above happens you&apos;re fine, otherwise start again at point 1 ;-)<br>
Now we configure a cronjob that periodically calls the healthcheck so we stay up to date on server problems. We use a one-minute interval; if you need to detect a lost server faster, use a shorter interval with another scheduler, since one minute is the smallest period cron supports.</p>
<pre><code>$ sudo crontab -e
* * * * * curl http://127.0.0.1/healthcheck
</code></pre>
<p>Exit the editor and save the new configuration; it will be installed automatically. For security reasons you might want to run the healthcheck job under a user other than root. Make sure your healthcheck works as expected.</p>
<pre><code>$ tail -f /var/log/nginx/localhost.access.log
</code></pre>
<p>If a request to the healthcheck comes in on a regular basis, everything is fine and you can test your load balancing in a real browser. The more servers you add as possible backends, the better your load balancing will be. Besides the version shown here, which selects a server randomly, you could also use an ever-incrementing counter to get round-robin-like behavior (sketched below). The basic script should make it easy to add more balancing strategies, and I would be happy to see some additions in the comments below.</p>
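<p>In case you want to try the round-robin variant mentioned above, the selection logic boils down to the following sketch (written in Java only to illustrate the idea; the class and method names are made up, and the Perl port just replaces the rand() call with an incrementing counter):</p>
<pre><code>import java.util.concurrent.atomic.AtomicInteger;

class RoundRobinSelector {
  private final AtomicInteger counter = new AtomicInteger();

  // An ever-incrementing counter modulo the server count picks
  // the next backend; floorMod keeps the index positive even
  // after the int counter overflows.
  String select(String[] servers) {
    int index = Math.floorMod(counter.getAndIncrement(), servers.length);
    return servers[index];
  }
}
</code></pre>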
<p>PS: I&apos;m not a Perl programmer, so I&apos;m almost sure the code above can be written more simply or prettified. I&apos;m open for suggestions :)</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Hazelcast MapReduce on GPU (APRIL'S FOOL!)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h2 id="aprilsfool">APRIL&apos;S FOOL:</h2>
<p>Sorry that I have to admit it was just an April&apos;s Fool. The interesting fact btw is that when I first came up with the idea it sounded like totally implausible but while writing I realized &quot;hey that should actually be possible&quot;</p>]]></description><link>http://blog.sourceprojects.org/2014/04/01/hazelcast-mapreduce-on-gpu-aprils-fool/</link><guid isPermaLink="false">599406216b066a0afbf47f90</guid><category><![CDATA[java]]></category><category><![CDATA[aparapi]]></category><category><![CDATA[aprilsfool]]></category><category><![CDATA[gpu]]></category><category><![CDATA[hazelcast]]></category><category><![CDATA[mapreduce]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Tue, 01 Apr 2014 05:40:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="aprilsfool">APRIL&apos;S FOOL:</h2>
<p>Sorry that I have to admit it was just an April&apos;s Fool. The interesting fact btw is that when I first came up with the idea it sounded totally implausible, but while writing I realized &quot;hey, that should actually be possible&quot;. Maybe not yet for a map-reduce framework, but definitely to back up a distributed fractal calculation, which is what I will look into next. If somebody wants to team up on this I&apos;m fully open for requests. Just don&apos;t hesitate to contact me.<br>
While this was a prank at the current time, I&apos;m really looking forward to eventually bringing distributed environments like Hazelcast to the GPU - at least when Java 9 features <a href="http://openjdk.java.net/projects/sumatra/?ref=blog.sourceprojects.org">OpenJDK Project Sumatra</a>. And a big thanks to the guys from AMD and Rootbeer who started this whole movement!</p>
<p>Over the last decade RAM and CPUs have kept getting faster, but some calculations can still be done faster on GPUs due to the nature of their data. At Hazelcast we try to make distributed calculations easy and fast for everybody.</p>
<p>Having some spare time, I came up with the idea of moving data calculations to the GPU to massively scale them out, and since I created the map-reduce framework for the new Hazelcast 3.2 version it was just a matter of time to make it work with a GPU.</p>
<p><strong>Disclaimer:</strong> Before you read on I want to make sure that you understand that this is neither an official Hazelcast project nor is it likely to be part of the core in the near future - but as always you may expect the unexpected!</p>
<p>So as mentioned above I was playing around with Aparapi. Aparapi is a Java binding developed by AMD that transforms Java bytecode into OpenCL code (<a href="https://code.google.com/p/aparapi?ref=blog.sourceprojects.org">Aparapi</a>). There are similar projects going on, so eventually we might see this working on all JVMs natively (<a href="http://openjdk.java.net/projects/sumatra/?ref=blog.sourceprojects.org">OpenJDK Project Sumatra</a>, <a href="https://github.com/pcpratts/rootbeer1?ref=blog.sourceprojects.org">Rootbeer</a>), but currently Aparapi seems to be the only one that worked for me.</p>
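<p>To give an idea of what Aparapi code looks like before we dive into the setup: the classic &quot;squares&quot; example is just an anonymous Kernel subclass (a minimal sketch against the com.amd.aparapi API; class name made up):</p>
<pre><code class="language-java">import com.amd.aparapi.Kernel;

public class SquaresExample {
  public static void main(String[] args) {
    final int size = 1024;
    final float[] input = new float[size];
    final float[] output = new float[size];
    for (int i = 0; i &lt; size; i++) {
      input[i] = i;
    }

    Kernel kernel = new Kernel() {
      @Override
      public void run() {
        // Executed once per work item;
        // getGlobalId() is this item&apos;s index
        int gid = getGlobalId();
        output[gid] = input[gid] * input[gid];
      }
    };

    // Translates run()&apos;s bytecode to OpenCL and executes it,
    // falling back to a Java thread pool if no device is found
    kernel.execute(size);
    kernel.dispose();
  }
}
</code></pre>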
<p>Before you can begin you have to make sure that your BIOS / mainboard supports <a href="http://en.wikipedia.org/wiki/IOMMU?ref=blog.sourceprojects.org">IOMMU</a>, which offers CPU and GPU the possibility to access the same memory space. In addition you have to install multiple drivers and libraries. I made some basic screenshots to step quickly through it.</p>
<p>The screenshots do not show the real process because I made them using a VirtualBox Mint VM. The original installation wasn&apos;t captured in screenshots :-(<br>
For exact / updated installation steps please read the official setup manual: <a href="https://code.google.com/p/aparapi/wiki/SettingUpLinuxHSAMachineForAparapi?ref=blog.sourceprojects.org">SettingUpLinuxHSAMachineForAparapi</a>.<br>
Another important thing is that this seems to work only on Ubuntu (and derivatives) at the moment, especially because you need a custom kernel.</p>
<p>So let&apos;s quickly run through the installation steps just to make clear how it works in general:</p>
<p>The first step is to configure your BIOS / UEFI to enable IOMMU on the operating system side. To see how this works please consult your mainboard manual (or Google that shit ;-)).<br>
If this is done you can go on installing the required drivers, kernels and libraries.</p>
<p><strong>1. Install HSA enabled kernel + HSA driver:</strong></p>
<pre><code>$ cd ~ # I put all of this in my home dir
$ sudo apt-get install git
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/install-git.png" alt="install-git" loading="lazy"></p>
<pre><code>$ cd ~ # I put all of this in my home dir
$ git clone https://github.com/HSAFoundation/\
  Linux-HSA-Drivers-And-Images-AMD.git
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/clone-hsa.png" alt="clone-hsa" loading="lazy"></p>
<pre><code>$ cd ~ # I put all of this in my home dir
$ curl -L https://github.com/HSAFoundation/\
  Linux-HSA-Drivers-And-Images-AMD/archive/\
  master.zip &gt; drivers.zip
$ unzip drivers.zip
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/download-kernel.png" alt="download-kernel" loading="lazy"></p>
<pre><code>$ cd ~/Linux-HSA-Drivers-And-Images-AMD
$ echo  &quot;KERNEL==\&quot;kfd\&quot;, MODE=\&quot;0666\&quot;&quot; |\
  sudo tee /etc/udev/rules.d/kfd.rules 
$ sudo dpkg -i ubuntu13.10-based-alpha1/\
  linux-image-3.13.0-kfd+_3.13.0-kfd+-2_amd64.deb
$ sudo cp ~/Linux-HSA-Drivers-And-Images-AMD/\
  ubuntu13.10-based-alpha1/xorg.conf /etc/X11
$ sudo reboot
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/install-kernel.png" alt="install-kernel" loading="lazy"></p>
<p><strong>2. Install OKRA Runtime:</strong></p>
<pre><code>$ cd ~ # I put all of this in my home dir
$ git clone https://github.com/HSAFoundation/\
  Okra-Interface-to-HSA-Device.git
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/clone-okra.png" alt="clone-okra" loading="lazy"></p>
<pre><code>$ cd ~ # I put all of this in my home dir
$ curl -L https://github.com/HSAFoundation/\
  Okra-Interface-to-HSA-Device/archive/\
  master.zip &gt; okra.zip
$ unzip okra.zip
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/unzip-okra.png" alt="unzip-okra" loading="lazy"></p>
<pre><code>$ cd ~/Okra-Interface-to-HSA-Device/okra/samples/
$ sh runSquares.sh
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/run-script.png" alt="run-script" loading="lazy"></p>
<p>The last step should be successful on your machine if you want to try Aparapi on your own; unsurprisingly it failed on the VirtualBox VM :-)</p>
<p><strong>3. Install OpenCL drivers:</strong></p>
<p>Go to <a href="http://developer.amd.com/tools-and-sdks/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/downloads?ref=blog.sourceprojects.org">http://developer.amd.com/tools-and-sdks/heterogeneous-computing/amd-accelerated-parallel-processing-app-sdk/downloads</a> and download the AMD-APP-SDK; I chose the same version as in the setup documentation, 2.9.</p>
<pre><code>$ cd ~ 
$ gunzip ~/Downloads/AMD-APP-SDK-v2.9-lnx64.tgz
$ tar xvf ~/Downloads/AMD-APP-SDK-v2.9-lnx64.tar
$ rm ~/default-install_lnx_64.pl ~/icd-registration.tgz\
  ~/Install-AMD-APP.sh ~/ReadMe.txt
$ gunzip ~/AMD-APP-SDK-v2.9-RC-lnx64.tgz
$ tar xvf ~/AMD-APP-SDK-v2.9-RC-lnx64.tar
$ rm ~/AMD-APP-SDK-v2.9-RC-lnx64.tar
$ rm -rf AMD-APP-SDK-v2.9-RC-lnx64/samples
</code></pre>
<p><strong>4. Install Aparapi and build the JNI libs:</strong></p>
<pre><code>sudo apt-get install ant g++ subversion
svn checkout https://aparapi.googlecode.com/\
  svn/branches/lambda aparapi-lambda
cd ~/aparapi-lambda
. env.sh
ant
</code></pre>
<p>And that should be it - now you should be able to run the examples.<br>
Having installed Aparapi, we can now have a look at Aparapi-enabled map-reduce and what the performance looks like.<br>
Due to the limited set of supported datatypes the following example code looks kind of weird. Neither Strings nor char-arrays are supported at the moment, so we use some hacks to map the String to a char-array first and then the char-array to an int-array, and the same way back after the calculation.</p>
<p>The example uses the well-known word-count map-reduce &quot;hello world&quot; and is pretty much the same as in the Hazelcast map-reduce documentation, so I&apos;ll skip over how the Mapper, Combiner and Reducer look (a reference sketch follows below). I also guess most map-reduce users can guess the general part :-)</p>
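<p>For readers without the documentation at hand, the ordinary (CPU-based, String-typed) word-count Mapper against the regular Hazelcast 3.2 map-reduce API looks roughly like this sketch (from memory, not the exact documentation code; the Aparapi variant below works on the int[]-mapped data instead):</p>
<pre><code class="language-java">import com.hazelcast.mapreduce.Context;
import com.hazelcast.mapreduce.Mapper;

public class PlainWordCountMapper
    implements Mapper&lt;String, String, String, Long&gt; {

  @Override
  public void map(String key, String document,
      Context&lt;String, Long&gt; context) {
    // Emit a 1 for every token; the combiner/reducer
    // sums these up per word
    for (String token : document.toLowerCase().split(&quot;\\s+&quot;)) {
      context.emit(token, 1L);
    }
  }
}
</code></pre>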
<p>So let&apos;s have a look at the execution source:</p>
<pre><code class="language-java">public class HazelcastAparapi {
  public static void main(String[] args) {
    HazelcastInstance hz = Hazelcast.newHazelcastInstance();
 
    IMap&lt;String, String&gt; docs = hz.getMap(&quot;docs&quot;);
    docs.putAll(readDocuments());
 
    // Create a special JobTracker for use of Aparapi
    JobTracker tracker = new AparapiJobTracker(hz);
 
    // Special KeyValueSource to make access from GPU
    // trough IOMMU possible.
    // The int-array in reality is a char-array but
    // char-arrays are not yet supported by Aparapi
    KeyValueSource&lt;int[], Long&gt; source =
        new AparapiKeyValueSource(docs);
 
    // We have to work around the problem, that the
    // GPU is only able to access similarly sized 
    // data value so we use a fixed sized int-array
    Job&lt;int[], Long&gt; job = tracker.newJob(source);
 
    // Now we define the map-reduce job as normally
    // but we do not submit it since we have to pass
    // it to the kernel
    job.mapper(new WordCountMapper())
        .combiner(new WordCountCombinerFactory())
        .reducer(new WordCountReducerFactory());
 
    // Initialize the Aparapi Kernel
    Kernel kernel = new MapReduceAparapiKernel(job);
 
    try {
      // Fire up the execution
      kernel.execute(HUGE);
 
      // Retrieve the results
      AparapiJob aj = (AparapiJob) job;
      Map&lt;int[], Long&gt; result = aj.getResult();
 
      // Remap the int-array (char-array) to strings
      Map&lt;String, Long&gt; values = mapToStrings(result);
 
      // Show the results
      Set&lt;...&gt; entrySet = values.entrySet();
      for (Map.Entry&lt;String, Long&gt; entry : entrySet) {
        System.out.println(entry.getKey()
            + &quot; was found &quot; + entry.getValue()
            + &quot; times.&quot;);
      }
 
    } finally {
      // Shutdown the Aparapi kernel
      kernel.dispose();
    }
  }
}
</code></pre>
<p>As mentioned before, an int-array hack is necessary to make this code work. On the other hand we need to use a special JobTracker which is only slightly different from the original but returns a non-distributed Job instance (sorry, not yet working ;-)). In addition we have to instantiate an Aparapi kernel subclass which handles the offload to the GPU as well as transforming the Java bytecode into OpenCL for us.</p>
<pre><code class="language-java">public class MapReduceAparapiKernel
    extends Kernel {
  // ... left out code
 
  public void run() {
    // Test execution for GPU
    EXECUTION_MODE em = getExecutionMode();
    if (!em.equals(Kernel.EXECUTION_MODE.GPU)) {
      throw new IllegalStateException(
          &quot;GPU execution not possible&quot;);
    }
 
    AparapiJob job = (AparapiJob) getJob();
    job.execute(this);
  }
}
</code></pre>
<p>The actual codebase is not very clean and looks more like a hack, but I hope to opensource it soon. Additionally its use cases aren&apos;t very extensive yet since there are many things that need to be done, and more hacks are likely to be added (char[] as int[]). Anyway, I made a few quick performance tests comparing the original example from the documentation against the example above. The runtime only includes the map-reduce operation itself and does not contain the time to map the strings to int[] and back, so the time measuring might be a little unfair :-)<br>
In addition, even the default Hazelcast example only runs on a single node at the moment, since the Aparapi version cannot yet run in a distributed environment (sadly I don&apos;t have enough server machines at home to test that).</p>
<p>So what does the performance look like?</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/performance-aparapi.png" alt="performance-aparapi" loading="lazy"></p>
<p>We see that the basic startup time for Aparapi seems a bit higher than for the default Hazelcast implementation, but it evens out over the whole runtime. At the moment this is only a little bit faster, but the code is mostly hacked together and I&apos;m not sure how far Aparapi is optimized or whether Sumatra will bring better performance in the end.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Jetty + Spdy = Awesome!]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This weekend I testwise activated Spdy for HTTPS connections.</p>
<p>So if you access the page you should see a SPDY connection, and needed resources should automatically be pushed to your browser.</p>
<p>If you want to know more about Spdy, HTTP 2.0 and how it can speed up the</p>]]></description><link>http://blog.sourceprojects.org/2013/11/05/jetty-spdy-awesome/</link><guid isPermaLink="false">5994168e6b066a0afbf48022</guid><category><![CDATA[apache]]></category><category><![CDATA[eclipse]]></category><category><![CDATA[google]]></category><category><![CDATA[http]]></category><category><![CDATA[http/2.0]]></category><category><![CDATA[http2]]></category><category><![CDATA[jetty]]></category><category><![CDATA[servlet]]></category><category><![CDATA[servlet container]]></category><category><![CDATA[spdy]]></category><category><![CDATA[tomcat]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Tue, 05 Nov 2013 09:23:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This weekend I testwise activated Spdy for HTTPS connections.</p>
<p>So if you access the page you should see a SPDY connection, and needed resources should automatically be pushed to your browser.</p>
<p>If you want to know more about SPDY, HTTP 2.0 and how they can speed up the internet, just have a look at the following resources:</p>
<p><a href="http://www.infoq.com/presentations/SPDY?ref=blog.sourceprojects.org">http://www.infoq.com/presentations/SPDY</a><br>
<a href="http://zoompf.com/blog/2013/04/maximizing-spdy-and-ssl-performance?ref=blog.sourceprojects.org">http://zoompf.com/blog/2013/04/maximizing-spdy-and-ssl-performance</a></p>
<p>To activate SPDY I switched from Apache Tomcat to Eclipse Jetty as the hosting servlet container. So thanks to the guys from the Jetty project - this is awesome!</p>
<p>I have been using Jetty for years as the main embedded HTTP container, either with or without Servlet support, but this is really the first time I used it as a standalone servlet container - it just worked!</p>
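<p>If you never used Jetty embedded, this is roughly all it takes (a minimal sketch against the Jetty 9 API; class name made up):</p>
<pre><code class="language-java">import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class EmbeddedJetty {
  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);
    server.setHandler(new AbstractHandler() {
      @Override
      public void handle(String target, Request baseRequest,
          HttpServletRequest request, HttpServletResponse response)
          throws IOException {
        // Answer every request with a plain-text greeting
        response.setContentType(&quot;text/plain&quot;);
        response.getWriter().println(&quot;Hello from embedded Jetty&quot;);
        baseRequest.setHandled(true);
      }
    });
    server.start();
    server.join();
  }
}
</code></pre>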
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Installing Ubuntu 13.10 on a MacBook Air 6.2]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Hey there,</p>
<p>Today I will show you how to install Ubuntu 13.10 on a Macbook Air 6.2 (the 2013 edition).</p>
<p>Don&apos;t be affraid the post seems longer than it will take time. I made a lot of screenshots since there are a few traps you need</p>]]></description><link>http://blog.sourceprojects.org/2013/11/01/installing-ubuntu-13-10-on-a-macbook-air-6-2/</link><guid isPermaLink="false">5994137b6b066a0afbf48009</guid><category><![CDATA[macbook air]]></category><category><![CDATA[air]]></category><category><![CDATA[apple]]></category><category><![CDATA[efi]]></category><category><![CDATA[boot]]></category><category><![CDATA[installation]]></category><category><![CDATA[ubuntu]]></category><category><![CDATA[mac]]></category><category><![CDATA[macbook]]></category><category><![CDATA[linux]]></category><category><![CDATA[refind]]></category><category><![CDATA[refit]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Fri, 01 Nov 2013 19:34:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Hey there,</p>
<p>Today I will show you how to install Ubuntu 13.10 on a Macbook Air 6.2 (the 2013 edition).</p>
<p>Don&apos;t be afraid, the post looks longer than the installation will actually take. I made a lot of screenshots since there are a few traps you need to be aware of!</p>
<p><strong>Important to note: I&apos;m not responsible for any problem or damage this guide may cause on your Macbook. You are highly recommended to back up all of your data before moving on so that you can restore everything in the unlikely event of data loss or partition damage!</strong></p>
<p><em>Another note upfront: Ubuntu (at the moment) will not work installed in BIOS mode. The kernel just hangs after installation while activating the CPU cores. You can make it boot using the nosmp kernel parameter, but I wanted a clean install, so let&apos;s use EFI mode.</em></p>
<p>So let&apos;s start with downloading Ubuntu and while it is downloading we will prepare our Macbook Air :-)</p>
<p>I use a separate Windows computer for preparing the USB stick, but MacOS X and Linux are fine as well, just do the corresponding operations! Btw, sorry for the German screenshots but I guess you&apos;ll find the appropriate buttons :-)</p>
<p>So go to www.ubuntu.com/download/desktop and download the desktop image. Important to note here: download the 64bit edition, since the 32bit edition will not boot in EFI mode (as far as I know - I have 8GB RAM so I don&apos;t care about 32bit anyway).</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/download-ubuntu.jpg" alt="download-ubuntu" loading="lazy"></p>
<p>While Ubuntu image is downloading move on with preparing the Macbook.</p>
<p>For Ubuntu to be selectable and able to start we need to install a boot manager. Whereas over the last years rEFIt was the boot manager of choice, its development was stopped and the recommended one now is the forked project rEFInd. We can download it from www.rodsbooks.com/refind/getting.html.</p>
<p><strong>Please do not forget to donate if you like it :-)</strong></p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/refind-download1.png" alt="refind-download1" loading="lazy"></p>
<p>When the rEFInd download is finished double click the ZIP file to unpack it next to it.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/refind-unzip.png" alt="refind-unzip" loading="lazy"></p>
<p>Now open a terminal window and change directory to your new rEFInd dir using:</p>
<pre><code>cd Downloads/refind-bin-0.7.4
</code></pre>
<p>To finally install refind to your partition enter:</p>
<pre><code>./install.sh
</code></pre>
<p>You will be asked for your password to allow the installer to write to the EFI boot partition; enter it and wait for the installer to complete. If everything is fine you should see output similar to the following, ending with the message that rEFInd was successfully installed:</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/refind-install.png" alt="refind-install" loading="lazy"></p>
<p>After the installation of rEFInd we should make sure that it will find our USB stick as an available boot option for EFI. To do that, open <code>/EFI/refind/refind.conf</code> in a text editor (I prefer Sublime Text), search for the line starting with &quot;scanfor&quot;, change it as shown in the screenshot, and save the file:<br>
<code>scanfor internal,external,optical,manual</code></p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/refind-config.png" alt="refind-config" loading="lazy"></p>
<p>The next step is to prepare our harddisk to have some space for Ubuntu. For this I prefer to use the <code>Disk Utility</code>. You can find it in <code>Applications/Utilities</code>. Open it up and resize your <code>Macintosh HD</code> partition to a size you like. Since I have a 250GB SSD I&apos;ll make the Mac partition 150GB, giving me 100GB of free space for Ubuntu. To write the information to disk and let the Disk Utility resize your partition just hit <code>Apply</code>.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/macos-resize-partition.png" alt="macos-resize-partition" loading="lazy"></p>
<p>Ok, back to our Ubuntu download, which is hopefully ready by the time the Disk Utility finishes the resizing.<br>
We now need to unpack the Ubuntu ISO file. I prefer to use 7-Zip (www.7-zip.org), but you can use any program able to unpack ISO files.</p>
<p>Now we prepare our USB stick to be bootable from the rEFInd boot manager. To do so we format it using FAT32. My USB stick has 8GB capacity, but any USB stick with at least 1GB should be fine.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/format-usb-stick.jpg" alt="format-usb-stick" loading="lazy"></p>
<p>Next we need to get the extracted files onto our USB stick, which is nothing more than copy&amp;paste. Every programmer is capable of that ;-)</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/copy-content.jpg" alt="copy-content" loading="lazy"></p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/paste-content.jpg" alt="paste-content" loading="lazy"></p>
<p>The copy operation can take some time, as we all know, so let&apos;s wait a few moments and maybe get a new Coke :-)</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/copy-content-wait.jpg" alt="copy-content-wait" loading="lazy"></p>
<p>When all files are copied, the last thing we need to change for Ubuntu to start from the USB stick is a small edit in the GRUB configuration file (the Linux bootloader).</p>
<p>Open the <code>boot/grub/grub.cfg</code> file on the USB stick using your favorite text editor (e.g. Notepad++) and change the following lines to look like this, as seen in the screenshot:</p>
<pre><code>menuentry &quot;Try Ubuntu without installing&quot; {
	set root=(hd0,msdos1)
	set gfxpayload=keep
	linux  /casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash locale=de_DE bootkbd=de console-setup/layoutcode=de --
	initrd /casper/initrd.lz
}
</code></pre>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/grub-config.jpg" alt="grub-config" loading="lazy"></p>
<p>Store the changes and safely remove the USB stick from your computer. Put it into the Macbook Air - it&apos;s time to restart :-)</p>
<p>When arriving at the new boot manager menu, select <code>EFI/boot/grubx64.efi</code> and press <code>Enter</code> (ignore the first two options from the screenshot; they do not exist yet for your installation :-)).</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/IMG_2543.JPG" alt="IMG_2543" loading="lazy"></p>
<p>If not yet selected just select <code>Try Ubuntu without installing</code> and press <code>Enter</code>. Ubuntu will start as normal but is already running in EFI mode.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/IMG_2548.JPG" alt="IMG_2548" loading="lazy"></p>
<p>We see Ubuntu start up successfully and show a desktop, just as expected.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/ubuntu-after-bootup.png" alt="ubuntu-after-bootup" loading="lazy"></p>
<p>Sometimes (it seems that the last firmware update broke it) the installer is not able to find the wireless LAN card (this does not happen after installation), so just plug in a wired LAN adapter and reboot the installer.</p>
<p>If a network connection exists you are ready to install Ubuntu by double clicking the <code>Install Ubuntu</code> icon. The installer will show up.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/ubuntu-everythings-fine.png" alt="ubuntu-everythings-fine" loading="lazy"></p>
<p>When the installer asks you on how to install Ubuntu be aware to choose <code>Something else</code>.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/ubunut-something-else.png" alt="ubunut-something-else" loading="lazy"></p>
<p>You are now able to add partitions for Ubuntu in the free space that you previously created using the Disk Utility on MacOS X. I prefer to use logical partitions but you&apos;re free to add them as primary ones.<br>
First create an ext4-formatted partition by selecting the free space at the end of the disk, clicking the + button and selecting / (the root directory) as the mountpoint. Since I have 100GB of free space I make my root partition 98GB, leaving 2GB for swap.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/ubuntu-ext4.png" alt="ubuntu-ext4" loading="lazy"></p>
<p>Create the new partition with a click on OK, then create another partition by selecting the free space at the end and clicking +. This time select <code>Use as: swap area</code> and click OK.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/ubuntu-swap.png" alt="ubuntu-swap" loading="lazy"></p>
<p>Before we proceed with the installer, be sure to set the <code>installation point for the bootloader</code> to your Ubuntu root partition and check that the first partition is recognized as <code>efi</code>, as shown in the screenshot.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/ubuntu-efi-bootloader.png" alt="ubuntu-efi-bootloader" loading="lazy"></p>
<p>Now the hard part is done and you can proceed with the installation as normal. Depending on your selection in the first dialog the installer may download additional dependencies or language packs.</p>
<p>After the installation is finished the installer wants to restart your computer. This may fail, but it isn&apos;t a problem at all - in this case just force a reboot by holding the power key for a few seconds.</p>
<p>Eventually you should see the Ubuntu entry in the rEFInd boot manager as seen below, and you are able to start Ubuntu.</p>
<p><img src="http://blog.sourceprojects.org/content/images/2017/08/IMG_2553.JPG" alt="IMG_2553" loading="lazy"></p>
<p>If everything is fine:<br>
<strong>CONGRATULATIONS</strong></p>
<p><strong>You&apos;ve successfully installed Ubuntu on your Macbook Air :-)</strong></p>
<p><strong>Update  2014-03-06:</strong><br>
<a href="https://help.ubuntu.com/community/MacBookAir6-2/Saucy?ref=blog.sourceprojects.org">https://help.ubuntu.com/community/MacBookAir6-2/Saucy</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Writing a Hazelcast / CastMapR MapReduce Task in Java]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Hazelcast is a distributed In-Memory-Datagrid written in Java. In addition to the internal features like EntryProcessors and queries you can write MapReduce tasks using the CastMapR projects which adds MapReduce capabilities on top of Hazelcast 3.x.</p>
<p>To make it comparable to other MapReduce frameworks we will try to reimplement</p>]]></description><link>http://blog.sourceprojects.org/2013/08/25/writing-a-hazelcast-castmapr-mapreduce-task-in-java/</link><guid isPermaLink="false">599412be6b066a0afbf48001</guid><category><![CDATA[castmapr]]></category><category><![CDATA[hazelcast]]></category><category><![CDATA[datagrid]]></category><category><![CDATA[distributed]]></category><category><![CDATA[hadoop]]></category><category><![CDATA[in-memory]]></category><category><![CDATA[mapreduce]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Sun, 25 Aug 2013 15:44:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Hazelcast is a distributed In-Memory-Datagrid written in Java. In addition to the internal features like EntryProcessors and queries you can write MapReduce tasks using the CastMapR projects which adds MapReduce capabilities on top of Hazelcast 3.x.</p>
<p>To make it comparable to other MapReduce frameworks we will try to reimplement the business case from [1], but instead of using the CSV files directly we first load the data into our Hazelcast In-Memory-Datagrid - storing it inside a com.hazelcast.core.MultiMap, which is very similar to Google Guava&apos;s Multimap [2].</p>
<p>For now we start with creating our project. Using Maven this is very easy: you just have to include two dependencies into the Maven project.</p>
<pre><code>&lt;dependency&gt;
    &lt;groupid&gt;com.hazelcast&lt;/groupid&gt;
    &lt;artifactid&gt;hazelcast&lt;/artifactid&gt;
    &lt;version&gt;3.0&lt;/version&gt;
&lt;/dependency&gt;
&lt;dependency&gt;
    &lt;groupid&gt;com.noctarius.castmapr&lt;/groupid&gt;
    &lt;artifactid&gt;castmapr&lt;/artifactid&gt;
    &lt;version&gt;1.0.0&lt;/version&gt;
&lt;/dependency&gt;
</code></pre>
<p>Now we have all the dependencies we need to start Hazelcast nodes and build our MapReduce tasks in Java.<br>
The full sources can be found here [3], so we will leave out everything that is not needed to fulfill the MapReduce task.</p>
<p>So let&apos;s move on with the Mapper implementation. We want to search for all translations containing the searchTerm, which in this case is an English word. Here&apos;s the Mapper implementation which will be distributed around the cluster.</p>
<pre><code>public class DictionaryMapper extends
  Mapper&lt;..&gt; implements DataSerializable {
 
  private String searchTerm;
 
  public DictionaryMapper() {
  }
 
  public DictionaryMapper(String searchTerm) {
    if (searchTerm == null)
      throw new NullPointerException(
        &quot;searchTerm must not be null&quot;);
    this.searchTerm = searchTerm.toLowerCase();
  }
 
  @Override
  public void map(String key, DictionaryEntry value,
      Collector&lt;..&gt; collector) {
 
    if (key == null)
      return;
    if (key.toLowerCase().contains(this.searchTerm)) {
      collector.emit(key, value.getValue());
    }
  }
 
  @Override
  public void writeData(ObjectDataOutput out)
      throws IOException {
 
    out.writeUTF(searchTerm);
  }
 
  @Override
  public void readData(ObjectDataInput in)
      throws IOException {
 
    searchTerm = in.readUTF();
  }
}
</code></pre>
<p>The Mapper is initialized with the search term we&apos;re looking for, and the map method checks whether searchTerm is contained in the current entry&apos;s key; if so, we emit the value under that key, which means one search term can match several keys.<br>
Next we need a Reducer which reduces all found translations into one string (just for convenience).</p>
<pre><code>public class DictionaryReducer implements Reducer&lt;..&gt; {
 
  @Override
  public String reduce(String key, Iterator&lt;..&gt; values) {
    StringBuilder sb = new StringBuilder();
    while (values.hasNext()) {
      String value = values.next();
      sb.append(value).append(&quot;|&quot;);
    }
    String result = sb.toString();
    return result.substring(0, result.length() - 1);
  }
}
</code></pre>
<p>The last thing we need to do is to build a MapReduceTask and eventually retrieve the results.</p>
<pre><code>MapReduceTaskFactory factory = MapReduceTaskFactory
  .newInstance(hz);
MapReduceTask&lt;..&gt; task = factory.build(dictionary);
 
Map&lt;..&gt; result = task
  .mapper(new DictionaryMapper(searchTerm))
  .reducer(new DictionaryReducer()).submit();
 
if (result.size() == 0) {
  System.out.println(&quot;No translation found for &apos;&quot;
      + search + &quot;&apos;&quot;);
} else {
  System.out.println(&quot;Translations found for &apos;&quot;
      + search + &quot;&apos;:&quot;);
  for (Entry&lt;..&gt; entry : result.entrySet()) {
    System.out.println(entry.getKey() + &quot;: &quot;
        + entry.getValue());
  }
}
</code></pre>
<p>So we submit a MapReduce search for searchTerm by instantiating a Mapper and a Reducer and configuring the MapReduceTask. Finally we call submit() to use the blocking version of the task execution and inspect the results after it returns.<br>
To start the example we need two commandline options: the first one tells the program to wait for x nodes to come up (for a simple test this should be 1) and the second one is the searchTerm to search for.</p>
<p>As seen above, it&apos;s easy to create and execute MapReduce tasks using CastMapR on Hazelcast 3.x. The full sourcecode mostly contains the code to start up Hazelcast nodes, retrieve translation files from the Internet and fill up the MultiMap. In a real deployment, Hazelcast instances are already running and the data is already available in the Datagrid, so this code is not needed.</p>
<p>For further features and questions just have a look at the Github project [4].</p>
<p>[1] <a href="http://www.javacodegeeks.com/2013/08/writing-a-hadoop-mapreduce-task-in-java.html?ref=blog.sourceprojects.org">http://www.javacodegeeks.com/2013/08/writing-a-hadoop-mapreduce-task-in-java.html</a><br>
[2] <a href="https://code.google.com/p/guava-libraries/wiki/NewCollectionTypesExplained?ref=blog.sourceprojects.org#Multimap">https://code.google.com/p/guava-libraries/wiki/NewCollectionTypesExplained#Multimap</a><br>
[3] <a href="https://www.sourceprojects.org/default/files/dictionary.zip?ref=blog.sourceprojects.org">https://www.sourceprojects.org/default/files/dictionary.zip</a><br>
[4] <a href="https://github.com/noctarius/castmapr?ref=blog.sourceprojects.org">https://github.com/noctarius/castmapr</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[java-forum.org - Riding a dead horse - Or how fast you can kill a community]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Today I want to coder a bit of a different topic so please read the following text and I appreciate ideas and opinions.</p>
<p>Not yet 60 hours ago a bad time for the german Java community started. A new owner took over the java-forum.org - the best reputated and</p>]]></description><link>http://blog.sourceprojects.org/2013/08/01/java-forum-org-riding-a-dead-horse/</link><guid isPermaLink="false">599412426b066a0afbf47ff3</guid><category><![CDATA[java]]></category><category><![CDATA[developer]]></category><category><![CDATA[engineer]]></category><category><![CDATA[forenbrands]]></category><category><![CDATA[forum]]></category><category><![CDATA[java-forum.org]]></category><category><![CDATA[moderator]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Thu, 01 Aug 2013 06:55:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Today I want to cover a bit of a different topic, so please read the following text; I appreciate ideas and opinions.</p>
<p>Not even 60 hours ago a bad time for the German Java community started. A new owner took over java-forum.org - the most reputable and most valued Java forum / community in Germany and the German-speaking areas (and partly beyond these borders too).</p>
<p>In general there is no problem with changing the ownership; this often happens without interference or problems - but not in this case.</p>
<p>Shortly (just a few hours) after his first message the new owner paved the forum with lots of different kinds of commercials / advertisements. It wasn&apos;t just what you would expect to see in a Java forum, like some really great Java products; it was more like automobile commercials (mobile.de) or even pornographic stuff (like hot blonde Polish girls).</p>
<p>As was to be expected, most people were not as happy as he may have thought, a big shitstorm started and he totally lost it. In less than 60 hours he deleted / banned nearly all longtime, strongly experienced users and moderators, took away the rights of the old co-admin (without any comment) and finally (this night) temporarily closed the forum.</p>
<p>According to his current profile he calls himself a &quot;professional forum hoster&quot; and runs lots of forums on different topics. Most of them are paved with commercials, and some even forbid entering the forum with an adblocker. He&apos;s a typical forum grabber and has absolutely no experience in Java or any technical topic.<br>
And that&apos;s his problem: until now he never took over a highly technical forum, so he could phrase all his standard sentences like &quot;commercials are needed to cover the costs&quot; and people may have believed him. That was different with us. Many of us run servers of our own, many have big communities, and most of us know the prices of hosted servers and what advertising revenue a big, active community with good reputation can bring. So most of us just didn&apos;t believe him.</p>
<p>Many of the deleted / banned people have for now found a new / temporary home at www.byte-welt.de, but there&apos;s still a bad taste to all of this.</p>
<p>We&apos;ll keep working on a solution and see what happens next.</p>
<p>PS: If you want to know who is the new owner: <a href="https://www.xing.com/companies/forenbrands?ref=blog.sourceprojects.org">https://www.xing.com/companies/forenbrands</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[CastMapR - The Hazelcast 3 MapReduce Framework]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>A few days ago while porting our current system to Hazelcast 3 Snapshots I finally decided to start a MapReduce implementation for Hazelcast which I was missing for a long time.</p>
<p>Whereas there always was a way to query IMaps in a distributed manner using Predicates I missed a solution</p>]]></description><link>http://blog.sourceprojects.org/2013/07/06/castmapr/</link><guid isPermaLink="false">5994116d6b066a0afbf47fe6</guid><category><![CDATA[hazelcast]]></category><category><![CDATA[mapreduce]]></category><category><![CDATA[castmapr]]></category><category><![CDATA[datagrid]]></category><category><![CDATA[distributed]]></category><category><![CDATA[hadoop]]></category><category><![CDATA[in-memory]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Sat, 06 Jul 2013 18:25:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>A few days ago while porting our current system to Hazelcast 3 Snapshots I finally decided to start a MapReduce implementation for Hazelcast which I was missing for a long time.</p>
<p>Whereas there always was a way to query IMaps in a distributed manner using Predicates, I missed a solution for doing calculations / conversions on remote cluster nodes, as is possible with MapReduce - so I started one.</p>
<p>Thinking about the API to use, I came to the conclusion to make it very similar to that of Red Hat&apos;s Infinispan, which is widely known and known to work.</p>
<p>Currently the implementation supports IMap but will be extended to IList, ISet, MultiMap and eventually the new NestedMaps (when they arrive in the wild).</p>
<p>The project itself is (as always) available licensed using Apache License 2.0 and is hosted on GitHub <a href="https://github.com/noctarius/castmapr?ref=blog.sourceprojects.org">https://github.com/noctarius/castmapr</a></p>
<p>But now enough words - let&apos;s have a look at what a simple example looks like:</p>
<pre><code>public class CastMapRDemo
{
  public static void main(String[] args)
  {
    HazelcastInstance hazelcast = Hazelcast
        .newHazelcastInstance();
    IMap map = hazelcast.getMap( &quot;PlayerLogins&quot; );
     
    MapReduceTaskFactory factory = MapReduceTaskFactory
        .newInstance( hazelcast );
     
    MapReduceTask task = factory.build( map );
     
    Map loginCounts = task.mapper( new PlayerMapper() )
        .reducer( new LoginReducer() ).submit();
          
    for ( Entry entry : loginCounts.entrySet() )
    {
      System.out.println( &quot;Player &quot; + entry.getKey() 
        + &quot; has &quot; + entry.getValue() + &quot; logins.&quot; );
    } 
  }
}
 
public class PlayerMapper extends Mapper
{
  public void map( Integer playerId, Long timestamp,
                   Collector collector )
  {
    // We are interested in the count of player logins so
    // we discard the timestamp information
    collector.emit( playerId, 1 );
  }
}
 
public class LoginReducer implements DistributableReducer
{
  public Integer reduce( Integer playerId, Iterator values )
  {
    int count = 0;
    while ( values.hasNext() )
    {
      values.next();
      count++;
    }
    return count;
  }
}
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Property Accessors - 2]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h3 id="heresashortupdate">Here&apos;s a short update:</h3>
<p>At the moment there&apos;s not much to tell, but I&apos;ve found a lot of interest in properties support for Java and I&apos;m glad to see people like the general idea.</p>
<p>The first discussion started mostly about how the</p>]]></description><link>http://blog.sourceprojects.org/2013/01/07/property-accessors-2/</link><guid isPermaLink="false">599410076b066a0afbf47fc1</guid><category><![CDATA[java]]></category><category><![CDATA[access]]></category><category><![CDATA[accessor]]></category><category><![CDATA[principle]]></category><category><![CDATA[property]]></category><category><![CDATA[unified]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Mon, 07 Jan 2013 20:09:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="heresashortupdate">Here&apos;s a short update:</h3>
<p>At the moment there&apos;s not much to tell, but I&apos;ve found a lot of interest in properties support for Java and I&apos;m glad to see people like the general idea.</p>
<p>The first discussion revolved mostly around what the syntax should look like, and why define a new one if there are already languages with properties syntax out there.</p>
<p>I collected some of the currently available syntax examples and there seems to be no clear, short, standard syntax out there. Everyone defined something of their own, and the most impressive realization is that the syntax of JavaScript is more similar to C# than to ActionScript, although those two languages are related (ECMAScript).</p>
<p>Here are some examples I collected:</p>
<pre><code>AS3:
public function get color():int { ... }
public function set color(value:int):void { ... }

C#:
private int color;
public int color {
  get {
    ...
  }
  set {
    ...
  }
}

Ruby:
attr_accessor :color

C++:
Well no real kind of property but could be emulated using operator overloading

MS C++:
_declspec(property(get = getprop, put = putprop)) int the_prop;

D:
private int m_color;
@property public int color() { ... }
@property public int color(int value) { ... }

Delphi had some way but I don&apos;t remember it (thanks to god I finally forgot about Delphi).

JS:
color: {
  get: function() { ... },
  set: function(value) { ... }
}

Objective-C:
@property int *color;
@synthesize color;

PHP:
function __get($property) { ... }
function __set($property, $value) { ... }
</code></pre>
<h3 id="sowhatdoyouguysthinkwouldbeanicesyntaxtobeimplemented">So what do you guys think would be a nice syntax to be implemented?</h3>
<p>The second point of the discussion is about introducing new keywords, and the explanation of how hard it was to add the enum keyword in Java 5.</p>
<p>So I suggested to extend the field syntax of public / protected members the same way my proposal did with the new property keyword. For auto-generating Getters / Setters a solution still needs to be found. One possibility could be an annotation @java.lang.Property (see below) or similar, but that would mean old code cannot automatically benefit because it&apos;s not annotated.</p>
<p>As for the annotation itself, it could look similar to the following example:</p>
<pre><code>public @interface Property {
  boolean writable() default true;
  boolean readable() default true;
}
</code></pre>
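<p>Applied to a field, the annotation might be used like this (a hypothetical sketch; the accessors would be derived by the compiler or an annotation processor):</p>
<pre><code>public class Egg {
  // readable stays at its default of true, so only a
  // Getter would be generated for this field
  @Property(writable = false)
  private int color;
}
</code></pre>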
<p>So far about the state of the discussion.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Property Accessors - a short introduction]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>As I stated yesterday I started the discussion about adding Property Accessors to Java. Today I want to show some more about my thoughts.</p>
<p>There are at least three questions:</p>
<ul>
<li>What are Property Accessors?</li>
<li>Why Java needs them?</li>
<li>What could they look like?</li>
</ul>
<h3 id="whatarepropertyaccessors">What are Property Accessors?</h3>
<p>So starting with</p>]]></description><link>http://blog.sourceprojects.org/2013/01/05/property-accessors-a-short-introduction/</link><guid isPermaLink="false">59940e866b066a0afbf47fc0</guid><category><![CDATA[access]]></category><category><![CDATA[accessor]]></category><category><![CDATA[java]]></category><category><![CDATA[property]]></category><category><![CDATA[properties]]></category><category><![CDATA[unified]]></category><dc:creator><![CDATA[Christoph Engelbert]]></dc:creator><pubDate>Sat, 05 Jan 2013 14:22:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>As I stated yesterday I started the discussion about adding Property Accessors to Java. Today I want to show some more about my thoughts.</p>
<p>There are at least three questions:</p>
<ul>
<li>What are Property Accessors?</li>
<li>Why Java needs them?</li>
<li>What could they look like?</li>
</ul>
<h3 id="whatarepropertyaccessors">What are Property Accessors?</h3>
<p>Starting with the first question, &#x201C;What are Property Accessors?&#x201D;, we&apos;ll begin with a quite easy explanation:<br>
First of all, Property Accessors are a bit of syntactic sugar to avoid writing Getters and Setters manually.<br>
Secondly, they bring Java a bit closer to the &#x201C;Uniform Access Principle&#x201D;. The UAP was defined by Bertrand Meyer and states that there is no difference between access to fields, attributes or methods. There is a quite nice Wikipedia article [1] describing the main aspects of the UAP.<br>
And last but not least, they can give legacy code a chance to be made safer without touching the accessing codebase; I&apos;ll show later what this means.</p>
<p>Now let&apos;s see a basic pseudocode example of the UAP in work:</p>
<pre><code>class Egg {
  property int color
}
</code></pre>
<p>What we see is a really simple class called Egg defining a property color of type int.<br>
According to the UAP we can read / write that property the same way as a field, but also as a method:</p>
<pre><code>Egg egg = new Egg
egg.color = 0xFF00FF
print egg.color
egg.color( 0x0000FF )
print egg.color()
</code></pre>
<p>Both ways are totally equivalent and exchangeable, so you see no difference between field and method access.</p>
<p>Adding the UAP to Java would give you a third possibility - the JavaBeans standard - to stay backwards compatible with existing libraries not capable of accessing properties.</p>
<pre><code>Egg egg = new Egg
egg.setColor( 0x00FFFF )
print egg.getColor()
</code></pre>
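<p>For comparison, getting the same JavaBeans access in today&apos;s Java means writing the accessors by hand - exactly the boilerplate the proposal would generate for us:</p>
<pre><code>public class Egg {
  private int color;

  public int getColor() { return color; }
  public void setColor(int color) { this.color = color; }
}
</code></pre>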
<p>I guess that should work as a short introduction in what Property Accessors mean to me.</p>
<h3 id="whyjavaneedsthem">Why Java needs them?</h3>
<p>This brings us to the next question &#x201C;Why Java needs them?&#x201D;.</p>
<p>That one is a bit hard to answer since I guess there will be multiple reasons for different kinds of people.</p>
<p>The first reason we already know: I&apos;m pretty sure nobody would deny that the UAP is a good pattern to write expressive code.</p>
<p>The second reason I mentioned earlier, too: a legacy codebase that should be made stronger against attacks or mistakes. So let&apos;s have a deeper look into what this means.</p>
<p>Imagine a world where there is legacy code - I know this does not exist, but still think about it - and the legacy codebase is ugly and uses &#x201C;patterns&#x201D; like direct field access. Isn&apos;t that an unlikely thought? But to make it even worse: when the field was created, the creator intended its value to be in the range 0 to 100, but as &#x201C;speed&#x201D; was important he decided to dismiss safety for speed and declined to use encapsulation. The codebase was made public and a lot of other people (even external companies) made use of this field.<br>
One day a bug was filed describing that the internal behavior of the surrounding class was wrong. After hours of analyzing the code the only possible explanation was: someone sets the field to a value outside of the legal range.</p>
<p>How can we prevent such a problem with a lot of adopters out there?</p>
<ul>
<li>We test the range right before every use of the field&apos;s value</li>
<li>We encapsulate it, providing a Getter and Setter and break backward compatibility.</li>
</ul>
<p>In most cases the second option is not a real option (maybe with a major version change) and the first option addresses the problem at the wrong position: by the time the check runs, the field already holds a wrong value, and you&apos;ll never find out who set it.</p>
<p>Using Property Accessors you make the field a property and override the standard Setter with a checking variant. The access looks the same as before, but you get the additional behavior of a Setter.</p>
<p>And that leads us to the third question: &#x201C;What could they look like?&#x201D;</p>
<h3 id="whatcouldtheylooklike">What could they look like?</h3>
<p>Before I go on I just want to mention that the following examples are drafts and the syntax is subject to discussion (especially the array index accessors, which I do not really like the way they are currently written).</p>
<p>So let&apos;s start with our Egg as an example, similar to yesterday&apos;s post, of what properties will be turned into by the compiler.</p>
<pre><code>public Class Egg {
  property int color;
}
</code></pre>
<p>The compiler would now infer the accessors just as described above:</p>
<pre><code>public Class Egg {
  private int color;

  public void color(int color) { this.color = color; }
  public int color() { return this.color; }
  public void setColor(int color) -&gt; Egg::color;
  public int getColor() -&gt; Egg::color;
}
</code></pre>
<p>So the compiler generates the missing methods so that old code can use the class as before.</p>
<p>To come back to our earlier problem, the old &#x201C;public field&#x201D;: we could change public to property and override the standard Setter like this:</p>
<pre><code>public Class Egg {
  property int color {
    (value) -&gt; {
      if (value &lt; 0 || value &gt; 100)
        throw new IllegalArgumentException(&quot;value out of range&quot;);
      this.color = value;
    }
  }
}
</code></pre>
<p>This example uses Java 8 Lambda syntax to define a new Setter that checks the range of the field. Combined with the UAP we can use it like the field access before:</p>
<pre><code>Egg egg = new Egg();
egg.color = 80;  // This one will work as expected
egg.color = 101; // This one throws an exception
</code></pre>
<p>But this is not the only good thing Property Accessors could give us; there are plenty of options out there.<br>
Have you ever been annoyed by copying arrays to prevent unexpected external changes?</p>
<pre><code>public class ArrayStore {
  private final int[] store;

  public ArrayStore(int[] store) {
    this.store = Arrays.copyOf(store, store.length);
  }

  public int[] getStore() { return Arrays.copyOf(store, store.length); }
}
</code></pre>
<p>What if we could make the returned array some kind of &#x201C;write-protected&#x201D;?</p>
<pre><code>public class ArrayStore {
  property final int[] store {
    (index, value) -&gt; {
      throw new IllegalAccessViolationException(&quot;not allowed&quot;);
    }
  }

  public ArrayStore(int[] store) {
    this.store = Arrays.copyOf(store, store.length);
  }
}
</code></pre>
<p>This would prevent external code from altering the returned array but give you full access internally in the declaring class. You could also add an access check using a SecurityManager or whatever you want.</p>
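<p>Until something like this exists, the closest plain-Java workaround I know of is to hand out an unmodifiable view instead of the raw array (a sketch using the collections API; the ListStore name is made up):</p>
<pre><code>import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ListStore {
  private final List&lt;Integer&gt; store;

  public ListStore(Integer[] values) {
    // Defensive copy once, then wrap it read-only
    this.store = Collections.unmodifiableList(
        Arrays.asList(values.clone()));
  }

  // Mutation attempts on the returned view throw
  // UnsupportedOperationException instead of silently
  // changing internal state
  public List&lt;Integer&gt; getStore() {
    return store;
  }
}
</code></pre>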
<p>As mentioned before, there are plenty of options on how to use Property Accessors.</p>
<p>For now this should be a good overview of the vision behind the Property Accessor proposal and I&apos;ll be glad to answer any of your questions or see you joining the discussion on java.net [2].</p>
<p>The next steps would be to define a clear document with a lot more examples and to bring the idea into the form of a JSR or JEP, possibly building a prototype.<br>
For this step I&apos;m looking for highly engaged people who love this idea and want to help move it forward into a possible candidate for a language addition to Java.</p>
<p>So if you want to help, or maybe already take part in the definition of a JSR / JEP and like the idea, it would be great to contact me on G+ [3] or by mail [4].</p>
<p>Links<br>
[1] <a href="http://en.wikipedia.org/wiki/Uniform_access_principle?ref=blog.sourceprojects.org">http://en.wikipedia.org/wiki/Uniform_access_principle</a><br>
[2] <a href="http://www.java.net/forum/topic/jcp/general-jsr-discussion/properties-proposal?ref=blog.sourceprojects.org">http://www.java.net/forum/topic/jcp/general-jsr-discussion/properties-proposal</a><br>
[3] <a href="https://plus.google.com/u/0/114622570438626215811?ref=blog.sourceprojects.org">https://plus.google.com/u/0/114622570438626215811</a><br>
[4] noctarius at apache org</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>